[Theora-dev] 16 bits, cast on idct function
Felipe Portavales Goldstein
portavales at gmail.com
Tue May 30 23:07:03 PDT 2006
Hi all,
Just a stupid question
The IDctSlow function in idct.c has this line:
ip[0] = (ogg_int16_t)((_Gd + _Cd ) >> 0);
ip[0], _Gd and _Cd are all of type ogg_int32_t.
My question is:
Can the result of (_Gd + _Cd) be a number that needs more than 16 bits?
(Yes, it can in principle, since they are int32, but maybe the algorithm
guarantees something about the range... I don't know...)
If it can, the cast to (ogg_int16_t) will truncate the number to its 16 least
significant bits and produce a wrong result.
ip[0] is 32 bits, so why truncate to 16 bits?
But I'm really confused by the >> 0.
Can this shift right by zero actually do something, or did someone just forget to delete it?
Thanks
-- Felipe