[Flac-dev] Re: 0.9 problems
Matt Zimmerman
mdz at debian.org
Sat May 19 15:42:22 PDT 2001
On Sat, May 19, 2001 at 08:03:54PM +0000, Christian Weisgerber wrote:
> Matt Zimmerman <mdz at debian.org> wrote:
>
> > 0.9. As I said, I was using an 8-bit sample,
>
> Ah, that didn't quite register with me. I'm using a CD-style
> 44.1kHz/stereo/16-bit test file.
I repeated my test with a 44.1kHz/stereo/16-bit file. I get a floating point
exception at fixed.c:84 when encoding. Decoding works fine for me.
The floating point exception occurs here:
84 residual_bits_per_sample[0] = (real)((data_len > 0) ? log(M_LN2 * (real)total_error_0 / (real) data_len) / M_LN2 : 0.0);
At the point of failure, the argument to log() is zero (total_error_0 is 0), so
log() returns -inf. On i386 the subsequent division of -inf by M_LN2 goes
through silently, while on alpha it raises a floating point exception. Here's a
small test program:
#include <math.h>
#include <stdio.h>

int main(void)
{
    /* log(0) returns -inf; the division by M_LN2 is what traps on alpha */
    double foo = log((double)0);
    printf("%e\n", foo);
    printf("%e\n", foo / M_LN2);
    return 0;
}
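(Something like `cc logtest.c -lm' should be enough to build it -- the filename
is arbitrary; on alpha the division foo / M_LN2 in the second printf call
should be where the exception fires.)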
I think this could be fixed by changing the (data_len > 0) test to
(data_len > 0 && total_error_X > 0).
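For the order-0 case quoted above, a quick stand-alone sanity check of that
guard might look like this (the names below just mimic the ones in fixed.c and
real is replaced with double; this is a sketch, not the actual encoder code):

#include <math.h>
#include <stdio.h>

/* Mimics the expression at fixed.c:84 with the extra total_error > 0
   guard: when the accumulated error is zero we return 0.0 directly,
   so log() never sees a zero argument and nothing divides -inf. */
static double residual_bits(unsigned total_error, unsigned data_len)
{
    return (data_len > 0 && total_error > 0)
        ? log(M_LN2 * (double)total_error / (double)data_len) / M_LN2
        : 0.0;
}

int main(void)
{
    printf("%e\n", residual_bits(0, 4096));    /* 0.0, no trap */
    printf("%e\n", residual_bits(4096, 4096)); /* a normal finite value */
    return 0;
}

The other total_error_* cases in the same function would presumably want the
same guard.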
--
- mdz