wessel at lubberhuizen.nl
Thu Oct 15 08:05:25 PDT 2009
I'm trying to compile with the settings below, for a target that has no
floating point support whatsoever.
Compilation fails because there are constants #defined in arch.h that
appear to be float, e.g.
#define DB_SCALING 256.f
#define DB_SCALING_1 (1.f/256.f)
DB_SCALING_1 appears not to be used. Is it safe to remove the .f?
Jean-Marc Valin wrote:
> Note that for fixed-point to work, you need to define the following:
> #define FIXED_POINT
> #define DOUBLE_PRECISION
> #define MIXED_PRECISION
> otherwise you'll get strange results.
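Collected in one place, the three defines Jean-Marc lists might look like this in a build's config header (a sketch; the header name and where it is included depend on the build system):

```c
/* config.h (sketch) -- all three are required for a working
   fixed-point CELT build, per Jean-Marc's note above */
#define FIXED_POINT
#define DOUBLE_PRECISION
#define MIXED_PRECISION
```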
> Quoting Gregory Maxwell <gmaxwell at gmail.com>:
>> On Tue, Sep 15, 2009 at 5:12 AM, Elston Sa <jose at rebaca.com> wrote:
>>> I have built celt with the FIXED_POINT option (latest 0.6.1 as well as from the
>>> git repo) on windows. However I am not getting a valid output (all samples
>>> are saturated) when I try to decode with this version. The input file was
>>> encoded with the same fixed point version. Does fixed point version work at
>> Yes, fixed point is tested in parallel with floating point. Since you
>> say you're building on windows, is it possible that you haven't set up
>> the correct defines for fixed-point mode? Or is this a cygwin build
>> configured through autotools?
>>> Following are the command line settings:
>>> Samplerate: 48000
>>> Channels: 2
>>> Framesize: 256
>>> Bytesperpacket: 1024
>>> Complexity: 10
>> You're asking CELT for 1.536 Mbit/sec output; it's also possible that
>> you're triggering a fixed-point-specific bug. I don't do regular
>> testing at rates that high. Does it work at more typical rates such
>> as 128 bytes per packet?
>> celt-dev mailing list
>> celt-dev at xiph.org