[CELT-dev] llcon software using CELT
v.fischer at nt.tu-darmstadt.de
Thu Aug 20 22:48:45 PDT 2009
Gregory Maxwell schrieb:
> On Fri, Aug 21, 2009 at 1:12 AM, Gregory Maxwell<gmaxwell at gmail.com> wrote:
>> On Fri, Aug 21, 2009 at 1:07 AM, Volker<v.fischer at nt.tu-darmstadt.de> wrote:
>>> Gregory Maxwell schrieb:
>>>> Hm? If you call celt_decode with a null pointer in place of the data
>>>> it should fade out the audio after consecutive losses.
>>>> The relevant code is around line 1286 in celt.c:
>>>> for (i=0;i<C*N;i++)
>>>> freq[i] = ADD32(EPSILON, MULT16_32_Q15(QCONST16(.9f,15),freq[i]));
>>> If I interpret your code correctly, you use an exponential decay for the
>>> fade out. In previous software projects I did something similar and got
>>> strange effects when I applied a multiplication to very small floating point
>>> values. I guess the same happens here, too. You should introduce a bound for
>>> the floating point values. If the signal is below the bound, set the
>>> floating-point value to zero and the problems should disappear (I guess ;-)
>> The addition of EPSILON prevents the creation of denormals and is more
>> efficient than the compare and branch required for zeroizing.
> This makes me think however.. are you applying any gain control to the
> output which might be making a quiet tone loud?
I finally understand the code you posted above :-). I was a bit confused
by the ADD32 operation, but with your explanation it makes sense now.
Putting your explanation as a comment in the code would help.
In llcon, I do not apply any gain to the decoded audio signal. The mono
signal is simply copied into both stereo channels of the sound card and
played back.
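For illustration, the mono-to-stereo copy described above might look like the following hypothetical sketch (`mono_to_stereo` and the 16-bit interleaved format are assumptions, not llcon's actual code):

```c
/* Duplicate a decoded mono frame into an interleaved stereo buffer,
   with no gain applied: each mono sample is written unchanged to the
   left and right slots. `stereo` must hold 2*frames samples. */
static void mono_to_stereo(const short *mono, short *stereo, int frames)
{
    for (int i = 0; i < frames; i++) {
        stereo[2 * i]     = mono[i];  /* left channel  */
        stereo[2 * i + 1] = mono[i];  /* right channel */
    }
}
```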