[CELT-dev] adding celt support to netjack some questions.
jean-marc.valin at usherbrooke.ca
Mon Nov 24 16:49:49 PST 2008
Just a few minor clarifications/corrections...
Gregory Maxwell wrote:
>> I currently don't require robustness against packet loss,
>> because the sync code of netjack does not handle packet loss very
>> gracefully. How much bandwidth is wasted for this feature?
> It's inherent in the format and can't be deactivated. It costs only a
> very small amount. You want it anyway; more below.
Basically, the robustness to packet loss comes from refraining from
using excessive inter-frame prediction. The cost is probably on the
order of 1% in rate and, as Greg said, it can't be removed.
> The CELT library officially supports integer sample rates between
> 32,000 Hz and 64,000 Hz. If your JACK is running at a higher rate you
> will need to resample externally to CELT.
Actually, support for > 64 kHz would be relatively easy to do, although
there would be no point in doing it other than convenience since the
encoding would still stop at 20 kHz.
>> The signal data is obtained from individual JACK ports.
>> I would need one additional step to make the frames interleaved.
>> How much bandwidth would I save if I used one encoder for
>> all channels, instead of n encoders for n channels,
>> considering the signals are not really correlated?
> You should run N encoders and pack their output into a single packet. You
> will save considerable bandwidth from the packing (at typical CELT
> latencies, IP+UDP+etc. overhead is significant).
Yes, N encoders is the way to go. Don't use stereo encoding for
unrelated signals (i.e. if you don't want cross-talk).
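As a rough illustration of the packing Greg suggests, here is a minimal sketch in C that concatenates N per-channel compressed frames into one payload, each prefixed by a one-byte length. The length-prefix scheme is an assumption of this sketch (CELT does not define a multiplexing format); at typical CELT frame sizes a compressed frame fits comfortably under 256 bytes.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical framing: pack N per-channel CELT frames into one UDP
 * payload, each preceded by a 1-byte length.  This layout is an
 * assumption of this sketch, not part of the CELT bitstream.
 * Returns the total packet size, or 0 if the payload would not fit. */
static size_t pack_channels(const uint8_t *frames[], const size_t lens[],
                            int n_channels, uint8_t *out, size_t out_cap)
{
    size_t pos = 0;
    for (int c = 0; c < n_channels; c++) {
        if (lens[c] > 255 || pos + 1 + lens[c] > out_cap)
            return 0;                   /* frame too large or no room */
        out[pos++] = (uint8_t)lens[c];  /* 1-byte length prefix */
        memcpy(out + pos, frames[c], lens[c]);
        pos += lens[c];
    }
    return pos;
}
```

Packing, say, 8 channels into one datagram means the roughly 28 bytes of IP+UDP header are paid once per frame period instead of eight times, which is where most of the saving comes from at short frames.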
> The CELT encoder currently supports stereo and mono modes. The stereo
> mode is incomplete. In my own app I have a separate add stereo add
> mono mode. I think at this point it's overkill.
Could you translate that to English? :-)
> Yes. This is a property of the robustness to packet loss. It will take
> 'several' packets to synchronize and produce decent quality. Note:
> CELT never resynchronizes perfectly, so if you compare two decoders, one
> which started before the encoder and one which started after, their
> output will probably never be identical. The difference should not be
> perceptible in any case.
Just to clarify, the re-synchronisation is exponential, so after
a few frames the difference is really negligible. Before long,
the error becomes even smaller than the numerical rounding error.
Just be aware that there may be glitches at the point where you start
decoding from the middle of the stream.
> CELT_GET_LOOKAHEAD should tell you the additional latency beyond a
> single frame, but CELT delay is always 1.5x the frame_size today; I do
> not expect this to change.
Not only is it likely to change, but it never was the case. The delay is
often 1.5x the frame size, but not for all frame sizes. For instance, with
512-sample frames the look-ahead is 128 samples, so the total delay is
1.25x the frame size.
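Jean-Marc's arithmetic can be checked with a trivial helper: the total codec delay is one frame plus the look-ahead. The 128-sample figure used below is the one quoted above for 512-sample frames; in a real application you would query CELT_GET_LOOKAHEAD at run time rather than hard-code it.

```c
#include <assert.h>

/* Total algorithmic delay in samples: one full frame of buffering
 * plus the codec's look-ahead.  The look-ahead value should come from
 * CELT_GET_LOOKAHEAD; 128 is just the figure quoted in this mail for
 * 512-sample frames. */
static int total_delay(int frame_size, int lookahead)
{
    return frame_size + lookahead;
}
```

For 512-sample frames this gives 512 + 128 = 640 samples, i.e. 1.25x the frame size rather than 1.5x.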
> You can select any even frame size, but powers of two are recommended
> (sizes with large prime factors have reduced quality right now).
> Since you should probably make the CELT frame size either equal to
> or an integer factor of the JACK frame size in order to reduce latency,
> sticking to power-of-two CELT frames should be acceptable for your
Actually, I think you're still OK if you have a factor of 3 or 5 in the
frame size, but large prime factors are indeed bad -- both for quality
and for performance.
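A quick way to test whether a candidate frame size avoids large prime factors (a "smooth" size, in FFT terms) is to divide out 2, 3 and 5 and see whether anything is left. This helper is a sketch for illustration, not part of the CELT API:

```c
#include <assert.h>

/* Returns 1 if n's only prime factors are 2, 3 and 5, which is the
 * "smooth" property discussed above; sizes with large prime factors
 * are bad for both quality and performance. */
static int has_only_235_factors(int n)
{
    if (n <= 0)
        return 0;
    while (n % 2 == 0) n /= 2;
    while (n % 3 == 0) n /= 3;
    while (n % 5 == 0) n /= 5;
    return n == 1;  /* nothing but 2s, 3s and 5s were divided out */
}
```

So 512 (2^9) and 480 (2^5 * 3 * 5) pass, while 514 (2 * 257) does not.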