[CELT-dev] adding celt support to netjack some questions.

torbenh at gmx.de
Tue Nov 25 18:11:33 PST 2008


On Tue, Nov 25, 2008 at 07:14:05AM -0500, Jean-Marc Valin wrote:

> Why not use a jitter buffer along with a smaller audio buffer. If you
> look at the celtclient code (in the main repo under tools/), I use a
> jitter buffer with just 256 sample audio buffer (could be made even
> smaller). That would also help you handle lost/reordered/late packets
> correctly. Also, it means that you can handle any amount of jitter
> (adaptively) without requiring a large soundcard buffer.

OK, thanks for the lesson on IRC. After sleeping on it, I am fully
convinced :D

I hope you'll forgive that, after a sleepless night, I needed a few
hours to understand the problem. I am new to this field.

My plan is to add a jitter buffer to the netjack components, not to
alsa_out, because alsa_out has several other use cases that do not
involve reordering, only relatively small timing jitter caused by the
CPU usage of the programs in the JACK graph.

Let's leave alsa_out out of the picture for now.

Then we only have this:

soundcard irq
     |
(CPU usage jitter of jack apps)
     |
jack_netsource
     |
(packet over internet)
     |
-> jitter_buffer
     |
remote jackd
(CPU usage jitter of whole remote jack graph)
     |
(reply packet over internet)
     |
-> jitter_buffer
     |
jack_netsource, at a later cycle
(soundcard irq*N + CPU usage jitter)


The CPU jitter on the local machine can be compensated for exactly with
timestamps.
The remote CPU jitter is measured in terms of the remote clock, which I
would like to measure and drift-compensate on the local end.
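
To make this concrete, here is a rough sketch (in C, with invented
names; none of this is existing netjack code) of what I mean by
drift compensation: each reply packet would carry the remote frame
time at which it was produced, and we low-pass filter the offset
against our local frame time to get a drift estimate.

#include <stdint.h>

/* All names here are made up for illustration. */
typedef struct {
    double offset;   /* smoothed remote-minus-local offset in frames */
    double drift;    /* smoothed change of that offset per cycle     */
    int    primed;   /* first sample seen?                           */
} drift_est_t;

static void drift_est_update(drift_est_t *d,
                             int64_t local_frame_time,
                             int64_t remote_frame_time)
{
    const double alpha = 0.01;  /* smoothing factor, needs tuning */
    double raw = (double)(remote_frame_time - local_frame_time);

    if (!d->primed) {
        d->offset = raw;
        d->drift  = 0.0;
        d->primed = 1;
        return;
    }

    /* exponential moving average of the offset, and of its slope */
    double new_offset = (1.0 - alpha) * d->offset + alpha * raw;
    d->drift  = (1.0 - alpha) * d->drift + alpha * (new_offset - d->offset);
    d->offset = new_offset;
}

The drift value would then tell the local end how fast the remote
clock is running away from ours, independent of per-cycle CPU jitter.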

I have now understood what all this split approximation is about.
The local end is almost easy: I send out a packet, and I ask the
jitter buffer for the packet I am interested in.
(This is in fact already implemented; I need to look over it once more,
 because that code was mainly intended to reassemble fragmented packets,
 and I am not sure whether it isn't just taking shortcuts for the
 non-fragmented case.)

But basically, the packet of interest is either there or not.
I call packets that are not there network xruns.

The program needs to decide whether it wants to increase the allowed
roundtrip latency based on the xrun count.

However, changing the roundtrip latency is quite expensive, due to
jack-transport constraints. But it's only expensive in one direction.
Urgh... blocked thoughts. I need to draw this out.
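
Something like this is what I have in mind for the xrun-driven
decision (purely illustrative C, thresholds and names invented):

/* A "network xrun" is a cycle where the packet we asked the jitter
 * buffer for was not there.  If that happens too often inside an
 * observation window, allow one more period of roundtrip latency.
 * Shrinking again would be done far more conservatively, since
 * changing the latency is the expensive part. */

#define XRUN_WINDOW      1000   /* cycles to observe                     */
#define XRUN_GROW_LIMIT  5      /* xruns per window that trigger growth  */

static int xrun_count    = 0;
static int cycle_count   = 0;
static int extra_periods = 0;   /* additional roundtrip latency, in periods */

static void on_cycle(int packet_was_there)
{
    if (!packet_was_there)
        xrun_count++;

    if (++cycle_count >= XRUN_WINDOW) {
        if (xrun_count > XRUN_GROW_LIMIT)
            extra_periods++;    /* expensive, but only done rarely */
        xrun_count  = 0;
        cycle_count = 0;
    }
}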

The second problem is deciding when a packet is too late or lost
on the remote end (a rough sketch of that decision follows the list
below).

This decision is affected by:
- the CPU time needed to compute the response ( < T_maxCycle, ~ T_lastCycle )
- the allowed roundtrip latency (can be assumed constant)
- the net link, upstream
- the net link, downstream
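
As a very rough sketch (again invented names, not netjack code), the
remote end could decide like this whether a request packet is still
worth processing, given the roundtrip budget:

#include <stdint.h>

/* Returns nonzero if computing the reply and sending it back would
 * blow the allowed roundtrip budget, i.e. the packet should be
 * treated as lost.  The expected times are assumed to come from the
 * measurements discussed above. */
static int packet_is_too_late(uint64_t now_usecs,
                              uint64_t packet_sent_usecs,
                              uint64_t expected_compute_usecs,
                              uint64_t expected_return_trip_usecs,
                              uint64_t allowed_roundtrip_usecs)
{
    /* time already spent getting here */
    uint64_t spent = now_usecs - packet_sent_usecs;

    return spent + expected_compute_usecs + expected_return_trip_usecs
           > allowed_roundtrip_usecs;
}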

OK... so I need to quantify the net link: its latency and its jitter,
where jitter is more than just one number.
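
For a first attempt, I would keep a few running numbers per link,
e.g. mean, spread and worst case of the per-packet transit time
(only a sketch, all names invented):

#include <math.h>

typedef struct {
    long   n;
    double mean;   /* mean transit time                     */
    double m2;     /* sum of squared deviations (Welford)   */
    double peak;   /* worst transit time seen so far        */
} link_stats_t;

static void link_stats_update(link_stats_t *s, double transit_usecs)
{
    s->n++;
    double delta = transit_usecs - s->mean;
    s->mean += delta / s->n;
    s->m2   += delta * (transit_usecs - s->mean);
    if (transit_usecs > s->peak)
        s->peak = transit_usecs;
}

static double link_stats_stddev(const link_stats_t *s)
{
    return s->n > 1 ? sqrt(s->m2 / (s->n - 1)) : 0.0;
}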

I guess I am almost ready to look at your jitter buffer and understand
it ;D ... although from what I have seen on IRC, the jitter buffer is
actually looking at the speex/celt bitstream.
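
For reference, this is how I currently understand the jitter buffer
API from speex_jitter.h as celtclient drives it. It is a simplified
sketch, so the field names, constants and error handling should be
double-checked against the real header:

#include <speex/speex_jitter.h>

#define FRAME_SIZE 256  /* samples per packet, matching the small audio buffer */

static JitterBuffer *jb;

void net_init(void)
{
    /* step size = one frame; timestamps are counted in samples */
    jb = jitter_buffer_init(FRAME_SIZE);
}

/* called whenever a packet arrives from the socket */
void net_packet_arrived(char *payload, int len, int timestamp)
{
    JitterBufferPacket p;
    p.data      = payload;
    p.len       = len;
    p.timestamp = timestamp;
    p.span      = FRAME_SIZE;
    jitter_buffer_put(jb, &p);
}

/* called once per process cycle to fetch the frame we want to play;
 * returns 0 on a network xrun (packet missing / too late) */
int net_get_frame(char *out, int maxlen)
{
    JitterBufferPacket p;
    spx_int32_t start_offset = 0;

    p.data = out;
    p.len  = maxlen;

    int ret = jitter_buffer_get(jb, &p, FRAME_SIZE, &start_offset);
    jitter_buffer_tick(jb);   /* advance the buffer by one frame */

    return ret == JITTER_BUFFER_OK;
}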

This stuff is beginning to boil down to control theory.


> BTW, what you are doing (syncing clocks across a net connection) is
> similar to what PulseAudio does with network audio. Maybe there's
> code/knowledge to share here.

I will have a look.
Thanks for pushing me onto the right track.
Much appreciated, as it was necessary to break my ignorance somehow.



-- 
torben Hohn


