[Speex-dev] New jitter.c, bug in speex_jitter_get?

Jean-Marc Valin jean-marc.valin at usherbrooke.ca
Wed May 3 18:54:44 PDT 2006


> Perhaps, but then you need to assume that the jitterbuffer can just  
> throw away the data, and that limits how you can use it.  In object- 
> oriented terms, you might want to pass objects to the JB, and then  
> call a destructor on them.  In C terms, you may want to allocate  
> frames via malloc(), and then call free() on them later.  You might  
> want to pass in reference-counted objects of some sort, etc.

Are we talking about the same thing here? I'm talking about IP packets,
more specifically datagrams. These contain a bunch of bytes and a
length. With RTP you get a timestamp as well. That's all. No
object-oriented stuff until you decode them (which happens after they
leave the jitter buffer).
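
To be concrete, a datagram entering the jitter buffer is nothing more
than this (the struct and field names below are just for illustration,
not a real API):

   #include <stdint.h>

   /* Hypothetical shape of one incoming datagram: opaque bytes,
      a length, and the RTP timestamp it was sent with. */
   typedef struct {
      const char *data;       /* raw payload bytes */
      uint32_t    len;        /* payload length in bytes */
      uint32_t    timestamp;  /* RTP timestamp */
   } packet_sketch;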

> Mainly they're different because you don't ever want the jitterbuffer  
> to throw them away -- you always want to deliver them.  They probably  
> have zero duration (are impulses), and will overlap in timestamps  
> with the audio frames.  You may not want to consider them in your  
> jitter calculations.

Depending on how the control stuff works, you probably don't *need* a
jitter buffer in the first place. At best you'll want to reorder the
data, no?
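
If reordering really is all you need, it can be done with a trivial
sorted insert and no timing logic at all. A rough sketch (everything
here is made up for illustration):

   #include <stdint.h>

   #define QUEUE_LEN 32

   /* Hypothetical reorder queue for control packets, kept sorted by
      16-bit sequence number; the reader just consumes it in order. */
   typedef struct { uint16_t seq; /* payload fields go here */ } ctl_pkt;

   static void reorder_insert(ctl_pkt *q, int *count, ctl_pkt p)
   {
      int i;
      if (*count >= QUEUE_LEN)
         return;                       /* queue full: drop (or flush) */
      i = *count;
      /* Shift later packets right until we find p's slot; the
         int16_t subtraction handles sequence-number wrap-around. */
      while (i > 0 && (int16_t)(q[i-1].seq - p.seq) > 0) {
         q[i] = q[i-1];
         i--;
      }
      q[i] = p;
      (*count)++;
   }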

> > Also, why would you want to give it structs? AFAIK, IP packets
> > can only contain bytes anyway.
> 
> Of course.  But, in the way I've used the JB, and I would imagine in  
> most cases, the application which uses it is going to be parsing the  
> network stuff before putting it into a JB, and would put it into a  
> structure or object.  Clearly, everything is just bytes, and you  
> could do something similar with your JB api by passing in pointers  
> and len==4, _if_ your jitterbuffer didn't have the ability to just  
> drop frames internally.

Why would you parse and do work *before* putting it in the jitter
buffer, especially when you don't even know whether you'll actually
use it?
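
The ordering I have in mind, sketched with hypothetical names (JB,
jb_put, jb_get and decode_frame are stand-ins, not a real API): store
the raw bytes on arrival, and only parse what actually comes out the
other end.

   typedef struct JB JB;               /* hypothetical jitter buffer */
   typedef struct Decoder Decoder;     /* hypothetical codec state   */
   int  jb_put(JB *jb, const char *data, int len, unsigned ts);
   int  jb_get(JB *jb, char *out, int maxlen);
   void decode_frame(Decoder *dec, const char *data, int len);

   /* Network side, once per incoming datagram: store bytes, no parsing. */
   void on_datagram(JB *jb, const char *bytes, int len, unsigned ts)
   {
      jb_put(jb, bytes, len, ts);
   }

   /* Playback side, once per frame period: parse only the survivors. */
   void on_tick(JB *jb, Decoder *dec)
   {
      char buf[1500];
      int len;
      len = jb_get(jb, buf, (int)sizeof buf);
      if (len > 0)
         decode_frame(dec, buf, len);  /* all parsing happens here */
   }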

> The time in my implementation doesn't need to be wall time, nor do  
> timestamps;  They're all relative to each other, and the beginning of  
> the "session".   I think everything would work OK +- some constants  
> if the scale were different.

But why do you need that time in the first place?

> > Why no overlap? What if you want to include a bit of redundancy  
> > (doesn't
> > have to be 100% either) to make your app more robust to packet  
> > loss? You
> > could want to send a packet that covers 0-60 ms, followed by 40-100  
> > ms,
> > followed by 80-140 ms, ...
> 
> I see now.  I hadn't considered this, but it could also be expressed  
> as a sequence of 20ms frames, some of which are dups, and some which  
> have identical arrival times.  I'm not sure how my implementation  
> would handle this, but I don't think it breaks the API.

What do you mean by "expressed as a sequence"? You mean you'd break
the frames down before sending them? That sounds complicated, and even
technically impossible in the general case (what if the frames *can't*
be broken down for a particular codec?).
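
For reference, here is the case I described, written out with explicit
per-packet timestamps and spans (the fields are illustrative; real
packets would carry the payload too):

   /* Three redundant packets covering overlapping ranges; the buffer
      has to notice that 40-60 ms and 80-100 ms arrive twice and play
      each region only once.  Units are milliseconds. */
   struct { unsigned timestamp, span; } pkts[] = {
      {  0, 60 },   /* covers   0-60 ms                 */
      { 40, 60 },   /* covers  40-100 ms, 20 ms overlap */
      { 80, 60 },   /* covers  80-140 ms, 20 ms overlap */
   };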

> > Well, that API clearly has limitations that mean I can't use them  
> > to do
> > what I need. Unless you're willing to change that (and even then  
> > I'm not
> > sure), there's no way we can use the same API. I still suspect it  
> > may be
> > possible to wrap by current API in that API. Of course, some features
> > would just not be available.
> 
> I think it would, except that your API lets the jb destroy data on  
> its own, which would be bad, for example, if the data was a control  
> frame, or in every case, because frames are usually malloced.

What's the problem with the jitter buffer destroying control frames?
If you need them delivered reliably, don't use UDP in the first place,
and don't put them through a jitter buffer.

> Yours may indeed be better than mine, but before you say it won't get  
> confused, let's see what happens if it gets into asterisk and a lot  
> of real-world broken streams get thrown at it :)

Of course, I'm always interested in more testing. However, I've already
(both voluntarily and, especially, involuntarily) abused it with
nonsensical data and I have yet to see it fail (i.e. go into an
irrecoverable state).
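
For what it's worth, that kind of abuse is easy to reproduce. A sketch,
reusing the hypothetical jb_put/jb_get names from above:

   #include <stdlib.h>

   typedef struct JB JB;                 /* hypothetical, as above */
   int jb_put(JB *jb, const char *data, int len, unsigned ts);
   int jb_get(JB *jb, char *out, int maxlen);

   /* Feed the buffer garbage: random contents, random lengths,
      wildly non-monotonic timestamps.  The only requirement is
      that jb_get() keeps behaving sanely afterwards. */
   void abuse_test(JB *jb)
   {
      char junk[256], out[256];
      int i, j, len;
      for (i = 0; i < 100000; i++) {
         len = rand() % (int)sizeof junk;
         for (j = 0; j < len; j++)
            junk[j] = (char)rand();
         jb_put(jb, junk, len, (unsigned)rand());
         if (i % 3 == 0)
            jb_get(jb, out, (int)sizeof out); /* must never corrupt state */
      }
   }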

> What really would help in the long run is if we had some kind of test  
> harness to run these things in, and good test data culled from real- 
> world situations.   I had some hacky tools like this I used when I  
> built my implementation, but nothing really good.

Sure. I guess it comes down to collecting data (timestamps and all)
from real applications in real scenarios. Then you figure out the
"right" behaviour and compare it with what you actually obtain.

	Jean-Marc


