[xiph-rtp] Re: new RTP development list

Ralph Giles giles at xiph.org
Wed Oct 20 21:01:40 PDT 2004


On Wed, Oct 20, 2004 at 02:12:37PM -0700, Aaron Colwell wrote:

> I'll try to give answers for all your comments. I realize that some of these 
> arguments may be weak. My main purpose for caring about RTP delivery in the 
> first place is so that I can provide the most complete solution for our 
> server. That means I want to be able to stream any Ogg file over RTP that 
> can currently be played via HTTP. This allows the user to determine what 
> delivery method works best for them instead of the content determining it 
> for them.

I guess the root of my objection to your arguments here is that it feels like
you're rigging the requirements to match your application rather than the
other way around.

> Windows Media uses MMS, their own proprietary protocol, which is basically
> the same as the RTSP & RDT (or RTP) combo.

Ah, thanks.

> Part of this may be for historical reasons. Various file formats are
> suboptimal for HTTP delivery because vital information is at the end of the
> file. Ogg has this problem as well because you can't know the length of the
> file without seeking to the end. HTTP 1.1 with byte-range support has helped
> with this problem a little, but for a long time 1.1 support was rare and/or
> incomplete.

Yes. We felt like we fixed all of these. There's of course no reason for ID3
tags to be at the end either; it's just a stupid format. Ogg of course goes
further the other way as a pure streaming format: there's no two modes about
it.

I guess you want the total length so you can provide a playback progress/seek 
bar? We could also resolve that with a metadata field, I imagine.
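
In the meantime, a single byte-range request is enough to get the length.
Here's a rough sketch, assuming the server honours HTTP/1.1 ranges and that
the sample rate is already known from the identification header (the 44100
default below is only a placeholder):

    import struct
    import urllib.request

    # Rough sketch: estimate the duration of a remote Ogg Vorbis file from one
    # suffix byte-range request. The last page's granule position is the
    # absolute sample count for Vorbis.
    def ogg_duration(url, sample_rate=44100, tail_bytes=65536):
        req = urllib.request.Request(url, headers={"Range": "bytes=-%d" % tail_bytes})
        tail = urllib.request.urlopen(req).read()
        pos = tail.rfind(b"OggS")            # capture pattern of the last page
        if pos < 0:
            raise ValueError("no Ogg page found in the tail of the file")
        # granule_position is the 64-bit little-endian field at offset 6.
        granulepos = struct.unpack_from("<q", tail, pos + 6)[0]
        return granulepos / sample_rate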

> Streaming, at least in the case of Real, allows rate adaptation that reacts
> to current network conditions. If loss starts occurring, the client can
> switch to a new rate and recover the loss. This sort of behavior is not easy
> to implement using HTTP.

Because of the higher latency?

> Streaming and download are different if you consider multirate files. For
> single-bitrate files the difference between these two cases is minimal. For
> multirate files downloading would cause you to get tons of data that you
> don't really want. You can argue that the author should just create files
> for each bitrate. That is fine, but it introduces yet another bit of
> maintenance.

I thought the Real system just created multirate files and the client switched
between them. Otherwise you're talking about bitrate peeling, which I thought
no one had really gotten to work.
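
Just to make the switching idea concrete, here's a toy sketch of a sender
picking among pre-encoded bitrates from the loss fraction the receiver
reports back (via RTCP receiver reports, say). The rates and thresholds are
made up for illustration; this isn't from any spec:

    # Toy rate-adaptation sketch: back off one step on noticeable loss,
    # creep back up when the path looks clean.
    AVAILABLE_KBPS = [32, 64, 96, 128]

    def pick_bitrate(current_kbps, loss_fraction):
        idx = AVAILABLE_KBPS.index(current_kbps)
        if loss_fraction > 0.05 and idx > 0:
            return AVAILABLE_KBPS[idx - 1]
        if loss_fraction < 0.01 and idx < len(AVAILABLE_KBPS) - 1:
            return AVAILABLE_KBPS[idx + 1]
        return current_kbps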

> > > - Reencoding on the fly for simulated live streams is not a scalable solution.
> > >   What if you wanted to support a ton of simulated live streams and the
> > >   streams were constructed from a library of content that has different
> > >   encodings? You would need a lot of CPU to reencode all the clips in real time.
> > >   I realize that in most cases the content will be relatively uniform,
> > >   but in the case where tunings happen over time, newer content may have
> > >   different codebooks. You don't want to have to reencode your whole library
> > >   or reencode on the fly to handle this.
> > 
> > Of course we're making a trade-off. If CPU for reencoding is an issue, cache
> > the files. One of our design philosophies has always been to move complexity
> > from the decoder to the encoder. We believe this helps with adoption as well
> > as globally optimizing resource use. So that's part of it.
> 
> If you wanted to move complexity from the decoder to the encoder then you 
> shouldn't have allowed chaining in the first place. It's not like removing the
> possibility of chaining here makes things any easier for a client that plays
> back files via HTTP or locally.

Well, that's a fair enough comment. In all the players I've written, chaining
support amounts to an outer while loop. I guess I've not tried it with seeking
though.
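
For what it's worth, here's roughly what that outer loop looks like at the
container level: walk the physical bitstream page by page and reset decoder
state at each BOS page. This only handles simple chaining (one logical stream
at a time), and the actual Vorbis decoding, which would go through libvorbis,
is left out:

    import struct

    def ogg_pages(f):
        # Yield (header_type, serial, body) for each page of an Ogg file.
        while True:
            header = f.read(27)
            if len(header) < 27:
                return
            assert header[:4] == b"OggS"               # page capture pattern
            header_type = header[5]                    # flags: 0x02 BOS, 0x04 EOS
            serial = struct.unpack_from("<I", header, 14)[0]
            nsegs = header[26]
            lacing = f.read(nsegs)
            yield header_type, serial, f.read(sum(lacing))

    def play(path):
        with open(path, "rb") as f:
            current = None
            for header_type, serial, body in ogg_pages(f):
                if header_type & 0x02:    # BOS: a new chain segment starts
                    current = serial      # (re)initialise the decoder here
                # ... hand `body` to the decoder for `current` ...
                if header_type & 0x04:    # EOS: chain segment finished
                    current = None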

> > For maximum quality you want to periodically re-encode from masters anyway.
> > This is actually a requirement of the statutory webcasting license under the
> > DMCA in the US (to make sure you own the CDs) but I guess all that expires
> > in January.
> 
> I'd be impressed if people actually do this. That would be quite time 
> consuming. Why force this kind of process on a user if it isn't really 
> necessary?

As you suggest, it is hard to enforce. Anyway, a weak counter-argument.

> > Anyway, codebook transmission has always been an issue with the Vorbis codec 
> > under RTP. Limiting a session to a single set that can be sent as part of 
> > the SDP greatly simplifies things. I think that's a worthwhile purchase.
> > 
> 
> I have no problem with a "no chaining allowed" mode. I figured that the chaining
> would be signaled in the SDP so that a client that doesn't support it would
> know that it shouldn't try to stream the file.

I'd rather we didn't have optional parts of the spec.

> [bandwidth usage of the new codebooks]
> 
> There are two scenarios I see to solve this. If we are dealing with on-demand,
> the server knows what all the codebooks are, so it can send the HTTP URLs and
> an MD5 hash for each of the codebooks as early as possible. The client can
> download the codebooks whenever it wants. You might want to put a timestamp
> indicating when each codebook is needed to help the client schedule the
> downloads. Hopefully the common case will be that there are only a few
> codebooks to worry about, so there won't be a lot to download.

Well, you'd either need a list marked by timestamp ahead of time, or to include
a 'chain segment id' in every data packet analogous to the Ogg serial number
so the decoder knows when to change codebooks. Is there any reasonable way
to pass a data structure like that over SDP?
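
Something like the following is the shape of data I have in mind, purely as a
sketch (the field names are made up, and nothing here is part of any spec):
each entry ties a chain segment id to the codebook's URL, its MD5, and the
earliest media time it's needed, so the client can schedule the fetches:

    from dataclasses import dataclass

    @dataclass
    class CodebookEntry:
        segment_id: int   # matches the chain segment id carried in each packet
        url: str          # where to fetch the setup headers / codebooks
        md5: str          # integrity check, so cached copies can be reused
        needed_at: float  # media time (seconds) when this segment starts

    # Hypothetical schedule for a session with one chain boundary.
    schedule = [
        CodebookEntry(0, "http://example.com/books/intro.setup", "d41d8cd9...", 0.0),
        CodebookEntry(1, "http://example.com/books/main.setup", "9e107d9d...", 312.5),
    ]

    def codebooks_for(segment_id, schedule):
        # Look up which codebook set a freshly arrived packet needs.
        return next(e for e in schedule if e.segment_id == segment_id)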

Anyway, I like the idea of using RTCP to start a new RTP session for chaining.
Is that something that can reasonably be done gaplessly?

> whew... I think I addressed everything.

Whew indeed. Pretty much. :-)

 -r


