[foms] Adaptive streaming

Mark Watson watsonm at netflix.com
Sun Oct 24 16:15:47 PDT 2010


All,

Firstly, thanks to Sylvia for directing me towards this list. I am at Netflix, looking at how our service could eventually be delivered to standards-based adaptive streaming players. We're keen to make sure the emerging standards and open solutions have the capabilities that would be needed for that. We have been doing HTTP adaptive streaming at quite a large scale for a few years now with entirely proprietary technology, but going forward it is more important to us to be able to get our service onto more devices easily than to keep that technology to ourselves. We'd love to see an open, high-quality adaptive streaming solution and are willing to help make that happen.

Reading the "Proposal: adaptive streaming" thread in the archives, I have a couple of comments.

Firstly, it isn't really necessary to split content into physical chunks for on-demand services, and there are some real disadvantages to doing so. We have found the "granularity of request" needs to be on the order of 2s to adapt fast enough when conditions change; 10s is too long. But storing content in 2s chunks results in an enormous number of files (literally billions, for us, given the size of our library).
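To make the file-count argument concrete, here's a back-of-envelope calculation. All of the numbers (title count, track counts, durations) are hypothetical illustrations, not Netflix figures:

```python
# Rough estimate of how many files physical 2s chunking creates.
# Every number here is a hypothetical illustration, not a real figure.

def chunk_count(duration_s: int, chunk_s: int, bitrates: int, audio_tracks: int) -> int:
    """Files needed for one title when every stream is cut into fixed chunks."""
    chunks_per_stream = -(-duration_s // chunk_s)  # ceiling division
    return chunks_per_stream * (bitrates + audio_tracks)

per_title = chunk_count(duration_s=2 * 3600,  # a 2h movie
                        chunk_s=2,            # 2s "granularity of request"
                        bitrates=8,           # video encodings
                        audio_tracks=4)       # audio languages
library = 20_000 * per_title                  # a hypothetical catalogue size

print(per_title)  # 43200 files for a single 2h title
print(library)    # 864000000 -- heading toward billions
```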

The alternative is to use HTTP Range requests, with which we've had no problems in terms of support in devices, servers and CDNs. Store the movie as a single file accompanied by a compact index which enables clients to form range requests for arbitrarily sized pieces, down to a single GoP. This also has the advantage that client requests do not always need to be the same size (in time).
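A minimal sketch of the single-file-plus-index idea: given a compact index of GoP byte offsets, the client builds a Range header for an arbitrary span of GoPs. The index contents and file layout here are hypothetical:

```python
# Sketch: map a span of GoPs to an HTTP Range header using a compact index.
# The offsets and file size below are hypothetical illustrations.

gop_offsets = [0, 310_000, 625_000, 930_000, 1_250_000]  # byte offset of each GoP
file_size = 1_560_000                                    # total size of the movie file

def range_header(first_gop: int, last_gop: int) -> str:
    """Range header covering GoPs first_gop..last_gop inclusive."""
    start = gop_offsets[first_gop]
    end = (gop_offsets[last_gop + 1] - 1
           if last_gop + 1 < len(gop_offsets)
           else file_size - 1)
    return f"bytes={start}-{end}"

print(range_header(1, 2))  # bytes=310000-929999
# The client then issues:  GET /movie.mp4  with  Range: bytes=310000-929999
```

Because the span boundaries are up to the client, a request can cover one GoP or twenty; nothing in the stored file changes.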

As the proposal says, server-side logic could translate client "chunk" requests into byte ranges, but to be efficient this process needs to be understood by caches as well as origin servers: CDN caches can (and do) prefetch the "next" part of a file following a range request, which they won't do if they just see individual chunks. It's good if the solution can work with existing HTTP infrastructure.

This approach also keeps the manifest compact: if the manifest has to list a separate URL for every GoP it can get quite large for a 2h piece of content. Even after gzipping, the size is sufficient to affect startup time (any system being designed now should be targeting ~1s startup, IMO).
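A rough demonstration of the bloat from one-URL-per-GoP manifests. The URL shape and counts are hypothetical; the point is just the order of magnitude:

```python
# Hypothetical manifest listing one URL per 2s GoP for a 2h title
# across several bitrates, compared before and after gzip.

import gzip

gops = 3600      # 2h of content at one URL per 2s GoP
bitrates = 8
lines = [f"http://cdn.example.com/movie/rep{r}/segment{i:05d}.m4s"
         for r in range(bitrates) for i in range(gops)]
manifest = "\n".join(lines).encode()
compressed = gzip.compress(manifest)

print(len(manifest))    # raw size: over a megabyte
print(len(compressed))  # gzipped: much smaller, but far from negligible
```

A byte-offset index covering the same content, fetched separately per selected stream, stays orders of magnitude smaller.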

Another important factor is separation of audio, video and subtitle streams. The number of combinations gets pretty large with only a few audio/subtitle languages and video bitrates.
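The combinatorics are easy to illustrate: pre-muxing stores a product of track counts, while separate streams store only a sum. Track counts here are hypothetical:

```python
# Pre-muxed variants grow multiplicatively; separate streams grow additively.
# All track counts are hypothetical illustrations.

video_bitrates = 8
audio_langs = 5
subtitle_langs = 10   # subtitles are optional, so +1 for "none"

muxed_files = video_bitrates * audio_langs * (subtitle_langs + 1)
separate_streams = video_bitrates + audio_langs + subtitle_langs

print(muxed_files)        # 440 pre-muxed variants per title
print(separate_streams)   # 23 streams when kept separate
```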

We've been working in MPEG on the DASH standard, which has just reached the "Committee Draft" milestone. Unlike traditional MPEG work items, there is a core group of participants who understand that this needs to be done quickly and without excessive complexity (otherwise we probably wouldn't be interested). It is more complex than m3u8, but it supports a lot more features, not all of which are unnecessary ;-) We expect to see a simple profile defined that cuts out the more esoteric stuff.

I wondered what the opinion of the group here was on that work?

Attached is a quick example of what a DASH manifest for an on-demand service might look like. For those interested in all the details, the mailing list is public: http://lists.uni-klu.ac.at/mailman/listinfo/dash and the (almost final) output of the recent meeting is in the archives at http://lists.uni-klu.ac.at/mailman/private/dash/2010-October/000861.html.
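Since the attachment itself was scrubbed from the archive, here is a rough illustrative sketch of the kind of manifest being discussed. It is loosely modeled on the MPD structure DASH went on to standardize, not a reconstruction of the scrubbed DashExample.xml, and all URLs and values are hypothetical:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Illustrative sketch only; element names follow the later published
     MPD format, values are hypothetical. -->
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
     mediaPresentationDuration="PT2H0M0S">
  <Period>
    <!-- Video: one single-file Representation per bitrate; the index inside
         each file lets clients form byte-range requests per GoP. -->
    <AdaptationSet mimeType="video/mp4">
      <Representation id="video-2500k" bandwidth="2500000">
        <BaseURL>movie_2500k.mp4</BaseURL>
        <SegmentBase indexRange="0-1891"/>
      </Representation>
      <Representation id="video-1000k" bandwidth="1000000">
        <BaseURL>movie_1000k.mp4</BaseURL>
        <SegmentBase indexRange="0-1891"/>
      </Representation>
    </AdaptationSet>
    <!-- Audio kept as a separate stream, per the combinations point above. -->
    <AdaptationSet mimeType="audio/mp4" lang="en">
      <Representation id="audio-en" bandwidth="128000">
        <BaseURL>audio_en.mp4</BaseURL>
        <SegmentBase indexRange="0-763"/>
      </Representation>
    </AdaptationSet>
  </Period>
</MPD>
```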

Best,

Mark Watson

-------------- next part --------------
A non-text attachment was scrubbed...
Name: DashExample.xml
Type: application/xml
Size: 2889 bytes
Desc: DashExample.xml
Url : http://lists.annodex.net/cgi-bin/mailman/private/foms/attachments/20101024/ff8cdfea/attachment.xml 