[foms] Proposal: adaptive streaming using open codecs

Mark Watson watsonm at netflix.com
Fri Nov 5 09:53:43 PDT 2010

On Nov 4, 2010, at 9:13 PM, Chris Pearce wrote:

>  Sorry for coming late to the party as well, I've been busy with 
> Firefox 4 blockers, but have caught up with all these threads today. I'm 
> on Mozilla's <video> team.
> On 4/11/2010 11:26 p.m., Philip Jägenstedt wrote:
>> On Fri, 29 Oct 2010 15:47:58 +0200, Timothy B. Terriberry
>> <tterribe at xiph.org>  wrote:
>>>> It does require more work from browser developers though, so not sure
>>>> if this would be preferred.
>>> This is something we've talked about doing, and there are a lot of
>>> reasons to want to be able to do something like this, but the
>>> implementation is also very complicated. We get faces filled with horror
>>> from the New Zealand folks every time it's brought up. I'm sure Philip
>>> would agree.
>> Yes, it's something that goes against the current architecture with
>> completely independent decoding pipelines for each<video>. However, I'm
>> not sure if we can avoid doing this eventually, as it may be required for
>> accessibility (sign-language video and audio descriptions). The jury is
>> still out.
> Muxing on the client seems to make sense to enable adaptive streaming, 
> so funny faces aside, we may have to bite the bullet and end up 
> implementing it for that reason, in addition to the reason Philip 
> outlines above.
> Making the stream-switching logic optionally overridable in JS seems 
> reasonable, given that the different big video sites would likely want 
> to customize their buffering logic. It must work without custom logic in 
> JS of course.
> I think some kind of model where the decoding pipeline gets passed 
> keyframe-aligned byte-ranges from possibly different resources seems 
> reasonable. We'd probably not want the media data from the chunks to be 
> exposed to JS, we'd be better off passing "handles" to chunks around in 
> JS instead.

What do you think of the scheme that Jeroen proposed, where the "handles" are ( URL, byte range ) pairs?

I think that once you pass the ( URL, byte range ) pair through the video tag to the player, you don't need to refer to it again. You do need to associate each ( URL, byte range ) with some context, which you previously initialized by passing the ( URL, byte range ) for the file headers (the Header/Segment/Track elements for WebM, the moov box for MP4), and ideally you need to do this independently for audio, video and subtitles. We'd also need to work out how to efficiently expose the index information (the byte-range-to-time-range mapping) to the JS.
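To make the idea concrete, here is a minimal sketch of what the scheme could look like from the JS side. None of these names are a real browser API; `makeHandle`, `makeTrackContext` and the index layout are assumptions chosen purely for illustration of the handle/context/index split described above.

```javascript
// A "handle" identifies a keyframe-aligned chunk within a resource:
// just a ( URL, byte range ) pair, never the media bytes themselves.
function makeHandle(url, begin, end) {
  return { url, begin, end };
}

// Per-track context, initialized once with the handle for the file
// headers (Header/Segment/Track for WebM, moov box for MP4). Each of
// audio, video and subtitles would get its own context.
function makeTrackContext(kind, headerHandle) {
  return { kind, headerHandle, index: [] };
}

// The index is the byte-range-to-time-range mapping the JS needs in
// order to choose which chunk to request next.
function addIndexEntry(ctx, startTime, endTime, begin, end) {
  ctx.index.push({ startTime, endTime, begin, end });
}

// Look up the chunk handle covering a given playback time.
function handleForTime(ctx, url, t) {
  const e = ctx.index.find(e => t >= e.startTime && t < e.endTime);
  return e ? makeHandle(url, e.begin, e.end) : null;
}

// Usage: hypothetical 720p video track with two 4-second chunks.
const url = "http://example.com/v_720p.webm";
const video = makeTrackContext("video", makeHandle(url, 0, 4096));
addIndexEntry(video, 0, 4, 4096, 131072);
addIndexEntry(video, 4, 8, 131072, 262144);
const next = handleForTime(video, url, 5.0);
// "next" is the ( URL, byte range ) pair the page would hand to the
// player, which fetches and decodes it without exposing the data to JS.
```

The point of the sketch is that the stream-switching logic only ever manipulates these small descriptor objects; the actual media data stays on the native side of the fence, as Chris suggests.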

If there are browser implementors interested in pursuing this route for mp4 files as well as WebM, then we'd be really interested in working together to see if we could get some version of the Netflix service working in that context.


> Chris Pearce.
> _______________________________________________
> foms mailing list
> foms at lists.annodex.net
> http://lists.annodex.net/cgi-bin/mailman/listinfo/foms
