[foms] Proposal: adaptive streaming using open codecs

Jeroen Wijering jeroen at longtailvideo.com
Tue Oct 26 03:41:39 PDT 2010


On Oct 21, 2010, at 9:22 PM, Christopher Blizzard wrote:

>>>>>> Again, the proposal from Christopher on providing a "Manifest API" (basically a playlist of chunks) plus having some QOS metrics (bandwidth, framedrops) would already allow developers to build adaptive streaming at the JavaScript level. That is far easier for a first implementation. I guess we swiftly need a proposal for the "Manifest API".
>>>>> Note that one of Philip's suggestions (maybe not on the list? I can't remember.) was that we do the API before we do the manifest work. This would allow us to iterate, test, and figure out what worked before figuring out what we needed in the manifest.
>>>> Yes, that was Philip's proposal as well. Makes a lot of sense.
>>>> 
>>>> - Jeroen
>>> 
>>> Also would allow us to test out switching algorithms that we might want
>>> to include in browsers by default.  And (*gasp*!) specify them.
>>> 
>>> --Chris
>> 
>> I support this message :)
>> 
>> One way or another, we need to achieve gapless playback. These are the options I know of so far:
>> 
>> 1. A concatenation API (maybe Stream) to form a single stream from multiple URLs. This would basically be a byte concatenation API, and assumes that we either have the chunks be plain slices or that we support chained Ogg/WebM gaplessly. It has some similarity to a Manifest API in that it lists several URLs. The difference may be that the video element isn't aware of the multiple resources; that's all hidden in the URL, effectively made part of the network layer of the browser.
>> 
> 
> Basically an API that says "Play this chunk of video next"? I think that's what I've pushed for, but it's a decent amount of work. I'm not sure what the rules are for that, especially with regard to sound sync. Also, I don't think it has to be byte concatenation if we have decent support for moving from one video to the next on a frame-by-frame basis.
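
To make the "play this chunk of video next" idea concrete, here is a
rough sketch of what such an API could look like from script. Every name
in it (ChunkSource, appendChunk, endOfStream) is made up purely for
illustration; nothing like this exists in any browser today:

// Hypothetical "play this chunk next" interface. All names are invented
// to show the shape of the concatenation idea, not a real browser API.
interface ChunkSource {
  // Queue the media at `url` to be decoded and played after the chunks
  // already queued, with no gap at the splice point.
  appendChunk(url: string): Promise<void>;
  // Signal that no further chunks will be appended.
  endOfStream(): void;
}

// A manifest is then nothing more than an ordered list of chunk URLs.
const manifest: string[] = [
  "chunk-000.webm",
  "chunk-001.webm",
  "chunk-002.webm",
];

async function playManifest(source: ChunkSource): Promise<void> {
  for (const url of manifest) {
    // The script, not the video element, decides what comes next; that
    // is what makes JavaScript-level adaptive switching possible.
    await source.appendChunk(url);
  }
  source.endOfStream();
}

The point is that the ordering decision lives in script, so switching
bitrates is just a matter of picking which URL to append next.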

I have added a small section on this to the proposal I drafted. I also posted it up on the WhatWG wiki:

http://wiki.whatwg.org/wiki/Adaptive_Streaming#API_adaptive_streaming

Please feel free to add/edit/remove as you see fit. There are still a lot of wrong statements in there, and omissions of feedback and alternatives. I added a bunch based on the emails from the last week, but some sections (particularly around chaining/chunking and to-range-request-or-not) are still very weak.

On the audio concatenation: can the suggestion that Monty put forward at the workshop (making up additional sound data in Vorbis, e.g. for a crossfade) also be used for other codecs? Or is this something that can only be done in Vorbis?
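
For reference, the blend at the splice point itself is simple once there
are decoded samples on both sides; a minimal sketch of a plain linear
crossfade (generic PCM math, nothing Vorbis-specific):

// Linearly blend the tail of the outgoing chunk into the head of the
// incoming one, masking a small timing mismatch at the splice point.
function crossfade(tail: Float32Array, head: Float32Array): Float32Array {
  const n = Math.min(tail.length, head.length);
  const out = new Float32Array(n);
  for (let i = 0; i < n; i++) {
    const t = n > 1 ? i / (n - 1) : 1; // fade position, 0 -> 1
    out[i] = tail[i] * (1 - t) + head[i] * t;
  }
  return out;
}

The hard part, as I understand Monty's suggestion, is making up the
overlapping samples in the first place when the codec does not hand them
to you.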

Chris' idea on the video concatenation sounds good; this can be done on a frame-by-frame basis. I presume one can then still use only one decoding pipeline? Or is that an issue?
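
On the switching side: once the QOS metrics mentioned earlier in the
thread (bandwidth, framedrops) are exposed, the script-level heuristic
can stay quite small. A rough sketch, with the thresholds and the level
table invented for illustration:

// One entry per quality level in the manifest, sorted low -> high bitrate.
interface QualityLevel {
  bitrate: number;      // bits per second the encoding needs
  chunkUrls: string[];  // chunk list for this level
}

// Pick the highest level that fits the measured bandwidth, backing off
// harder when the decoder is already dropping frames.
function pickLevel(
  levels: QualityLevel[],
  measuredBandwidth: number,  // bits per second, e.g. from chunk download timing
  droppedFrameRatio: number,  // dropped / total frames since the last switch
): QualityLevel {
  const headroom = droppedFrameRatio > 0.1 ? 0.5 : 0.8;
  const budget = measuredBandwidth * headroom;
  let chosen = levels[0];
  for (const level of levels) {
    if (level.bitrate <= budget) chosen = level;
  }
  return chosen;
}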

Kind regards,

Jeroen

