[foms] Proposal: adaptive streaming using open codecs

Christopher Blizzard blizzard at mozilla.com
Thu Oct 21 12:22:25 PDT 2010


On 10/21/2010 12:43 AM, Philip Jägenstedt wrote:
> On Wed, 20 Oct 2010 20:52:10 +0200, Christopher Blizzard 
> <blizzard at mozilla.com> wrote:
>
>> On 10/20/2010 11:46 AM, Jeroen Wijering wrote:
>>> On Oct 20, 2010, at 8:45 PM, Christopher Blizzard wrote:
>>>
>>>> On 10/20/2010 5:24 AM, Jeroen Wijering wrote:
>>>>> Again, the proposal from Christopher on providing a "Manifest API" 
>>>>> (basically a playlist of chunks) plus having some QOS metrics 
>>>>> (bandwidth, framedrops) would already allow developers to build 
>>>>> adaptive streaming on the javascript level. Far easier for a first 
>>>>> implementation. I guess we swiftly need a proposal for the 
>>>>> "Manifest API".
>>>> Note that one of Philip's suggestion's (maybe not on the list? I 
>>>> can't remember.) was that we do the API before we do the manifest 
>>>> work.  This would allow us to iterate, test and figure out what 
>>>> worked before figuring out what we needed in the manifest.
>>> Yes, that was Philip's proposal as well. Makes a lot of sense.
>>>
>>> - Jeroen
>>
>> Also would allow us to test out switching algorithms that we might want
>> to include in browsers by default.  And (*gasp*!) specify them.
>>
>> --Chris
>
> I support this message :)
>
> In some way or another, we need to achieve gapless playback. These are 
> the options I know of so far:
>
> 1. A concatenation API (maybe Stream) to form a single stream from 
> multiple URLs. This would basically be a byte concatentation API, and 
> assumes that we either have the chunks be plain slices or that we 
> support chained Ogg/WebM gaplessly. It has some similarity to a 
> Manifest API in that it lists several URLs. The difference may be that 
> the video element isn't aware of the multiple resources, that's all 
> hidden in the URL, effectively made part of the network layer of the 
> browser.
>

Basically an API that says "Play this chunk of video next"?  I think 
that's what I've pushed for, but it's a decent amount of work.  I'm not 
sure what the rules are for that esp. wrt sound sync.  Also I don't 
think it has to be byte-concatination if we have decent support for 
moving from one video to the next on a frame-by-frame basis.

> 2. Have each chunk in its own <video> and add a synchronization API. 
> The main use case for this is synchronizing external audio tracks, but 
> as a side-effect one could allow synchronizing two clips at their 
> edges. My assumption is that we will want such a sync API eventually 
> anyway. However, it's not a terribly obvious way of thinking about 
> gapless playback. Also, it would require switching the video elements 
> at the exact right time, and <track> elements would have to be 
> duplicated...
>
> Does anyone have opinions on either of these approaches? Are there 
> others?
>

I feel like this is a separate thing entirely?

--Chris


More information about the foms mailing list