[foms] Proposal: adaptive streaming using open codecs

Jeroen Wijering jeroen at longtailvideo.com
Wed Oct 27 02:54:49 PDT 2010


On Oct 27, 2010, at 11:19 AM, Philip Jägenstedt wrote:

>> Basically an API that says "Play this chunk of video next"?  I think that's what I've pushed for, but it's a decent amount of work.  I'm not sure what the rules are for that, esp. wrt sound sync.  Also, I don't think it has to be byte-concatenation if we have decent support for moving from one video to the next on a frame-by-frame basis.
> 
> The difference is whether the concatenation is in terms of bytes or in terms of media resources. The manifest idea would seem to allow concatenating arbitrary media resources, which I think is far more difficult to implement, as it requires the media framework layer to be aware of each chunk, while byte concatenation allows it to treat the input as an infinite stream.

I wonder how this currently works in Adobe Flash. Flash also has a video.appendBytes() function, and it will work fine even if you feed it a chunk of data that is not a continuation of the current stream. As long as the A/V codecs remain the same, nothing breaks and playback simply continues. And it really seems they keep only one decoding pipeline open for this.

Perhaps somebody knows more about this? The way they did it seems like a nice tradeoff between internal complexity (a single decoding pipeline) and external flexibility (the same codecs are required, but the data need not be one continuous stream, nor share the same sample frequency / channel count or the same FPS / frame size).
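
To make that tradeoff concrete, here is a rough sketch of how a player could switch bitrates by appending segments from different renditions into one decoding pipeline. It's TypeScript and purely illustrative; the MediaSink / appendBytes names below are assumptions, not an existing browser or Flash API.

// Hypothetical append-style interface; "MediaSink" and "appendBytes" are
// assumptions for illustration, not an existing browser or Flash API.
interface MediaSink {
  appendBytes(chunk: Uint8Array): void;
}

// One rendition per bitrate: same codecs, possibly different resolution or
// frame rate, split into segments that line up across renditions.
interface Rendition {
  bitrate: number;     // bits per second
  segments: string[];  // segment URLs in playback order
}

// Append segments one by one, each time picking the rendition whose bitrate
// fits the currently estimated bandwidth. Because every rendition uses the
// same codecs, the single decoding pipeline runs straight through a switch.
async function playAdaptive(
  sink: MediaSink,
  renditions: Rendition[],
  estimateBandwidth: () => number,  // bits per second, measured by the app
): Promise<void> {
  const sorted = [...renditions].sort((a, b) => b.bitrate - a.bitrate);
  const segmentCount = sorted[0].segments.length;
  for (let i = 0; i < segmentCount; i++) {
    const bandwidth = estimateBandwidth();
    // Highest bitrate that fits the estimate, falling back to the lowest.
    const pick = sorted.find((r) => r.bitrate <= bandwidth) ?? sorted[sorted.length - 1];
    const response = await fetch(pick.segments[i]);
    sink.appendBytes(new Uint8Array(await response.arrayBuffer()));
  }
}

The point is that the decoder never needs to know a switch happened; it just keeps consuming bytes.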

A nice example of how this works at the ActionScript level is the MKVLoader project, which demuxes H.264/AAC MKV on the fly and feeds the data into appendBytes():

http://code.google.com/p/mkvloader/
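
For those who don't want to dig through the ActionScript, the pattern MKVLoader implements is roughly the following. This is a TypeScript sketch under assumed types; MKVLoader's own classes and Flash's appendBytes API look different.

// The types below are illustrative assumptions, not MKVLoader's actual API.
interface DemuxedTag {
  timestampMs: number;   // presentation time of the frame
  payload: Uint8Array;   // e.g. an FLV tag wrapping an H.264 or AAC frame
}

interface Demuxer {
  // Push raw container bytes in, get zero or more remuxed tags back.
  push(bytes: Uint8Array): DemuxedTag[];
}

interface MediaSink {
  appendBytes(chunk: Uint8Array): void;
}

// Stream the resource, demux it incrementally, and hand each remuxed tag to
// the decoder as soon as it is available, rather than waiting for the whole file.
async function streamThrough(url: string, demuxer: Demuxer, sink: MediaSink): Promise<void> {
  const response = await fetch(url);
  const reader = response.body!.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    for (const tag of demuxer.push(value)) {
      sink.appendBytes(tag.payload);
    }
  }
}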

I'm also working on an MPEG-TS demuxer for loading HTTP Live Streaming manifests and playing them in Flash. Perhaps that will bring some new insights. I'll keep you updated.
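
As a rough idea of what loading such a manifest involves, here is a minimal sketch of parsing an M3U8 media playlist into segment URLs (TypeScript, illustrative only; real playlists carry more tags, e.g. variant playlists, encryption keys and discontinuities, which are ignored here):

// Minimal M3U8 media-playlist parser (sketch): collects segment durations
// and URLs, skipping every other tag.
interface Segment {
  durationSec: number;
  url: string;
}

function parseM3U8(text: string, baseUrl: string): Segment[] {
  const segments: Segment[] = [];
  let pendingDuration = 0;
  for (const raw of text.split(/\r?\n/)) {
    const line = raw.trim();
    if (line.startsWith("#EXTINF:")) {
      // "#EXTINF:<duration>,<title>" -- keep only the duration.
      pendingDuration = parseFloat(line.slice("#EXTINF:".length));
    } else if (line.length > 0 && !line.startsWith("#")) {
      segments.push({
        durationSec: pendingDuration,
        url: new URL(line, baseUrl).toString(),
      });
    }
  }
  return segments;
}

The resulting segment list can then be fetched one URL at a time, demuxed from MPEG-TS, and appended to the decoder as in the sketches above.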

Kind regards,

Jeroen
