[foms] Proposal: adaptive streaming using open codecs
blizzard at mozilla.com
Mon Nov 1 15:18:35 PDT 2010
On 10/26/2010 11:41 AM, Mark Watson wrote:
> But I think you need to drive it based on what is happening on the network. Otherwise how do I know how many chunks to "append". If I append too many and network conditions change, then I could stall. If I append too few then again I could stall.
If you want the best platform for this kind of experimentation, what you
can do is pull the data out of a WebSocket connection and feed it to the
video element somehow. You then know exactly how much data you're
getting, you can control what the next packet will be, and the
connection is entirely bidirectional, so you can communicate over the
same socket you're using to receive data.
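Mark's concern above (how many chunks to request before network conditions change and you stall) could be sketched as a simple buffer-driven policy on the client side. A minimal sketch follows; `chunksToRequest`, the `StreamState` fields, and the thresholds are all illustrative assumptions, not a real API:

```typescript
// Hypothetical policy: decide how many media chunks to request over the
// socket, based on the current buffer level and measured throughput.
// All names and thresholds here are illustrative assumptions.

interface StreamState {
  bufferedSeconds: number;       // media buffered ahead of playback
  chunkDurationSeconds: number;  // playback time covered by one chunk
  chunkBytes: number;            // size of the next chunk at current bitrate
  throughputBytesPerSec: number; // measured from recent socket receives
}

const TARGET_BUFFER_SECONDS = 10; // aim to keep this much media buffered
const MAX_OUTSTANDING = 4;        // cap so a network change can't strand us

function chunksToRequest(s: StreamState): number {
  // How far below the target buffer are we?
  const deficit = TARGET_BUFFER_SECONDS - s.bufferedSeconds;
  if (deficit <= 0) return 0; // buffer is full enough; asking for more risks waste

  // If fetching even one chunk takes longer than the buffer we have left,
  // request conservatively rather than queueing a burst we may regret.
  const fetchTime = s.chunkBytes / s.throughputBytesPerSec;
  if (s.bufferedSeconds > 0 && fetchTime >= s.bufferedSeconds) {
    return 1;
  }

  const wanted = Math.ceil(deficit / s.chunkDurationSeconds);
  return Math.min(wanted, MAX_OUTSTANDING);
}
```

The point of the cap is exactly the trade-off quoted above: requesting too many chunks commits bandwidth at a bitrate that may no longer be sustainable, while requesting too few leaves the buffer to drain.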
Note that this has very different characteristics from something
HTTP-based, with its own upsides and downsides. But it's a good
platform for learning.
> It would be really great if the whole thing could run independently for audio and video. They can be completely decoupled for streaming and synchronized at the renderer.
This is probably easy to do with something WebSocket-based, as long as
you can keep the decoders full and the packets you receive are in sync.