[foms] Proposal: adaptive streaming using open codecs
Conrad Parker
conrad at metadecks.org
Mon Oct 18 17:23:35 PDT 2010
On 18 October 2010 20:58, Jeroen Wijering <jeroen at longtailvideo.com> wrote:
> Hello all,
>
> Here is a (rough and incomplete) proposal for doing adaptive streaming using open video formats. WebM is used as an example, but all points should apply to Ogg as well. Key components are:
>
> * Videos are served as separate, small chunks.
> * Accompanying manifest files provide metadata.
> * The user-agent parses manifests and switches between stream levels.
> * An API provides QoS metrics and enables custom switching logic.
>
> What do you think of this approach - and its rationale? Any technical issues (especially on the container side) or non-technical objections?
>
>
> Chunks
> ======
>
> Every chunk should be a valid video file (header, video track, audio track). Every chunk should also contain at least one keyframe (at the start). This implies every single chunk can be played back by itself.
What is the expected duration of a chunk?
For Vorbis and Theora tracks in Ogg, at least, including the headers
needed to make a valid file would require a complete copy of the
codebooks, adding ~3-4 kB of overhead per track to the start of each
chunk. I assume this also applies to Vorbis in WebM (but not VP8?).
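To make the trade-off concrete, here is a back-of-envelope sketch of
what repeating those headers costs as a fraction of payload, for a few
chunk durations. The figures (3.5 kB of codebooks per track, two
tracks, a 500 kbit/s combined stream) are assumptions for
illustration, not measurements:

```python
# Back-of-envelope cost of repeating Ogg codec headers in every chunk.
# All constants below are assumed values for illustration only.

HEADER_BYTES_PER_TRACK = 3500   # assumed ~3-4 kB codebook copy per track
TRACKS = 2                      # Vorbis + Theora
STREAM_KBPS = 500               # assumed combined stream bitrate

def header_overhead_pct(chunk_seconds):
    """Extra bandwidth spent on repeated headers, as % of chunk payload."""
    payload_bytes = STREAM_KBPS * 1000 / 8 * chunk_seconds
    header_bytes = HEADER_BYTES_PER_TRACK * TRACKS
    return 100.0 * header_bytes / payload_bytes

for secs in (2, 5, 10):
    print(f"{secs:>2}s chunks: {header_overhead_pct(secs):.1f}% overhead")
```

So at these assumed numbers the overhead is a few percent for short
chunks and shrinks quickly as chunk duration grows, which is one way
to frame the "expected duration" question above.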
We would also need to specify where in the available chunks global
information goes, such as Ogg Skeleton or Matroska Chapters, and how
(or if) to handle seek tables, cueing data etc.
It might defeat some of the point of adaptive streaming to have
repeated information at the start of each chunk. Perhaps it would be
cheaper to just specify that chunks are a sequence of video frames
(i.e. a sequence of pages/clusters beginning with a keyframe)?
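One way that alternative could look: the server exposes the full file
plus a manifest of keyframe-aligned byte ranges, the client fetches
the shared headers once, and each "chunk" is just a raw range request.
The function name and manifest layout below are invented for
illustration, not from any spec:

```python
# Hypothetical sketch: chunks as keyframe-aligned byte ranges over one
# file, with headers fetched once. Layout and names are assumptions.

import json

def build_range_manifest(header_len, keyframe_offsets, file_len):
    """Map each chunk to the byte range [start, end) it occupies."""
    bounds = list(keyframe_offsets) + [file_len]
    return {
        "headers": {"start": 0, "end": header_len},
        "chunks": [
            {"start": bounds[i], "end": bounds[i + 1]}
            for i in range(len(bounds) - 1)
        ],
    }

# Example: headers end at byte 7000; keyframes at bytes 7000, 150000, 298000.
manifest = build_range_manifest(7000, [7000, 150000, 298000], 450000)
print(json.dumps(manifest, indent=2))
```

This would avoid repeating codebooks per chunk entirely, at the cost
of requiring the client to splice ranges after the shared headers.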
cheers,
Conrad.