[foms] A relevant comment from my blog on: "adaptive HTTP streaming for open codecs"
silviapfeiffer1 at gmail.com
Tue Oct 26 15:11:15 PDT 2010
A new comment on the post "adaptive HTTP streaming for open codecs" is
relevant to our current discussion.
I'm contacting the author to see if he wants to get involved in this forum.
Author : David R
I have been involved in the development of adaptive video streaming
(AS) on a wide range of platforms including PC/MAC, CE devices, and
even BD-Live. I have a few thoughts:
1) Virtual chunking is the best implementation model. Chunked files
(also known as the "billion file" model) have scalability and asset
management issues. There are CDN services that use a single file and
serve chunks to clients, but this presents a cost/scalability issue.
An HTTP range request on a single file can be supported by any CDN,
and is the cheapest and most scalable way to deliver video bytes. In
addition, virtual chunking allows smaller chunks; chunk size
correlates with start-up time and re-buffer rates.
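To make the virtual-chunk idea concrete, here is a minimal sketch of the client side: the player holds a byte-offset index for the single media file (as a container index such as WebM cues or an MP4 segment index would provide) and turns a chunk number into an HTTP Range header. The offsets and file size below are made-up example values, not anything from the post.

```python
# Sketch of "virtual chunking": one file per bitrate on the CDN,
# and the client requests byte ranges instead of separate chunk files.
# The index values here are illustrative placeholders.

def range_header(chunk_offsets, file_size, chunk_number):
    """Build an HTTP Range header value for the given virtual chunk.

    chunk_offsets -- byte offset where each chunk starts, ascending
    file_size     -- total size of the single media file in bytes
    chunk_number  -- zero-based index of the chunk to fetch
    """
    start = chunk_offsets[chunk_number]
    if chunk_number + 1 < len(chunk_offsets):
        # HTTP byte ranges are inclusive on both ends.
        end = chunk_offsets[chunk_number + 1] - 1
    else:
        end = file_size - 1
    return "Range: bytes=%d-%d" % (start, end)

# Example: a 10 MB file indexed into four virtual chunks.
offsets = [0, 2_500_000, 5_100_000, 7_800_000]
print(range_header(offsets, 10_000_000, 1))  # prints "Range: bytes=2500000-5099999"
```

Any standard CDN can serve such a request from cache, which is the cost advantage the point above describes.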
2) Alignment of video across chunks is important to provide simple
seamless bitrate switches, and should be a requirement. My rule is to
never impose complexity costs on the client (or servers, CDNs, etc.)
that can be easily handled once at the encode step. For a given video
source, providing n bitrate encodes that are properly aligned is a
fairly simple and highly scalable task. For a popular streaming
application that we delivered recently, we encoded > 30,000 video
sources (average view time of 1 hour) at multiple bitrates in < 60
In the original blog, a disadvantage for virtual chunking is “Multiple
decoding pipelines need to be maintained and byte ranges managed for
each”. If each video stream has aligned chunks, and has a sequence
start per chunk, then only one decoding pipeline is needed.
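A sketch of what the client's switch decision reduces to under these assumptions: because every encode is chunk-aligned and each chunk begins with a sequence start, a switch is just picking the next chunk from a different bitrate and feeding it to the same single decoding pipeline. The bitrate ladder and safety factor below are illustrative, not from the post.

```python
# Chunk-boundary bitrate selection, assuming aligned chunks with a
# sequence start per chunk (so one decode pipeline suffices).
# BITRATES and the safety margin are made-up example values.

BITRATES = [300_000, 700_000, 1_500_000, 3_000_000]  # bits per second

def pick_bitrate(measured_bps, safety=0.8):
    """Highest encode whose bitrate fits within measured throughput."""
    usable = measured_bps * safety
    candidates = [b for b in BITRATES if b <= usable]
    # Fall back to the lowest encode if nothing fits.
    return max(candidates) if candidates else min(BITRATES)

# Because the chunks are aligned, switching is simply requesting the
# next chunk from the chosen encode -- no second pipeline needed.
print(pick_bitrate(2_000_000))  # prints 1500000
```

The alignment work happens once at encode time; the client-side logic stays this simple, which is the complexity trade the point above argues for.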
3) It is best not to switch audio except at rebuffers or seeks.
4) Given #3, demuxed A/V streams are preferred. This also allows for
alternate audio tracks.
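Points 3 and 4 together suggest a small piece of client state: with demuxed streams, video can switch at any chunk boundary, while a requested audio change (bitrate or an alternate language track) is held until the next rebuffer or seek. This is a minimal sketch of that policy; the class, track names, and event strings are all hypothetical.

```python
# Deferring audio switches until a rebuffer or seek (points 3 and 4).
# Names and event strings here are illustrative, not a real player API.

class AudioSelector:
    def __init__(self, track):
        self.current = track   # track currently being played
        self.pending = None    # requested track, not yet applied

    def request(self, track):
        """Remember the requested change; never apply it mid-playback."""
        self.pending = track

    def on_event(self, event):
        """Apply a pending change only at a rebuffer or seek."""
        if event in ("rebuffer", "seek") and self.pending is not None:
            self.current, self.pending = self.pending, None
        return self.current

sel = AudioSelector("en-128k")
sel.request("en-64k")
print(sel.on_event("chunk_boundary"))  # prints "en-128k" (change deferred)
print(sel.on_event("seek"))            # prints "en-64k" (change applied)
```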
5) IMO 'HTTP Live Streaming' (which uses chunked, muxed A/V in an
M2TS container), while useful for live content, is very troublesome
for streaming of content libraries (see 1-4 above).
Live streaming and streaming of library content have very different
requirements. If the objective is to have a great platform for
streaming movies, then the design decisions should be different than
if the objective is adaptive video conferencing or live broadcast.