[foms] Proposal: adaptive streaming using open codecs
watsonm at netflix.com
Mon Nov 15 22:57:14 PST 2010
On Nov 15, 2010, at 8:07 PM, Silvia Pfeiffer wrote:
> On Tue, Nov 16, 2010 at 1:58 PM, Mark Watson <watsonm at netflix.com> wrote:
>> On Nov 15, 2010, at 4:37 PM, Chris Pearce wrote:
>>> On 16/11/2010 12:19 p.m., Silvia Pfeiffer wrote:
>>>> On Tue, Nov 16, 2010 at 10:13 AM, Chris Pearce<chris at pearce.org.nz> wrote:
>>>>> The earlier consensus from most of the content providers was the non
>>>>> interleaved was easier to manage, particularly at large scale when you
>>>>> have a number of different bitrate streams, and a number of different
>>>>> audio tracks.
>>>> We have to be careful where we take that statement. Just because the
>>>> large content owners don't want to do physical chunks and want to keep
>>>> audio and video tracks separate doesn't mean we have to do that over
>>>> the network or use chunks in the manifest file.
>>> There's not much point in designing a technology which the content
>>> providers won't want to use.
>>>> There is always the
>>>> possibility to have something different on disk than what is being
>>>> sent over the network.
>>> That requires custom servers, which isn't ideal. Anything we do should
>>> work with current infrastructure if possible.
>>>> For large content providers use of such server
>>>> extensions makes a lot of sense.
>> Not really - there is tremendous advantage even for large content providers in being able to scale the service on the same infrastructure already deployed for the web: standard web servers and the caches already running in the CDNs. Restricting server extensions to the origin servers is an option, but then you lose cache efficiency: when the server prepares the same data in two different ways for two users, it gets cached twice.
> Maybe then I misunderstood an earlier discussion. I was under the
> impression that it is not possible for a large content provider to
> prepare millions of small files on the server to do Apple's HTTP live
> streaming - so instead there is a server extension that provides the
> chunking functionality on the fly. Is that not correct?
That might be what some advocate, but what I would advocate is having just one file for each bitrate of video, and a separate file for each bitrate or language of audio, etc., and then providing the clients with an index into each file so they can make byte range requests for the pieces they need.
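To make the idea concrete, here is a minimal sketch of the client side. The index format below is purely illustrative (no particular manifest specification is assumed): each entry gives the byte offset and length of one fetchable piece within the single per-bitrate file, from which the client builds an ordinary HTTP Range header.

```python
# Hypothetical byte index for one bitrate of video: each entry is the
# (offset, length) of one piece within the single per-bitrate file.
INDEX = [
    (0, 512_000),         # piece 0
    (512_000, 498_000),   # piece 1
    (1_010_000, 505_000), # piece 2
]

def range_header(index, piece):
    """Return the Range header value selecting one piece of the file."""
    offset, length = index[piece]
    return "bytes=%d-%d" % (offset, offset + length - 1)

# A client attaches this to a plain GET for the per-bitrate file, e.g.
#   req.add_header("Range", range_header(INDEX, 1))
# so standard web servers and caches can serve the piece with no
# media-format awareness at all.
```

Because each piece is addressed by an ordinary byte range on a single URL, standard caches store the pieces exactly once per file.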
There does exist, in several CDNs, a simple server extension that lets a byte range be embedded in a URL instead of in the Range header, and we use this with Apple clients for our service to avoid the "millions of files" problem. But this is just a different way of communicating the byte range to the server - one that happened to exist already, is useful as a workaround, and is entirely application-independent. What I would suggest we avoid is any video-specific server extension, where servers are expected to understand the format of the video and audio files, re-multiplex them, etc.
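The two ways of communicating the range are equivalent in what they ask of the server. As a sketch (the `/range/` path convention below is invented for illustration - real CDNs each use their own URL syntax for this):

```python
def url_with_range(base_url, offset, length):
    """Embed the byte range in the URL path.
    Illustrative convention only; actual CDN syntaxes differ."""
    return "%s/range/%d-%d" % (base_url, offset, offset + length - 1)

def header_with_range(offset, length):
    """The same request expressed the standard way, via the Range header."""
    return {"Range": "bytes=%d-%d" % (offset, offset + length - 1)}

# Both forms identify the same bytes of the same file; neither requires
# the server to parse or re-multiplex the media.
```

The URL form has the side effect of giving each piece its own cache key, which is what makes it useful with clients that expect many small files.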
> foms mailing list
> foms at lists.annodex.net