[foms] WebM Manifest

Pierre-Yves KEREMBELLEC pierre-yves.kerembellec at dailymotion.com
Thu May 5 01:51:30 PDT 2011


On 5 May 2011, at 00:17, Silvia Pfeiffer wrote:
> On Thu, May 5, 2011 at 2:07 AM, Ralph Giles <giles at thaumas.net> wrote:
>> On 4 May 2011 06:40, Thomas Vander Stichele <thomas at apestaart.org> wrote:
>> 
>>> The reality today is that there isn't a single CDN out there (except
>>> maybe ours, but we're not even in the US) that is able to stream live
>>> non-adaptive WebM.  Not even Google seems to plan to do it.
>> 
>> Hi Thomas! Thanks for joining in.
>> 
>> I'm confused about what your point is here, other than we should be
>> considering live streaming. You also seem to be saying that chunked
>> streaming is the only way that's going to work...so there's no point
>> in other methods, e.g. virtual chunking for switching bandwidth with
>> not-live streams?
> 
> I think Thomas is referring to the proposal to simply concatenate the
> different bandwidth alternatives into one file to make access and
> loading faster. I don't think Thomas is suggesting that to necessarily
> mean that the files of one bandwidth need to be chunked.
> 
> I actually agree with Thomas - I think we need to keep the different
> bandwidth alternatives as different files. But I would also say that
> if we can avoid chunking within a single bandwidth and can find means
> of switching mid-stream, that would be nicer.
> 
> Right now, the chunking on MPEG is used to allow for live streaming,
> too, because it updates the "file size" regularly. But I don't think
> that's absolutely necessary to enable live streaming over HTTP.

I think the most important point in Thomas's email is the caching part.
All the techniques from the different vendors he mentioned (Adobe, Microsoft,
Apple) use chunked delivery (from the client's perspective, whether or not
the original content is stored as a single file) for that very reason: it's
much easier to cache many small chunked resources than giant video files
(a few hundred KB to a few MB per chunk versus several tens of MB per piece
of content). This is particularly important for long-form footage and live streams.

To maximise cache efficiency and content sharing between users across cache
layers _today_, this implies a "clean", normalized URL scheme for fetching the
different chunks in order (or randomly, when seeking), namely (see the sketch
right after this list):

- no query-string parameters (most caching software treats these as dynamic resources)

- no byte ranges (or at least byte ranges normalized across all vendors, which
  is probably NOT the case with remote probing)

- no funny "per-session" HTTP headers that would make caches consider the
  resource uncacheable, even in the presence of a Vary: header

- <insert HTTP-cache-unfriendly-behavior here>
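
To make this concrete, here is a minimal sketch of one way to satisfy those
constraints (the naming and layout are entirely hypothetical, not any vendor's
actual scheme): every parameter lives in the path, so any plain HTTP cache can
treat each chunk as an independent, static resource.

    # Hypothetical cache-friendly chunk addressing: every parameter is
    # encoded in the path -- no query string, no byte ranges -- so each
    # chunk is a plain, independently cacheable HTTP resource.
    def chunk_url(base, content_id, bitrate, index):
        # e.g. http://cdn.example.com/vod/abc123/500000/00042.webm
        return "%s/vod/%s/%d/%05d.webm" % (base, content_id, bitrate, index)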

For instance, I think what Microsoft did with the Smooth Streaming URL scheme is
quite clever.
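
From memory, a Smooth Streaming client fetches each fragment through a purely
path-based URL, with the bitrate and fragment timestamp baked into the path,
along the lines of:

    http://server/video.ism/QualityLevels(400000)/Fragments(video=610275114)

Every fragment request is an ordinary GET on a unique, stable URL, so
intermediate caches need no special knowledge of the format.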

As a matter of fact, leveraging the different cache layers available _today_ on
the internet (CDNs, ISP transparent caches, browser caches, ...) is probably the
best way to keep the whole thing from "collapsing" (from a financial point of
view; I'm not advocating internet doomsday here ^_^). HTTP chunked delivery
definitely seems the best way to achieve that.

I'm sure you've seen the Netflix vs Comcast, Orange vs Cogent/MegaUpload, or
Google vs French ISPs spats in the news recently; this is exactly what I'm talking
about: let's try to define a scheme that maximizes "natural caching" within
a "dumb" HTTP-caching-aware network, with the "streaming intelligence" living
only in the end-user player and on the origin server.
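
To illustrate where that intelligence could live, here is a rough sketch of a
player-side fetch loop (reusing the hypothetical chunk_url() helper from the
earlier sketch; the bitrate ladder and the switching heuristic are made up for
illustration): everything in between only ever sees ordinary cacheable GETs.

    import time
    import urllib2

    BITRATES = [250000, 500000, 1000000]    # alternatives a manifest would advertise

    def play(base, content_id, chunk_count):
        bitrate = BITRATES[0]               # start conservatively
        for index in range(chunk_count):
            url = chunk_url(base, content_id, bitrate, index)
            start = time.time()
            data = urllib2.urlopen(url).read()
            elapsed = max(time.time() - start, 0.001)
            throughput = len(data) * 8 / elapsed        # bits per second
            # switch to the highest alternative the last download can
            # sustain, keeping a 20% safety margin
            usable = [b for b in BITRATES if b <= throughput * 0.8]
            bitrate = max(usable) if usable else BITRATES[0]
            # ... hand `data` to the demuxer/decoder here ...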

Regards,
Pierre-Yves


