[foms] WebM Manifest
Silvia Pfeiffer
silviapfeiffer1 at gmail.com
Sat May 7 01:54:01 PDT 2011
On Sat, May 7, 2011 at 6:40 PM, Pierre-Yves KEREMBELLEC
<pierre-yves.kerembellec at dailymotion.com> wrote:
>>> Exactly. I don't know of any HTTP cache that deals properly with byte ranges and
>>> partial caching (using, for instance, hollow files + bitmaps, like Thomas described)
>>> (this problem is not new, see http://bit.ly/ixdQwo for instance). As pointed out by Thomas,
>>> Varnish may be able to achieve partial caching through the http_range_support directive
>>> (since 2.1.3), but it has to be proven stable.
>>> Unfortunately, this type of cache is more the exception than the norm today.
>
>> At Netflix we make extensive use of byte ranges (allegedly 20% of US Internet traffic at peak times). This is well supported by the major CDNs, which all support byte ranges and partial caching of large files.
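
To make concrete what the caches discussed above have to cope with, here is a minimal sketch of the kind of request an adaptive client issues against a complete file (the URL and byte offsets are placeholders of my own, not anything from the thread). A cache that wants to do partial caching has to store and later re-serve 206 responses for arbitrary, possibly overlapping ranges of the same resource:

import urllib.request

# Hypothetical URL; stands in for any large media file served over plain HTTP.
url = "http://example.com/video.webm"

# Ask for one byte range only, as a client would when seeking or when
# fetching a single fragment out of a complete, non-chunked file.
req = urllib.request.Request(url, headers={"Range": "bytes=1000000-1999999"})

with urllib.request.urlopen(req) as resp:
    # A server/proxy that honours the Range header answers 206 Partial Content
    # with a Content-Range header; a cache doing partial caching must track
    # which byte ranges of the object it already holds.
    print(resp.status)                        # 206 if ranges are supported, 200 otherwise
    print(resp.headers.get("Content-Range"))  # e.g. "bytes 1000000-1999999/31457280"
    print(len(resp.read()))

If the intermediary ignores the Range header it simply returns 200 with the whole resource, which is exactly the fallback behaviour the quoted messages worry about.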
>
> Well, maybe major CDNs support byte-range caching properly (and even that seems to be handled as a special
> case by some CDNs, see http://www.akamai.com/dl/feature_sheets/fs_lfdo.pdf for instance). Anyway, this is definitely
> not the case for most ISPs (transparent proxies) or enterprises today (we are reminded of that fact every day,
> unfortunately). Again, efficient byte-range caching is more the exception than the norm globally (Microsoft
> even recently filed a patent for that: http://www.faqs.org/patents/app/20100318632 ^_^).
>
>> Lack of byte-range support is not the reason chunking is used for live (more on that below). I absolutely agree
>> that solutions need to work with "dumb" HTTP infrastructure, and for me this excludes special media-format-specific
>> capabilities on the origin servers more than it excludes byte ranges, which are part of HTTP/1.1.
>
> I agree to disagree here: the first origin server may implement some dynamic chunking/fragmentation intelligence because
> it's under the content provider's control, and generally backed up by a first level of CDN proxies. It doesn't break the
> "dumb public internet network" rule (from any perspective but the origin's, the chunks are just simple separate documents
> with unique URLs).
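
If I understand that origin-side intelligence correctly, a rough sketch of it might look like the following: the origin keeps only the complete file and synthesizes chunk resources on demand, so everything downstream just sees ordinary documents with their own URLs. This is purely illustrative: the URL scheme, the fixed chunk size and the file name are my own assumptions, and a real implementation would have to cut at WebM cluster/keyframe boundaries (and rewrite headers) rather than at arbitrary byte offsets.

import os
import re
from http.server import BaseHTTPRequestHandler, HTTPServer

SOURCE = "movie.webm"          # the single complete file kept on the origin (assumed name)
CHUNK_SIZE = 1024 * 1024       # 1 MiB per chunk, an arbitrary choice for illustration

class ChunkingHandler(BaseHTTPRequestHandler):
    """Serves /movie/chunk-<n> as a standalone resource cut out of SOURCE.

    Downstream proxies and UAs only ever see plain GETs on distinct URLs,
    so nothing beyond the origin needs byte-range or media-format smarts.
    """

    def do_GET(self):
        m = re.fullmatch(r"/movie/chunk-(\d+)", self.path)
        if not m:
            self.send_error(404)
            return
        start = int(m.group(1)) * CHUNK_SIZE
        if start >= os.path.getsize(SOURCE):
            self.send_error(404)
            return
        with open(SOURCE, "rb") as f:
            f.seek(start)
            body = f.read(CHUNK_SIZE)
        # NOTE: a real chunker would remux so each chunk is independently
        # decodable; slicing raw bytes here is only meant to show the URL mapping.
        self.send_response(200)
        self.send_header("Content-Type", "video/webm")
        self.send_header("Content-Length", str(len(body)))
        self.send_header("Cache-Control", "public, max-age=86400")
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), ChunkingHandler).serve_forever()
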
I'm trying to understand how you see this working. Are you saying that
the first origin server holds only non-chunked, complete files, but runs
a server plugin that creates chunks on the fly for pre-defined URLs as
per the manifest that is given to UAs? That non-first origin servers
distribute only chunked versions, since that is what the general
internet can work with? And that servers in CDNs would also have the
server plugin implemented and run it on non-chunked files to provide
the chunking on the fly? Is that how it's supposed to work?
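
For my own understanding, the client side of that model would then be roughly the following (hypothetical chunk URLs that mirror the server sketch above, not any proposed manifest format): the UA gets a manifest listing plain chunk URLs and fetches them one after the other, with no byte ranges involved anywhere past the origin.

import urllib.request

# Hypothetical list of chunk URLs as a UA might obtain them from a manifest.
chunk_urls = [
    "http://origin.example.com/movie/chunk-0",
    "http://origin.example.com/movie/chunk-1",
    "http://origin.example.com/movie/chunk-2",
]

with open("movie-reassembled.webm", "wb") as out:
    for url in chunk_urls:
        # Each chunk is an ordinary document with its own URL, so any
        # intermediate cache can store it without byte-range support.
        with urllib.request.urlopen(url) as resp:
            out.write(resp.read())
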
Silvia.