[foms] WebM Manifest

Thomas Vander Stichele thomas at apestaart.org
Thu May 5 02:18:05 PDT 2011


Hi everyone,

(Pierre-Yves, you summarized my point way better than I did, thanks)

> > - no byte-ranging (or at least normalized byte-ranges between all vendors, which
> >  probably is NOT the case with remote probing)
> 
> 
> What is the problem with byte-ranging?

Caching servers deployed today simply don't do byte-range caching well,
or at all.  We all know that it *should* be possible to create a large
file of zeroes, fill it in with received byte ranges, and track which
ranges you've already seen.  But very few caching servers do.  I think
squid only does it for ranges starting from 0.  I think varnish can do
it, but varnish is definitely not widely deployed in CDNs today.  The
reality today is that byte-range requests are not properly cached, and
there has been no pressing need to fix that, either.  Requiring it for
WebM adaptive is going to hurt WebM more than it hurts CDNs.
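To make the caching problem concrete, here is a minimal sketch (mine, not from the post; `origin_get` and the URLs are made up) of why a cache keyed only on the URL, which is the common proxy model, gets little or no benefit from byte-range traffic:

```python
# Sketch: a naive URL-keyed cache, the model most 2011-era proxies use.
# Full-file GETs cache cleanly; Range requests either bypass the cache
# entirely or only get lucky when the full body happens to be cached.

cache = {}  # url -> full response body


def origin_get(url, byte_range=None):
    # Stand-in for the origin server (hypothetical helper for this sketch).
    body = b"0123456789" * 10
    if byte_range is not None:
        start, end = byte_range
        return body[start:end + 1]
    return body


def fetch(url, byte_range=None):
    """byte_range is an inclusive (start, end) pair, as in an HTTP Range header."""
    if byte_range is None:
        if url not in cache:
            cache[url] = origin_get(url)      # cacheable: one key, one body
        return cache[url]
    start, end = byte_range
    if url in cache:
        # Lucky case: someone already fetched the whole file.
        return cache[url][start:end + 1]
    # Common case: the cache cannot assemble partial ranges, so every
    # range request goes to origin and nothing gets stored.
    return origin_get(url, byte_range)
```

A cache that actually merged received ranges into a sparse file would fix this, but as noted above, almost none of the deployed ones do.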

> > I'm sure you've seen the recent NetFlix vs Comcast, Orange vs Cogent/MegaUpload,
> > or Google vs French ISPs bs in the news recently, this is exactly what I'm talking
> > about: let's try to define a scheme that would maximize "natural caching" within
> > a "dumb" HTTP-caching-aware network, with "streaming intelligence" happening on
> > end-user player and origin server sides only.
> 
> I agree that we should not rely on any intelligence in the network.
> 
> However, we also cannot expect intelligent servers. We have to deal
> with what standard HTTP servers allow - at most we can assume byte
> range request support. So, in essence, all intelligence needs to be in
> the player. And for players to do "chunking", I cannot see a way
> around byte range requests. If you do, please share.

Because, if I follow correctly, you are not considering actually storing
chunked files on the server, which is exactly how Microsoft/Adobe/Apple
do adaptive bandwidth.  For some reason this group sees chunked files
at the source as a huge problem.  Store chunked files upstream and you
don't need byte ranges at all.
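With pre-chunked files, all the "streaming intelligence" sits in the player: it just issues ordinary GETs for whole URLs, which every HTTP cache already handles.  A rough sketch (the URL scheme and rate-picking rule are my own invention, not any vendor's):

```python
# Sketch: a player fetching pre-chunked files.  Each request is a plain
# GET for a complete, fixed URL -- fully cacheable by any dumb HTTP cache.

def chunk_url(base, bitrate_kbps, index):
    # Hypothetical layout, e.g. http://cdn.example.com/video/500k/chunk-0007.webm
    return f"{base}/{bitrate_kbps}k/chunk-{index:04d}.webm"


def next_request(base, measured_kbps, ladder, index):
    """Pick the highest bitrate at or below measured throughput (simplified)."""
    rate = max((r for r in ladder if r <= measured_kbps), default=min(ladder))
    return chunk_url(base, rate, index)
```

The point is that the switching logic never needs a Range header; switching bitrates just means switching which URL you ask for next.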


I'm not saying it's the best way of doing things; I'm saying the market
has already decided.  The problems you see with storing lots of small
chunks are already there, and whatever WebM chooses isn't going to solve
that problem for CDNs.  In fact, serving lots of small files is what
CDNs do best, and it's what they've asked for from the big vendors.

We can choose to go the 'this is the best solution according to us' way,
which will take us a few years and probably won't see any market uptake
at all, or we can go the pragmatic 'this approach is guaranteed to work
on the internet because it aligns with all the other technologies' way.

There are some interesting problems we want to solve that can be solved
entirely within a chunked approach - audio track switching, codec
switching, and so on.

But *starting* today to design a multibitrate approach for a minority
codec, in a marketplace that already has the infrastructure rolled out
for doing multibitrate with all the other codecs, seems crazy to me.  At
least in the last wave of streaming servers ten years ago, Icecast and
Vorbis started from a more level playing field, where all streaming
servers still needed to be integrated into CDNs.

Thomas



> 
> Cheers,
> Silvia.
> _______________________________________________
> foms mailing list
> foms at lists.annodex.net
> http://lists.annodex.net/cgi-bin/mailman/listinfo/foms

-- 
Be everything to me tonight
--
URGent, best radio on the net - 24/7 !
http://urgent.fm/



