[foms] W3C activities on HTTP adaptive streaming - Inform and ask opinion.

Silvia Pfeiffer silviapfeiffer1 at gmail.com
Wed Dec 1 19:25:58 PST 2010


Hi HJ,

are you saying that the W3C is considering starting a WG for adaptive
HTTP streaming? If that is the case, I would very much think that the
discussion and conclusions that we have come up with here would be
well placed as input into that WG. I believe Jeroen, who has very much
taken the lead in pulling all the information together here, may even
have some very good draft proposals as starting points.

While the idea here was to put a recommendation forward into the
WHATWG, I don't see any reason why that input couldn't also be
provided into a new W3C WG. WHATWG and W3C together have traditionally
figured out the changes to HTML5, so it wouldn't be different here,
IMO.

Cheers,
Silvia.

On Thu, Dec 2, 2010 at 12:51 PM, 이현재 <hj08.lee at lge.com> wrote:
> Dear experts,
>
> I've been watching this thread since it started. I think the discussion
> here is very relevant to a possible HTML5 video tag extension. The W3C
> Web and TV Interest Group, which I chair, is trying to form an official
> WG for adaptive streaming. Because TVs are very resource-scarce, unlike
> PCs or smart phones, which are abundant in memory and CPU, a single
> solution for adaptive streaming is very necessary. I would like to link
> these activities with the possible adaptive streaming WG. Can I hear
> your opinions on my plan?
>
> I discussed with Paul Cotton and other W3C staff that an adaptive
> streaming manifest could easily be linked via a source element under the
> video element, in the form below (the exact syntax may change):
> <video>
>  <source src="movie.xml" type='video/manifest' />
>  <source src="movie.mp4" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"'
> />
>  <source src="movie.webm" type='video/webm; codecs="vp8, vorbis"' />
> </video>
> The browser behavior will be like this: it will decide whether it
> supports adaptive streaming. If it does, it will fetch the manifest file
> and follow the directions inside the file; if it does not, it will fall
> through to the links below to find a supported codec over a normal HTTP
> connection.
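The fall-through behavior above can be sketched as a pure selection function; `pickSource` and the support predicate are illustrative stand-ins for the browser's own source-selection logic (e.g. what canPlayType would report):

```javascript
// Sketch of the <source> fall-through described above: given a list of
// sources and a predicate for type support, pick the first source the
// browser could handle (manifest first, then plain files).
function pickSource(sources, canPlayType) {
  for (const s of sources) {
    if (canPlayType(s.type)) return s.src;
  }
  return null; // no playable source found
}

// Example: a browser that understands WebM but not the manifest type
// falls through past the first two entries.
const sources = [
  { src: "movie.xml",  type: "video/manifest" },
  { src: "movie.mp4",  type: 'video/mp4; codecs="avc1.42E01E, mp4a.40.2"' },
  { src: "movie.webm", type: 'video/webm; codecs="vp8, vorbis"' },
];
const supportsWebmOnly = (t) => t.startsWith("video/webm");
console.log(pickSource(sources, supportsWebmOnly)); // "movie.webm"
```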
>
> What the WG will do is identify the immediate-need use cases and define
> the manifest file format for them.
>
> Best regards,
> HJ
>
>
> -----Original Message-----
> From: foms-bounces at lists.annodex.net [mailto:foms-
> bounces at lists.annodex.net] On Behalf Of foms-request at lists.annodex.net
> Sent: Thursday, December 02, 2010 5:00 AM
> To: foms at lists.annodex.net
> Subject: foms Digest, Vol 50, Issue 1
>
> Send foms mailing list submissions to
>        foms at lists.annodex.net
>
> To subscribe or unsubscribe via the World Wide Web, visit
>        http://lists.annodex.net/cgi-bin/mailman/listinfo/foms
> or, via email, send a message with subject or body 'help' to
>        foms-request at lists.annodex.net
>
> You can reach the person managing the list at
>        foms-owner at lists.annodex.net
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of foms digest..."
>
>
> Today's Topics:
>
>   1. Re: What data is needed for adaptive stream switching?
>      (Mark Watson)
>   2. MPEG documents (Mark Watson)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 30 Nov 2010 14:47:03 -0800
> From: Mark Watson <watsonm at netflix.com>
> Subject: Re: [foms] What data is needed for adaptive stream switching?
> To: Foundations of Open Media Software <foms at lists.annodex.net>
> Message-ID: <611B403B-8DEF-4930-9402-5D14BAA080CF at netflix.com>
> Content-Type: text/plain; charset="us-ascii"
>
>
> On Nov 29, 2010, at 8:05 AM, Frank Galligan wrote:
>
>
>
> On Tue, Nov 23, 2010 at 4:24 PM, Chris Pearce
> <chris at pearce.org.nz<mailto:chris at pearce.org.nz>> wrote:
> Thanks for explaining Mark. Much appreciated.
>
> On 24/11/2010 6:51 a.m., Mark Watson wrote:
>> This is where there is scope for experimentation. What I think would be
> great is to define an API which can indicate these decision points, provide
> the two data sets (past incoming bandwidth) and (future bitrate of each
> stream) at some sufficient level of generality and indicate back the
> decision. Then we can experiment with more and less complex input data and
> more and less complex decision algorithms.
>
> So in terms of what changes to browsers we'd need to make to start
> experimenting, we'd need to resurrect the @bufferedBytes attribute, add
> a @currentOffset attribute, and add some way for JS to access the
> RAP/keyframe index?
> I'm not sure what exposing the RAPs to the JS for each stream buys you
> over just having a simple switch-stream API. Let me back up a bit.
>
> At a high level, I think we want the media pipeline to expose information
> about the current presentation to the player through a script interface.
> Then the player can make decisions about which stream the media pipeline
> should be rendering. The decisions made by the player are its best guess
> at what will keep the media pipeline from stalling. We want to do this
> because trying to define an algorithm that all default players must
> implement is going to be extremely hard. This is my view from reading over
> the posts on this list.
>
> So I don't think a JS player really cares that the media pipeline
> switches to stream N at byte offset X (of course the media pipeline
> does). All the player cares about is that the media pipeline switch to
> stream N as soon as possible. E.g., the player decides that the CPU load
> is too great and wants to switch to a lower resolution as soon as
> possible, seamlessly. Or the player decides that the bandwidth looks high
> enough that it would like to switch to a higher-bandwidth stream as soon
> as possible, seamlessly.
>
> I think having an API like SwitchTo(index, discontinuous) should be good
> enough for now. (I added a discontinuous parameter in case the player
> wanted to switch and didn't care about it being seamless.) Are there any
> reasons we need to expose RAPs to the player?
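A minimal sketch of what that proposed surface could look like; the MediaPipeline class and the onRandomAccessPoint() hook are hypothetical, and a real pipeline would choose the actual byte offset internally:

```javascript
// Hypothetical sketch of the switch-stream API from the post. A seamless
// switch is deferred until the next random access point (RAP); a
// discontinuous one cuts over immediately.
class MediaPipeline {
  constructor(streamCount) {
    this.streamCount = streamCount;
    this.current = 0;    // stream currently being rendered
    this.pending = null; // requested switch, applied at the next RAP
  }

  // index: target stream; discontinuous: true if the player does not
  // care about the switch being seamless.
  switchTo(index, discontinuous) {
    if (index < 0 || index >= this.streamCount) {
      throw new RangeError("bad stream index");
    }
    this.pending = { index, discontinuous };
    if (discontinuous) this.applyPending();
  }

  // Called internally when playback reaches a random access point.
  onRandomAccessPoint() {
    this.applyPending();
  }

  applyPending() {
    if (this.pending) {
      this.current = this.pending.index;
      this.pending = null;
    }
  }
}
```

The point of the sketch is only that the player names a target stream; when and where the splice happens stays the pipeline's business.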
>
> The problem if you make the information exposed to the JS layer too simple
> is that you end up with essentially only one adaptation algorithm (compare
> incoming bandwidth to stream rates) and then there is no scope for
> experimentation.
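That single algorithm is simple enough to sketch in a few lines; the names and the safety factor are illustrative, and bitrates are assumed sorted ascending, in bits per second:

```javascript
// Minimal version of the "compare incoming bandwidth to stream rates"
// algorithm: pick the highest-bitrate stream that fits within a safety
// fraction of the measured bandwidth, falling back to the lowest stream.
function chooseStream(streamBitrates, measuredBandwidthBps, safetyFactor = 0.8) {
  let best = 0; // index 0 assumed to be the lowest-bitrate stream
  for (let i = 0; i < streamBitrates.length; i++) {
    if (streamBitrates[i] <= measuredBandwidthBps * safetyFactor &&
        streamBitrates[i] >= streamBitrates[best]) {
      best = i;
    }
  }
  return best;
}

// 1.5 Mbit/s measured, 0.8 safety margin -> the 1 Mbit/s stream.
console.log(chooseStream([500_000, 1_000_000, 2_000_000], 1_500_000)); // 1
```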
>
> On the other hand, I agree that it could easily be made too complex.
>
> So the question is how "precise" we expect the adaptation algorithms at
> the JS level to be. Exposing the switch points is of interest if you
> expect the JS algorithm to make quite precise decisions based on the
> current amount of received data, the remaining distance to the next
> switch point, and the precise VBR profile of the streams going forward.
>
>
> I can think of a corner case, but I don't think the added complexity to
> the interface and to the player developer is worth it. The case I'm
> thinking of is that the player wants to switch to a lower-bandwidth
> stream, but stream N, which the player wants to switch to, doesn't have a
> RAP for 10 seconds, while stream N-1, which has lower bandwidth than
> stream N, has a RAP 2 seconds out. If people felt strongly about handling
> this case, we could expose an API like SwitchDown(attribute). There would
> then also be an API call to control the heuristics if the media pipeline
> had to switch to another stream with a lower attribute value than the
> desired stream.
>
> I'm not sure it's a corner case. Even if I have switch points every 2s,
> if my bandwidth drops by 50% then it could take as much as 4s to receive
> enough data to get to the next switch point. I probably need to account
> for this, and I may choose an even lower stream rate than otherwise to
> avoid a stall.
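The 2 s / 50% arithmetic above works out as follows (a sketch; the function name and units are illustrative, with all rates in bits per second):

```javascript
// Wall-clock time to download the media between here and the next
// switch point at the current bandwidth.
function timeToNextSwitchPoint(mediaSecondsRemaining, streamBitrate, bandwidthBps) {
  const bitsRemaining = mediaSecondsRemaining * streamBitrate;
  return bitsRemaining / bandwidthBps;
}

// 2 s of media at 1 Mbit/s, with bandwidth halved to 0.5 Mbit/s -> 4 s.
console.log(timeToNextSwitchPoint(2, 1_000_000, 500_000)); // 4
```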
>
>
> Maybe we should add the @bufferedBytes data into
> @buffered, so you can easily map buffered time ranges to byte ranges? I
> guess these would have to be per-stream if we're playing multiple
> independent streams.
>
> Or would you prefer an explicit download bandwidth and a per stream
> bitrate measure, calculated by the browser, over a specified time window?
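One way the browser-calculated, windowed bandwidth measure could look; the class and method names here are hypothetical, not a proposed spec:

```javascript
// Hypothetical sketch of a download-bandwidth measure computed over a
// specified time window. Times are in seconds, chunk sizes in bytes.
class BandwidthEstimator {
  constructor(windowSeconds) {
    this.windowSeconds = windowSeconds;
    this.samples = []; // { time, bytes } per received chunk
  }

  // Record a downloaded chunk and drop samples outside the window.
  record(time, bytes) {
    this.samples.push({ time, bytes });
    const cutoff = time - this.windowSeconds;
    while (this.samples.length && this.samples[0].time < cutoff) {
      this.samples.shift();
    }
  }

  // Average download rate over the window, in bits per second.
  bitsPerSecond() {
    const bytes = this.samples.reduce((sum, s) => sum + s.bytes, 0);
    return (bytes * 8) / this.windowSeconds;
  }
}
```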
>
> In terms of an API which can indicate decision points, maybe an event
> which fires when playback enters a new chunk? Or fires when the download
> of a chunk finishes? Be aware we can't guarantee that any DOM events
> which we fire will arrive in a particularly timely manner, there could
> be any number of other things going on with the event queue.
> This is another reason I think a higher-level API for the player is
> better.
>
>
> Is there a direct mapping between a keyframe index and RAP points? What
> about audio streams in containers which don't index audio? Particularly
> in the case where we're playing multiple independent streams.
> For WebM I wrote a tool that would create an index file for an audio stream.
>
> Frank
>
>
>
> Chris P.
>
> _______________________________________________
> foms mailing list
> foms at lists.annodex.net<mailto:foms at lists.annodex.net>
> http://lists.annodex.net/cgi-bin/mailman/listinfo/foms
>
>
> ------------------------------
>
> Message: 2
> Date: Wed, 1 Dec 2010 11:47:48 -0800
> From: Mark Watson <watsonm at netflix.com>
> Subject: [foms] MPEG documents
> To: Foundations of Open Media Software <foms at lists.annodex.net>
> Message-ID: <084314F9-644C-4478-A126-29BD260A4FE2 at netflix.com>
> Content-Type: text/plain; charset="us-ascii"
>
> All,
>
> It turns out the MPEG documents in the approval process are on an open
> website - I just discovered this.
>
> The DASH Committee Draft is at
> http://www.itscj.ipsj.or.jp/sc29/open/29view/29n11662t.doc
> The draft amendment to the ISO File Format is here:
> http://www.itscj.ipsj.or.jp/sc29/open/29view/29n11682t.doc
>
> The latter contains some new boxes proposed to be used with DASH, for
> example the Segment Index Box and Track Fragment Decode Time box.
>
> Please be careful with these - although the documents passed some
> milestones last month, the word DRAFT still has real meaning here. In
> particular there will definitely be minor changes to the Segment Index Box.
> I'm happy to advise anyone who is interested, though, on the latest status.
> Or if anyone has comments on this work I'm in a position to feed those back
> and influence the process.
>
> It's obvious as soon as you open this document that it contains a lot of
> stuff which is not necessary for initial deployments. I expect there to be
> a process to "profile" this down to a sensible subset.
>
> I understand that in this forum there may be a preference for a simpler,
> bottom-up approach (as discussed in some other threads). What I think is
> most important is to agree first on the abstract data model and supported
> features; then it can be mapped to whatever your favorite manifest and
> file format is.
>
> ...Mark
>
>
>
>
> ------------------------------
>
>
>
> End of foms Digest, Vol 50, Issue 1
> ***********************************
>
> _______________________________________________
> foms mailing list
> foms at lists.annodex.net
> http://lists.annodex.net/cgi-bin/mailman/listinfo/foms
>


More information about the foms mailing list