[ogg-dev] Ogg/Kate preliminary documentation
silviapfeiffer1 at gmail.com
Mon Feb 11 05:21:02 PST 2008
On Feb 11, 2008 9:27 PM, ogg.k.ogg.k at googlemail.com <ogg.k.ogg.k at googlemail.com> wrote:
> > Right. This was, in fact, one of the roles of "chaining" where you'd
> > mark such changed components with a chain boundary, at which such
> > things are explicitly allowed to change. The drawbacks are the
> > overhead of resending all the setup data for configurable codecs like
> > vorbis and theora, and the semantic conflict between 'chain boundary
> > flags an edit point' and 'chain boundary flags a program change' which
> This also means that having to chain a particular logical stream implies
> having to break and rechain all other multiplexed streams. For, say,
> Theora (just imagining here, I don't know if that'd actually be the case),
> it could mean having to reencode a keyframe on the fly for the first frame
> of the new chain, or go without video for whatever time is left before the
> next keyframe (I've got no real idea how much time typically elapses
> between keyframes, but I believe it is variable?)
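For what it's worth, the worst-case wait is bounded: Theora's granulepos scheme fixes a maximum keyframe spacing of 2^keyframe_granule_shift frames, with the shift set once in the stream's info header (a common value is 6). A quick sketch of the arithmetic, assuming those figures:

```python
def worst_case_keyframe_gap(granule_shift, fps):
    """Maximum frames (and seconds) between keyframes in a Theora
    stream whose info header carries the given granule shift."""
    max_spacing = 1 << granule_shift  # spacing cannot exceed 2**shift frames
    return max_spacing, max_spacing / fps

frames, seconds = worst_case_keyframe_gap(granule_shift=6, fps=25.0)
# 64 frames, i.e. 2.56 seconds of potentially keyframe-less video at 25 fps
```

So "go without video until the next keyframe" could mean a gap of a couple of seconds in a typical encode, though an encoder is free to place keyframes more often.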
> > There are certainly arguments for doing it both ways, but from the
> > Annodex point of view it is nice to push as much of that onto the
> > mux/skeleton level as possible, for all the reasons Silvia described.
> > Do you have a counter illustration of where adding a new category
> > suddenly, on the fly is contra-compelling?
> No particular reason, just the fact that it constrains possible uses of
> the codec, especially for on the fly generation.
> I could certainly make up an example where one streams a video of people
> in an office, and labels are placed near each person, following them around;
> but this is just a possible use I made up, not something I actually expect
> to be done.
> Not that the kate format currently supports moving regions around in
> realtime well anyway, but that's something I'm thinking about currently.
In actual fact, you do not have to fill your logical bitstream with data,
even if you prepare for it by sending a bos packet. So, when live streaming,
we would normally know beforehand what resources we are sending - which sets
up the different logical bitstreams. Then, as we get to them, we can put them
in the stream. I don't really see a problem with this approach.
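To make the idea concrete, here is a minimal sketch of that pattern. The class and method names are illustrative only (this is not the real libogg API): every logical bitstream is announced with a bos page when the physical stream starts, but may carry no data packets until the corresponding resource actually becomes available.

```python
# Conceptual sketch, not real libogg: declare all logical bitstreams
# up front via bos, then feed data into each one lazily.
class LogicalStream:
    def __init__(self, serialno, category):
        self.serialno = serialno
        self.category = category
        self.packets = []               # stays empty until data arrives

class Mux:
    def __init__(self):
        self.streams = {}
        self.pages = []                 # ordered pages of the physical stream

    def declare(self, serialno, category):
        """Emit a bos page at stream start, even with no data to send yet."""
        s = LogicalStream(serialno, category)
        self.streams[serialno] = s
        self.pages.append(("bos", serialno, category))
        return s

    def feed(self, serialno, packet):
        """When the resource becomes available, put its data in the stream."""
        s = self.streams[serialno]      # must have been declared at bos time
        s.packets.append(packet)
        self.pages.append(("data", serialno, packet))

mux = Mux()
mux.declare(1, "video/theora")
mux.declare(2, "text/kate")            # kate stream announced, idle for now
mux.feed(1, b"frame-0")                # video data flows immediately
# ... later, the first kate event arrives and slots into the existing stream:
mux.feed(2, b"label: person A")
```

Because the kate stream was declared in the bos section, no chain boundary (and no re-sent Theora headers, no forced keyframe) is needed when its first packet finally shows up.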