[theora-dev] Theora integration question

Timothy B. Terriberry tterribe at xiph.org
Tue Oct 16 20:32:40 PDT 2012


Engineering wrote:
> I already have a mechanism in place to share video memory amongst movies
> that are mutually exclusive. Is there anything to watch out for by sharing
> some of theora's internal RAM buffers between movies that will never be
> decoded simultaneously?
>
> I have test code in place so that the largest (1920x1080) movies share
> buffers for ref_frame_data and dct_tokens.
>
> As long as I remember to update the movie at frame 0 before use, I don't see
> any issues, but am I inviting disaster? Doing this gets me from 1.2GB to
> 0.2GB of RAM usage.

This will work fine. The encoder guarantees the first frame is a 
keyframe, so as long as you start playback from frame 0 each time you 
switch movies, ref_frame_data will get filled in correctly (which you 
can verify as Ralph explained).
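
If you want an extra sanity check in your own code when switching 
movies, the public API can tell you whether a packet is a keyframe 
before you feed it in. A minimal sketch, assuming op is the ogg_packet 
for the first frame of the movie you're switching to:

  #include <theora/theoradec.h>

  /*Refuse to start a newly selected movie on anything but a keyframe,
    so the shared ref_frame_data gets fully overwritten rather than
    reusing stale data from the previous movie.*/
  if(th_packet_iskeyframe(&op)<=0){
    /*Not a keyframe (or not a video data packet): don't start here.*/
  }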

> A more elegant way might be to concatenate all the full screen movies into
> one ogv, and then playback which parts I need, i.e. frames 180-360. My worry
> there is inter/intra frames. These are a collection of short, disparate
> clips. Is there an existing way to concatenate ogvs and guarantee each part
> starts on an intra-frame?

If the movies are all the same format (i.e., same frame dimensions, same 
Huffman codebooks, and same quantization matrices), you can simply feed 
the frames from any of the streams into a single decoder, without 
bothering to make a new one using the header packets for that stream. 
The easiest way to verify that this will work is to do a byte-by-byte 
comparison of the first and third header packets. If they're the same 
for two streams, then a single decoder can decode either one. This will 
generally be the case if they were encoded with the same encoder version.
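
A sketch of that check in C, assuming you've already pulled the three 
header packets of each stream out of their Ogg pages (the demuxing 
plumbing is omitted, and headers_match is just an illustrative name):

  #include <string.h>
  #include <ogg/ogg.h>

  /*Returns nonzero if the info (first) and setup (third) headers of
    two streams are byte-identical, i.e. a decoder initialized from one
    stream's headers can also decode the other stream's frames. The
    comment header (packet 1) doesn't matter, so it's skipped.*/
  static int headers_match(const ogg_packet a[3],const ogg_packet b[3]){
    int i;
    for(i=0;i<3;i+=2){
      if(a[i].bytes!=b[i].bytes||
       memcmp(a[i].packet,b[i].packet,a[i].bytes)!=0)return 0;
    }
    return 1;
  }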

This avoids the need to actually concatenate the videos. If you wish to, 
anyway, oggCat from the oggvideotools package can do so (again as 
long as the first and third header packets match). That means you can 
encode each one separately (thus ensuring it starts with a keyframe) and 
join them into a single file, like you were asking.
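
If memory serves, the invocation looks something like the following 
(double-check oggCat's own usage output for the exact argument order; 
the file names here are made up):

  oggCat fullscreen_all.ogv clip1.ogv clip2.ogv clip3.ogv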

If you do go the route of just passing in frames from different videos 
to the same decoder, you may need to be careful of the internal 
timestamp calculations: libtheora uses the one provided in the 
ogg_packet buffer you pass in when it's available, but will extrapolate 
from the most recently seen timestamp otherwise (since Ogg does not, in 
general, include a timestamp on every packet). Since you're generating 
your own ogg_packet buffers, the simplest thing is to just always 
provide a timestamp. However, libtheora actually does _nothing_ with the 
timestamps other than use them to pass back a valid _granpos parameter 
from th_decode_packetin() for every packet. If you're not using the 
value that gets passed back there, you can ignore everything I just said 
completely.
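
For completeness, here's a sketch of that packet-in path with an 
explicit granule position, assuming dec is a th_dec_ctx set up from one 
of the matching streams' headers and op is an ogg_packet you filled in 
yourself (decode_one_frame and the granulepos parameter are just 
illustrative names):

  #include <theora/theoradec.h>

  /*Feed one video packet to an existing decoder. Setting op->granulepos
    yourself keeps the value reported back through granpos_out sensible
    when splicing in packets from a different stream; if you never read
    granpos_out, you can leave it at -1.*/
  static int decode_one_frame(th_dec_ctx *dec,ogg_packet *op,
   ogg_int64_t granulepos){
    ogg_int64_t granpos_out;
    int ret;
    op->granulepos=granulepos;
    ret=th_decode_packetin(dec,op,&granpos_out);
    if(ret<0)return ret;    /*Decode error.*/
    if(ret!=TH_DUPFRAME){   /*Duplicate frames just repeat the last image.*/
      th_ycbcr_buffer ycbcr;
      th_decode_ycbcr_out(dec,ycbcr);
      /*...hand the decoded planes to the renderer...*/
    }
    return 0;
  }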

