[theora-dev] Theora integration question

Engineering ee at athyriogames.com
Wed Oct 17 17:55:19 PDT 2012


Thanks Timothy (and Ralph and Benjamin)

This should fix my issues.

In the short term, the ref_frame_data and dct_tokens tricks save me about
1GB of RAM, and I can ship my product on time.

In the long term, I'll change the system to intelligently share decoders
when possible.

Thanks again for putting up with my odd questions!
Sam

> -----Original Message-----
> From: theora-dev-bounces at xiph.org [mailto:theora-dev-bounces at xiph.org]
> On Behalf Of Timothy B. Terriberry
> Sent: Tuesday, October 16, 2012 10:33 PM
> To: theora-dev at xiph.org
> Subject: Re: [theora-dev] Theora integration question
> 
> Engineering wrote:
> > I already have a mechanism in place to share video memory amongst
> > movies that are mutually exclusive. Is there anything to watch out
> for
> > by sharing some of theora's internal RAM buffers between movies that
> > will never be decoded simultaneously?
> >
> > I have test code in place so that the largest (1920x1080) movies share
> > buffers for ref_frame_data and dct_tokens.
> >
> > As long as I remember to update the movie at frame 0 before use, I
> > don't see any issues, but am I inviting disaster? Doing this gets me
> > from 1.2GB to 0.2GB of RAM usage.
> 
> This will work fine. The encoder guarantees the first frame is a
> keyframe, so as long as you start playback from frame 0 each time you
> switch movies, ref_frame_data will get filled in correctly (which you
> can verify as Ralph explained).
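> 
> A rough sketch of that check when switching movies (first_op and dec are
> placeholders for your own packet and decoder handles, not libtheora
> names):
> 
>   /* first_op: the first ogg_packet of the clip being switched to. */
>   if (th_packet_iskeyframe(first_op) > 0) {
>     /* The keyframe overwrites the shared ref_frame_data entirely, so
>        nothing stale from the previous movie leaks through. */
>     th_decode_packetin(dec, first_op, NULL);
>   } else {
>     /* Not a keyframe: decoding from here would reference the other
>        movie's frames. */
>   }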
> 
> > A more elegant way might be to concatenate all the full screen movies
> > into one ogv, and then play back only the parts I need, e.g. frames
> > 180-360. My worry there is inter/intra frames. These are a collection
> > of short, disparate clips. Is there an existing way to concatenate
> > ogvs and guarantee each part starts on an intra-frame?
> 
> If the movies are all the same format (i.e., same frame dimensions,
> same Huffman codebooks, and same quantization matrices), you can simply
> feed the frames from any of the streams into a single decoder, without
> bothering to make a new one using the header packets for that stream.
> The easiest way to verify that this will work is to do a byte-by-byte
> comparison of the first and third header packets. If they're the same
> for two streams, then a single decoder can decode either one. This will
> generally be the case if they were encoded with the same encoder
> version.
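> 
> A minimal sketch of that comparison, assuming you have kept the info
> (first) and setup (third) header ogg_packets for both streams around
> (info_a, setup_a, info_b and setup_b are your own variables):
> 
>   #include <string.h>
> 
>   static int same_packet(const ogg_packet *a, const ogg_packet *b){
>     return a->bytes == b->bytes &&
>      memcmp(a->packet, b->packet, (size_t)a->bytes) == 0;
>   }
> 
>   /* If both headers match byte-for-byte, one decoder instance can
>      accept video packets from either stream. */
>   int compatible = same_packet(info_a, info_b) &&
>    same_packet(setup_a, setup_b);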
> 
> This avoids the need to actually concatenate the videos. If you wish
> to anyway, oggCat from the oggvideotools package can do so (again
> as long as the first and third header packets match). That means you
> can encode each one separately (thus ensuring it starts with a
> keyframe) and join them into a single file, like you were asking.
> 
> If you do go the route of just passing in frames from different videos
> to the same decoder, you may need to be careful of the internal
> timestamp calculations: libtheora uses the timestamp provided in the
> ogg_packet you pass in, if one is available, but will extrapolate
> from the most recently seen timestamp otherwise (since Ogg does not, in
> general, include a timestamp on every packet). Since you're generating
> your own ogg_packet buffers, the simplest thing is to just always
> provide a timestamp. However, libtheora actually does _nothing_ with
> the timestamps other than use them to pass back a valid _granpos
> parameter from th_decode_packetin() for every packet. If you're not
> using the value that gets passed back there, you can ignore everything
> I just said completely.
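> 
> In case you do use that value, the flow is roughly this (op, dec and
> my_granpos stand in for your own packet, decoder and bookkeeping):
> 
>   ogg_int64_t gp;
>   /* Set granulepos if you know it; -1 means "unknown" and makes
>      libtheora extrapolate from the last timestamped packet. */
>   op.granulepos = my_granpos;
>   th_decode_packetin(dec, &op, &gp);
>   /* gp now holds a valid granule position for this frame; pass NULL
>      instead of &gp if you never look at it. */
> 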
> _______________________________________________
> theora-dev mailing list
> theora-dev at xiph.org
> http://lists.xiph.org/mailman/listinfo/theora-dev



