[theora] Indexing Ogg files for faster seeking

Bernard Jungen bjung50169 at euphonynet.be
Thu Jan 21 20:38:33 PST 2010


On Thu, Jan 21, 2010 at 08:43:45PM -0500, Gregory Maxwell wrote:
> On Thu, Jan 21, 2010 at 7:50 PM, Bernard Jungen
> <bjung50169 at euphonynet.be> wrote:
> > On Fri, Jan 22, 2010 at 12:46:46PM +1300, Chris Pearce wrote:
> >> I previously tried compressing the indexes only with zlib (i.e. not
> >> delta-then-variable-byte-encoding them before zlib deflating them), and
> >> that got us about 50% compression.
> >
> > Have you tried delta+deflate, i.e. without variable encoding? And also
> > with each kind of data grouped together?
> 
> Deflate isn't really going to be all that effective for this kind of
> data, no matter how we slice it.

One never knows for sure without trying. Simple reordering and transformation
of data before pushing it into a general-purpose compressor can make all the
difference.

In the case of deflate, which combines LZ77 dictionary matching with Huffman
entropy coding, reordering the data should theoretically improve efficiency:
grouping each kind of field together puts similar values next to each other,
giving the matcher more repetition to exploit within its 32 KB window.
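
For instance, a quick test along these lines (an untested sketch with
made-up offsets, using zlib's one-shot compress(); not the actual index
code) could look like:

/* Untested sketch, not the real index code: delta-transform a few
 * made-up keyframe offsets, deflate both the raw and the delta form
 * with zlib's one-shot compress(), and compare sizes. Build with -lz. */
#include <stdint.h>
#include <stdio.h>
#include <zlib.h>

int main(void)
{
    /* Hypothetical keyframe byte offsets (monotonically increasing). */
    uint64_t offsets[] = { 4096, 91320, 180004, 270112, 358990, 447812 };
    enum { N = sizeof offsets / sizeof offsets[0] };
    uint64_t deltas[N];
    unsigned char raw_out[256], delta_out[256];
    uLongf raw_len = sizeof raw_out, delta_len = sizeof delta_out;
    int i;

    /* Delta transform: store differences instead of absolute values,
     * so similar step sizes become similar byte patterns for deflate. */
    deltas[0] = offsets[0];
    for (i = 1; i < N; i++)
        deltas[i] = offsets[i] - offsets[i - 1];

    if (compress(raw_out, &raw_len, (const Bytef *)offsets,
                 sizeof offsets) != Z_OK ||
        compress(delta_out, &delta_len, (const Bytef *)deltas,
                 sizeof deltas) != Z_OK)
        return 1;

    printf("raw: %lu bytes, delta+deflate: %lu bytes\n",
           (unsigned long)raw_len, (unsigned long)delta_len);
    return 0;
}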

> Perhaps one of us has a really simple adaptive entropy coder that
> could be used. If only we knew people who did compression!

In the end, entropy-coding efficiency will depend on how random the deltas
actually are.
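
One cheap way to gauge that randomness (again just a sketch, with
invented numbers rather than real index data) is a zero-order byte
entropy estimate over the delta stream:

/* Sketch: estimate the zero-order byte entropy of a delta stream.
 * A result close to 8 bits/byte means the bytes look nearly random,
 * so entropy coding will gain little over plain n-bit packing.
 * Build with -lm. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* Invented time deltas, reinterpreted as raw bytes (host order). */
    uint32_t deltas[] = { 87224, 88684, 90108, 88878, 89990, 88410 };
    const unsigned char *bytes = (const unsigned char *)deltas;
    size_t nbytes = sizeof deltas;
    size_t hist[256] = { 0 }, i;
    double h = 0.0;

    for (i = 0; i < nbytes; i++)
        hist[bytes[i]]++;
    for (i = 0; i < 256; i++) {
        if (hist[i]) {
            double p = (double)hist[i] / (double)nbytes;
            h -= p * log2(p);
        }
    }
    printf("estimated entropy: %.2f bits/byte\n", h);
    return 0;
}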

If the deltas are reasonably random, which may well be the case for keyframe
offset deltas, we can simply pack them as n-bit numbers, where n is the
smallest width that holds every value in the series.
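
A minimal sketch of that packing (toy MSB-first bit writer, made-up
deltas; nothing to do with the actual index layout):

/* Sketch: find the smallest n that holds the largest delta, then
 * write each delta as an n-bit field, MSB first. */
#include <stdint.h>
#include <stdio.h>

static unsigned bits_needed(uint64_t v)
{
    unsigned n = 1;
    while (v >>= 1)
        n++;
    return n;
}

int main(void)
{
    uint64_t deltas[] = { 87224, 88684, 90108, 88878 };
    size_t count = sizeof deltas / sizeof deltas[0], i;
    unsigned char buf[64] = { 0 };
    size_t bitpos = 0;
    unsigned n = 1, b;

    /* n is the bit width of the largest value in the series. */
    for (i = 0; i < count; i++) {
        b = bits_needed(deltas[i]);
        if (b > n)
            n = b;
    }

    /* Pack each delta as an n-bit field into the byte buffer. */
    for (i = 0; i < count; i++) {
        for (b = 0; b < n; b++, bitpos++) {
            if ((deltas[i] >> (n - 1 - b)) & 1)
                buf[bitpos >> 3] |= (unsigned char)(0x80u >> (bitpos & 7));
        }
    }
    printf("%u bits per delta, %zu bytes total\n", n, (bitpos + 7) / 8);
    return 0;
}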

Entropy coding *may* be more efficient for time deltas, which *may* contain
significant redundancy, and IMO deflate is a good, easy test for this. A static
coder may also be sufficient (and simpler than an adaptive one) for our data.
There's compression code at http://www.cbloom.com/

Cheers,

Bernard.
-- 
http://home.euphonynet.be/bjung/
GPG: D3ED A92F D243 FC07 1881 BE2E E68A A45D A54A DA90

