[vorbis-dev] ogg pic format (again).. here's why

Lourens Veen jsr at dds.nl
Fri Jan 26 14:13:54 PST 2001

Gerry wrote:
> I sent a little mail some time ago asking if there was going to be an ogg
> pic-format, and you replied that PNG, MNG and JNG is good enough (sorry for
> this late answer btw).. But, consider this: The ogg video-format (tarkin ?
> where do you get these names from anyway ? :) ) needs a way to compress its
> frames. Are you going to use MNG for that ? :) ..

Well, have you ever tried saving a movie in MNG format? I doubt it would
compress very well compared to, for example, MPEG. I don't know the
details of MNG, but if it's lossless like PNG then compression won't be
good. And there's no need for it to be lossless, since our brains only
need a small amount of the information present in the stream. The thing
is, if you store a bunch of independent images you're not exploiting
inter-frame coherency, and that is a real compression-killer.
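That inter-frame coherency point is easy to demonstrate even with a
general-purpose compressor standing in for a real codec: encoding the
difference between two similar frames is much cheaper than encoding both
frames independently. A minimal sketch with synthetic frames and zlib
(not how any real video codec works, just the principle):

```python
import zlib

# Two hypothetical 64x64 greyscale "frames"; the second is a slightly
# brightened copy of the first, as adjacent movie frames tend to be similar.
frame_a = bytes((x * 3 + y * 5) % 256 for y in range(64) for x in range(64))
frame_b = bytes((x * 3 + y * 5 + 7) % 256 for y in range(64) for x in range(64))

# Cost of encoding each frame independently (the "bunch of images" case):
independent = len(zlib.compress(frame_a)) + len(zlib.compress(frame_b))

# Cost of encoding frame_a plus only the per-pixel difference to frame_b:
delta = bytes((b - a) % 256 for a, b in zip(frame_a, frame_b))
with_delta = len(zlib.compress(frame_a)) + len(zlib.compress(delta))

print(independent, with_delta)  # the delta variant is much smaller
```

A real codec does far better still, by predicting motion between frames
instead of assuming the pixels line up.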

> If you had an ogg pic format, that format could not only be used for normal
> pictures, but also for animations and movies! Both pictures, animations and
> "real" movies (should) have this in common: compress it to as good quality
> and small size as possible. Why not make all in one ? This format could also
> support some cool stuff that PNG doesn't, like layers. Actually, layers
> could simply be a special kind of animation, where all the frames are put on
> top of each other at once. Layered animations would then be animated
> animations :) . In animations, layers could even be shared for different
> frames! For movies this layer function could be used for fx. translation of
> text in the picture (normal subtitling should of course be text with
> timecodes).

Well, it's all the same really. A movie is a 3D block of information,
and so is a layered image. A layered movie is 4-dimensional. The
difference is just a couple of flags to specify what is what, and
possibly some optimisations in the encoder.
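To make the dimensionality argument concrete, treat each axis (frames,
layers, rows, columns) as one dimension of the block. A sketch using
shape tuples (the sizes are made up):

```python
# Axis lengths for a hypothetical clip.
frames, layers, height, width = 8, 3, 48, 64

# Each shape tuple lists the axes of the data block;
# the number of axes is its dimensionality.
image         = (height, width)                  # 2 axes: a flat picture
movie         = (frames, height, width)          # 3 axes: a 3D block
layered_image = (layers, height, width)          # 3 axes: also a 3D block
layered_movie = (frames, layers, height, width)  # 4 axes: 4D

print(len(movie), len(layered_image), len(layered_movie))  # 3 3 4
```

A movie and a layered image really are the same kind of object to an
n-dimensional encoder; only the interpretation of the third axis differs.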

The currently available early experimental tarkin sources use an
n-dimensional wavelet transform (currently 3D only, but the code is
generic) and vector quantisation on the wavelet coefficients, which
works well. I'm attempting to do cross- and inter-frame fractal
compression, using arbitrary triangles plus edge-detection and matching
logic to speed up the process. So far it's not working, and since I need
to study I don't have much time to hack. But I should be able to work on
it again in a week or two, so I'll have another go at it then. Still,
video coding is a very complex thing, and I have a feeling that existing
theories and algorithms are nothing compared to what's possible.
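For reference, here is a minimal sketch of the wavelet idea the tarkin
sources build on. The real code does a proper n-dimensional transform;
this is just one level of the 1D Haar transform, the simplest member of
the family:

```python
def haar_step(signal):
    """One level of the 1D Haar transform.

    Splits the signal into pairwise averages (the low band) and pairwise
    differences (the high band). Smooth input turns into near-zero detail
    coefficients, which is what makes the coefficients cheap to quantise.
    """
    avg = [(a + b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    diff = [(a - b) / 2 for a, b in zip(signal[0::2], signal[1::2])]
    return avg, diff

avg, diff = haar_step([10, 12, 14, 14, 200, 202, 40, 42])
print(avg)   # [11.0, 14.0, 201.0, 41.0]
print(diff)  # [-1.0, 0.0, -1.0, -1.0]
```

An n-dimensional transform just applies a step like this along every axis
in turn (rows, columns, frames, ...); the resulting mostly-small
coefficients are then grouped into vectors and quantised.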

> So, what do you think ? :)

I think that, depending on the algorithms used to compress video, it may
still be useful to have a separate single-image compression format. On
the other hand, if the images are all different, the encoder will detect
them all as base frames and encode them more or less independently.


--- >8 ----
List archives:  http://www.xiph.org/archives/
Ogg project homepage: http://www.xiph.org/ogg/
To unsubscribe from this list, send a message to 'vorbis-dev-request at xiph.org'
containing only the word 'unsubscribe' in the body.  No subject is needed.
Unsubscribe messages sent to the list will be ignored/filtered.
