[Theora-dev] Re: [ogg-dev] OggYUV
John Koleszar
jkoleszar at on2.com
Tue Nov 8 12:33:57 PST 2005
Timothy B. Terriberry wrote:
>Chapter 4 of the Theora specification does a reasonable job of laying
>out all of the possible parameters for a Y'CbCr-style color space, which
>includes as a subset those needed for RGB. Much more detailed
>information is available from Charles Poynton's Color and Gamma FAQs:
>http://www.poynton.com/Poynton-color.html
>If you wish to do any serious video work, you should at a minimum
>understand these.
>
>
In terms of colorspaces, it seems to me that the only way to completely
describe one is to provide the transform matrices to or from some
reference colorspace. Is this a valid statement?
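
For concreteness, here is a minimal sketch (not from the original mail) of
what "a transform matrix against a reference colorspace" might look like in
C. The matrix shown is full-range BT.601 Y'CbCr to R'G'B' (Kr = 0.299,
Kb = 0.114); studio-swing offsets, chroma siting, and transfer functions are
deliberately left out, so treat it as illustrative rather than normative.

/* Illustrative only: a color conversion expressed as a 3x3 matrix against
 * a reference space.  Full-range BT.601 Y'CbCr -> R'G'B'. */
typedef double mat3[3][3];

static const mat3 bt601_to_rgb = {
    { 1.0,  0.000000,  1.402000 },   /* R' = Y' + 1.402*Cr            */
    { 1.0, -0.344136, -0.714136 },   /* G' = Y' - 0.344*Cb - 0.714*Cr */
    { 1.0,  1.772000,  0.000000 }    /* B' = Y' + 1.772*Cb            */
};

/* Apply a 3x3 matrix to one pixel: in[] = {Y', Cb, Cr}, out[] = {R', G', B'}. */
static void apply_mat3(const mat3 m, const double in[3], double out[3]) {
    int i;
    for (i = 0; i < 3; i++)
        out[i] = m[i][0] * in[0] + m[i][1] * in[1] + m[i][2] * in[2];
}

The point is only that a matrix (plus offsets and a transfer function) pins
the colorspace down relative to the reference, whereas a bare label does not.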
>For a lossless codec, the luxury of a "small number of useful formats"
>may not be advisable. I can't tell you how many times I've had some raw
>data and been completely unable to play it with e.g., mplayer, because
>mplayer did not have an appropriate fourcc. And mplayer has made up many
>of their own non-standard fourcc's (which not even all of mplayer
>support) to cover for the gaping holes left after counting illi's
>supposed "90% of cases on one hand". It is a common but deadly mistake
>to assume that what is important to you is what is important to everyone
>else. Creating a video format system around the fourcc model has always
>struck me as a very, very bad idea.
>
>
Perhaps the answer is a hybrid, then: come up with a structure containing
all the metadata necessary to identify an image's colorspace, sampling
parameters, and storage method, and use fourcc or some other enumeration
as a key into a table that holds default values for all of these
parameters. If no enumerated type is given, the explicitly specified
values would be used instead. Somewhere, someone is going to write down
all the values to fill in for the standard fourcc's anyway, so it might
as well be centralized. This is also more pragmatic, since fourcc already
describes a lot of the data out there, and most of the more "obscure"
metadata has been lost and would have to be invented to fill out the new
structure entirely. Better to keep the invented data in common, IMHO.
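
As a rough illustration of that hybrid (all names and field choices here
are invented, not a proposed standard), the descriptor-plus-defaults-table
could look something like this:

#include <stddef.h>
#include <stdint.h>

typedef struct {
    uint32_t fourcc;          /* 0 if no enumerated shortcut applies    */
    /* colorspace */
    double   kr, kb;          /* luma coefficients, e.g. 0.299/0.114    */
    int      full_range;      /* 1 = full range, 0 = studio swing       */
    /* sampling */
    int      chroma_shift_x;  /* e.g. 1,1 for 4:2:0                     */
    int      chroma_shift_y;
    /* storage */
    int      planar;          /* 1 = planar, 0 = packed                 */
    int      bits_per_sample;
} raw_video_fmt;

/* Defaults for well-known fourcc's; the values here are assumptions. */
static const raw_video_fmt fourcc_defaults[] = {
    /* 'I420': BT.601, studio swing, 4:2:0, planar, 8 bit */
    { 0x30323449, 0.299, 0.114, 0, 1, 1, 1, 8 },
    /* 'UYVY': BT.601, studio swing, 4:2:2, packed, 8 bit */
    { 0x59565955, 0.299, 0.114, 0, 1, 0, 0, 8 },
};

/* Look up defaults for a fourcc; returns NULL if it is not enumerated. */
static const raw_video_fmt *lookup_fourcc(uint32_t fourcc) {
    size_t i;
    for (i = 0; i < sizeof(fourcc_defaults) / sizeof(fourcc_defaults[0]); i++)
        if (fourcc_defaults[i].fourcc == fourcc)
            return &fourcc_defaults[i];
    return NULL;
}

A writer that only knows "I420" stores the key and lets the table supply
the rest; a writer with unusual data fills in the fields explicitly and
leaves the key unset.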
>You'll also note the specification says nothing about packed formats.
>Packed vs. planar is completely orthogonal to the rest of the issues,
>and only arises when storing the raw pixel data directly. Supporting
>either is relatively easy in software with an xstride/ystride for each
>component, so there is hardly a reason not to (Theora doesn't because it
>greatly simplifies several inner loops to be able to assume xstride=1; a
>raw codec should not be as affected by such an issue). And there is
>definitely a reason _to_ support them in a raw format, since switching
>from one to the other is relatively difficult for hardware.
>
>
Agreed. Though it's worth pointing out that it's possible to have
images where the xstride/ystride is not constant between components
(endian issues, UYVY packings, etc.). How to handle interlacing is
another problem, if you're trying to make a super-generic format. A line
has to be drawn somewhere, and it's hard to say where that is.
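
To make the stride point concrete, here is a hedged sketch (invented struct
and function names, not an actual proposal) of per-component xstride/ystride
descriptors for one planar and one packed layout. In the UYVY case the
xstride differs between the luma and chroma components, which is exactly the
non-constant situation above. Width and height are assumed even, and
interlacing is ignored.

typedef struct {
    unsigned char *base;     /* first sample of this component              */
    int            xstride;  /* bytes between horizontally adjacent samples */
    int            ystride;  /* bytes between vertically adjacent samples   */
} component_plane;

/* Planar I420, w x h: contiguous Y plane, then quarter-size Cb and Cr. */
static void describe_i420(unsigned char *buf, int w, int h, component_plane c[3]) {
    c[0].base = buf;                         c[0].xstride = 1; c[0].ystride = w;
    c[1].base = buf + w * h;                 c[1].xstride = 1; c[1].ystride = w / 2;
    c[2].base = buf + w * h + (w / 2) * (h / 2);
                                             c[2].xstride = 1; c[2].ystride = w / 2;
}

/* Packed UYVY, w x h: bytes are U0 Y0 V0 Y1, so Y advances by 2 bytes per
 * sample while Cb and Cr advance by 4. */
static void describe_uyvy(unsigned char *buf, int w, int h, component_plane c[3]) {
    (void)h;
    c[0].base = buf + 1;  c[0].xstride = 2; c[0].ystride = w * 2;  /* Y  */
    c[1].base = buf;      c[1].xstride = 4; c[1].ystride = w * 2;  /* Cb */
    c[2].base = buf + 2;  c[2].xstride = 4; c[2].ystride = w * 2;  /* Cr */
}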