[theora-dev] YUV question

Ralph Giles giles
Fri Jun 4 10:30:47 PDT 2004


> The statement I care to make here is simply that there is no such
> thing as "luminance", which "the eye is more sensitive to", nor
> "chromaticity", which the eye is less sensitive to. What we have here
> is simply packing 8-bit R, G and B values together into an 8-bit Y,
> which actually downsamples them to 7..5 bits, as it leaves only 77,
> 151 and 30 unique values for R, G and B, respectively; then two
> additional 8-bit values are calculated for the values that have
> suffered downsampling the most (namely, B and R) - it is obvious that
> they are necessary to recover the individual R, G and B values from Y.
> Remarkably, it is these values, U and V, that hold the color "details"
> (the least significant bits of B and R, respectively, mixed with the
> same data we already have in Y) - and that is exactly the reason why
> they can be safely "sacrificed" in YUV 4:x:x "color spaces". As for
> why it is like that, things have been said: "YUV color is used in...
> TV broadcasts... Only the Y component of a color TV signal is shown
> on black-and-white TVs". Besides TV standards, nothing stops you from
> using something like Y = R/4 + G/2 + B/4, which would considerably
> speed up the codec, or sticking with good old 12-bit RGB.

Err, well. This is partly a compatibility issue. Black and white television
used a single intensity signal to represent the image. When developing colour
television, they couldn't just send three separate R, G, and B signals, because
that would have looked strange on black and white TVs, so they constructed
an artificial intensity signal and broadcast that as normal, with the colour
difference information sent in two side channels. That way colour TVs had
to reconstruct the RGB signals, but existing black and white TVs could just
display the intensity signal as normal.
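
For concreteness, that artificial intensity signal is just a weighted
sum of the three components, and the two colour-difference channels are
scaled B-Y and R-Y. A minimal sketch using the standard BT.601
full-range coefficients (the function name and the floating-point
arithmetic are just for illustration; real codecs use fixed point):

#include <stdint.h>

/* Sketch of a BT.601 full-range RGB -> YUV conversion: Y is a
 * weighted sum of R, G and B; U and V are scaled B-Y and R-Y. */
static void rgb_to_yuv(uint8_t r, uint8_t g, uint8_t b,
                       uint8_t *y, uint8_t *u, uint8_t *v)
{
    double yf = 0.299 * r + 0.587 * g + 0.114 * b;
    *y = (uint8_t)(yf + 0.5);
    *u = (uint8_t)(0.564 * (b - yf) + 128.5); /* Cb: scaled B - Y */
    *v = (uint8_t)(0.713 * (r - yf) + 128.5); /* Cr: scaled R - Y */
}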

Now, given that they were doing this, they could also take advantage of the fact
that perceptually the colour difference information was less important than
the intensity signal and achieve a kind of analog 'compression' by lowpassing
the colour difference (or 'chroma') signals so they required less bandwidth
than the intensity (or 'luminance') channel. The chroma subsampling of YUV
images common in digital video formats is the digital equivalent of that
lowpass, and functions equally well as a compression step.
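
As a sketch of what that digital subsampling looks like, here's a 2x
decimation of a chroma plane in each direction, box-averaging 2x2
blocks (this assumes even dimensions; a real encoder would use a better
lowpass filter than a plain box average):

#include <stdint.h>

/* 4:2:0-style chroma decimation: each output sample is the rounded
 * average of a 2x2 block, i.e. a crude lowpass followed by dropping
 * three of every four samples. */
static void subsample_420(const uint8_t *src, int width, int height,
                          uint8_t *dst)
{
    for (int y = 0; y < height; y += 2) {
        for (int x = 0; x < width; x += 2) {
            int sum = src[y * width + x] + src[y * width + x + 1]
                    + src[(y + 1) * width + x]
                    + src[(y + 1) * width + x + 1];
            dst[(y / 2) * (width / 2) + x / 2] = (uint8_t)((sum + 2) / 4);
        }
    }
}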

Cameras and displays are all natively RGB, and because of the clamping to
fixed output ranges, RGB and YUV have slightly different gamuts: there are
RGB colours that can't be represented in YUV and YUV colours that can't be
represented in RGB. So some information is lost even without the subsampling.
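
A quick way to see the mismatch is to run the inverse transform: plenty
of valid (Y, U, V) triples come back with an R, G or B component outside
0..255 and get clamped, so they have no exact RGB equivalent. Another
hypothetical sketch, using the inverse of the BT.601 full-range
coefficients above:

#include <stdint.h>
#include <stdio.h>

static uint8_t clamp(double x)
{
    return x < 0 ? 0 : x > 255 ? 255 : (uint8_t)(x + 0.5);
}

static void yuv_to_rgb(uint8_t y, uint8_t u, uint8_t v,
                       uint8_t *r, uint8_t *g, uint8_t *b)
{
    double cb = u - 128.0, cr = v - 128.0;
    *r = clamp(y + 1.402 * cr);
    *g = clamp(y - 0.344 * cb - 0.714 * cr);
    *b = clamp(y + 1.772 * cb);
}

int main(void)
{
    uint8_t r, g, b;
    /* Maximum luma with extreme chroma: G lands well above 255 and is
     * clamped, so this YUV triple is outside the RGB gamut. */
    yuv_to_rgb(255, 0, 0, &r, &g, &b);
    printf("%u %u %u\n", r, g, b);
    return 0;
}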

-r

