<html>
<head>
<meta http-equiv="content-type" content="text/html; charset=iso-8859-2">
</head>
<body text="#000000" bgcolor="#FFFFFF">
HDR and higher bit-depth seem to be coming:<br>
<br>
<a class="moz-txt-link-freetext" href="http://www.dvinfo.net/article/misc/science_n_technology/hpa-tech-retreat-2014-day-4.html">http://www.dvinfo.net/article/misc/science_n_technology/hpa-tech-retreat-2014-day-4.html</a>
section "Better Pixels: Best Bang for the Buck?"<br>
<br>
<a class="moz-txt-link-freetext" href="http://www.dvinfo.net/article/misc/science_n_technology/hpa-tech-retreat-2014-day-1-poynton-watkinson.html">http://www.dvinfo.net/article/misc/science_n_technology/hpa-tech-retreat-2014-day-1-poynton-watkinson.html</a><br>
<ul>
<li>industry seems to use 12-14 bits today, consensus seems to be
at least 12 bits of luma is needed soon even for consumers;
prosumer camcorders (e.g. Sony PXW-X70 - $2000) are doing 10-bit
4:2:2 1080p59.94 today, and anything above $2500-3000 seems to
be 12 bit or above<br>
</li>
<li>it looks like 13 bits would be sufficient with a simple log
      curve; Dolby is proposing 12 bits with their "Perceptual
      Quantization" curve</li>
<li>some armchair thinking (just my pebbles to throw into the
thinking pool):</li>
<ul>
<li>log encoding would have the benefit that traditional color
        difference signals derived from log-encoded RGB components
        would eliminate (white-balanced) intensity changes (e.g.
        shadows, fades/dips to/from black) from the color channels</li>
<li>with intensity decoupled from color:</li>
<ul>
<li>considerably lower color precision could be sufficient (since
          cosine falloff from object curvature, lens vignetting, and
          primary-light shadows would no longer leak into chroma, no
          longer forcing it to have precision comparable to luma to
          avoid banding in the final result)<br>
</li>
</ul>
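A tiny numeric sketch of the decoupling claim above (my illustration, not from the linked articles; the log2 curve, the epsilon, and using log-encoded G as a crude luma proxy are all assumptions):

```python
import math

def log_encode(v, eps=1e-6):
    # simple log curve; eps avoids log(0) in pure black
    return math.log2(v + eps)

def log_color_diffs(r, g, b):
    # color differences on log-encoded components
    # (log-encoded G stands in for luma here - an assumption)
    y = log_encode(g)
    return (log_encode(r) - y, log_encode(b) - y)

# a linear-light pixel, and the same pixel dimmed by 3 stops (gain 1/8)
pixel = (0.40, 0.20, 0.10)
dimmed = tuple(c / 8 for c in pixel)

d1 = log_color_diffs(*pixel)
d2 = log_color_diffs(*dimmed)
# the log-domain differences match: the white-balanced intensity
# change stayed entirely out of the "chroma" signals
print(all(abs(a - b) < 1e-3 for a, b in zip(d1, d2)))  # True
```

The gain turns into a constant additive offset in the log domain, so it cancels in any difference of log-encoded components.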
<ul>
<li>maybe replace the color differences with more perceptual
          metrics: some kind of saturation and hue</li>
<ul>
<li>this could allow still heavier quantization, or even lower
            color precision outright (on the assumption that hue and
            saturation change much less in well-lit, well-exposed,
            real-life scenes)<br>
</li>
<li>think of it as one aspect of reverse Phong shading: a
            shiny sphere in a vacuum under white light only ever has
            its own hue - only intensity and saturation change
            (cosine falloff: towards black; highlight: towards white;
            the hue channel is quasi-constant, flat; the real world
            will be messier - e.g. hue will be pulled away by light
            reflected from surrounding objects - but see below on
            illumination/object-color decomposition)<br>
</li>
</ul>
</ul>
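To illustrate the shiny-sphere intuition, a toy check with Python's stdlib HSV conversion (HSV is just a stand-in here for whatever perceptual saturation/hue metric would actually be chosen - an assumption of this sketch):

```python
import colorsys

# the same surface color sampled at three intensities
# (e.g. points at different cosine falloff on the sphere)
base = (0.8, 0.4, 0.2)
hsv = {}
for gain in (1.0, 0.5, 0.25):
    r, g, b = (c * gain for c in base)
    hsv[gain] = colorsys.rgb_to_hsv(r, g, b)

# hue and saturation stay flat; only value tracks the gain,
# so the hue channel of such a scene would be cheap to code
hues = [h for h, s, v in hsv.values()]
sats = [s for h, s, v in hsv.values()]
print(max(hues) - min(hues), max(sats) - min(sats))
```

Both printed spreads are (numerically) zero: under pure intensity changes the hue and saturation channels carry no signal at all.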
<li>once chroma/color precision is lowered, it might make sense
        to go 4:4:4 all the time and simply not bother with chroma
        down/upsampling at all</li>
<li>open the discussion on scene decomposition: e.g. separately
        coding albedo (object reflectance) and illuminance</li>
<ul>
<li>the first step could be a separate
          illuminance/intensity/gain channel that factors
          (multiplication in linear light = addition in log light)
          into the final intensity of the output pixels</li>
<li>encoders unwilling to use this can leave the channel at a
          neutral 0 dB / 0 EV / 1x<br>
</li>
<li>simplistic encoders could benefit:</li>
<ul>
<li>dips to/from black could preserve full color in main
channels, and only adjust this channel</li>
<li>crossfades could ramp this channel up/down while
            referencing the main channels of the frames at the two
            ends of the crossfade (conceptually, weighted prediction
            in linear light)<br>
</li>
</ul>
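A minimal sketch of the gain channel handling a dip to black, assuming a single scalar per-frame gain (the simplest possible form of such a channel), and showing the linear-multiply = log-add equivalence mentioned above:

```python
import math

frame = [0.9, 0.5, 0.1]   # linear-light samples of the reference frame
gain = 0.25               # -2 EV dip, carried in the gain channel only;
                          # the main channels are predicted unchanged

linear = [s * gain for s in frame]                        # multiply in linear light
logged = [math.log2(s) + math.log2(gain) for s in frame]  # add in log light

# both routes land on the same output pixels
ok = all(abs(2 ** lg - ln) < 1e-9 for lg, ln in zip(logged, linear))
print(ok)  # True
```

A crossfade is the same idea with two references: ramp the gain of one down and the other up, summing in linear light.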
<li>advanced encoders: separately coding high-amplitude scene
        illuminance variations and lower-amplitude object
        reflectance/texture might provide coding gains, especially
        for HDR<br>
</li>
<ul>
<li>scene illuminance: higher amplitude but less detail
            (mostly "broad strokes" - different statistics than the
            main channels)<br>
</li>
<li>object reflectance/texture (the main channels): smaller
            amplitude but more detail</li>
<li>separate prediction/motion compensation for these two<br>
</li>
<li>ideally, scene illuminance should carry color as well, to
            predict colored lighting (e.g. illuminance from a single
            off-white light source or from multiple light sources)</li>
<li>use it as a hint for HDR tonemapping tools (see still
            photography research)<br>
</li>
</ul>
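As a toy version of this split, a Retinex-style decomposition (my illustration of one classic approach, not something the articles propose): low-pass the signal in the log domain to estimate the smooth, high-amplitude illuminance, and keep the residual as the detailed, low-amplitude reflectance/texture. The 1-D signal, the 3-stop sweep, and the moving-average filter are all assumptions of this sketch:

```python
import math

# 1-D toy scene: a smooth illumination ramp times fine reflectance texture
n = 32
illum = [2 ** (3 * i / (n - 1)) for i in range(n)]   # 3-stop sweep, smooth
texture = [1.0 + 0.1 * (-1) ** i for i in range(n)]  # small, detailed
signal = [l * r for l, r in zip(illum, texture)]

log_sig = [math.log2(v) for v in signal]

def smooth(xs, radius=4):
    # crude moving-average low-pass (stand-in for a real filter)
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - radius), min(len(xs), i + radius + 1)
        out.append(sum(xs[lo:hi]) / (hi - lo))
    return out

est_illum = smooth(log_sig)                    # "broad strokes", high amplitude
est_texture = [s - e for s, e in zip(log_sig, est_illum)]  # detail, low amplitude

# the texture residual is small next to the ~3-stop illuminance sweep,
# which is the statistical separation the coding-gain argument relies on
print(max(abs(t) for t in est_texture) < 1.0)  # True
```

The two estimated channels have exactly the contrasting statistics listed above, so they could plausibly be predicted and quantized on different terms.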
<li>the next step could be to add a highlight layer (somewhat
          like specular highlights in reverse Phong shading -
          gradually blending the area around the highlight position
          into some color of the light source, whether via transform
          coding or some kind of shape-based parametric modelling);
          machine color vision research exists in these
          directions<br>
</li>
<li>it doesn't need to be perfect (it's just prediction, after
          all) or even cover many cases - just go for some
          low-hanging fruit, enough to spark industry
          discussion/experimentation<br>
</li>
</ul>
</ul>
</ul>
</body>
</html>