[theora-dev] Re: YUV to RGB
shane.stephens at gmail.com
Tue Jan 15 02:52:47 PST 2008
Be aware that the SIMD implementation in oggplay is not perfect - in
particular, it does not correctly do linear interpolation of four chroma
samples per pixel (instead it uses two values per pixel). Strictly
speaking, from memory I think it's MMX rather than SSE (it uses fixed-point
integer arithmetic instead of floating-point arithmetic).
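To make the two points above concrete, here is a rough sketch (not the
oggplay code itself) of the fixed-point BT.601-style YUV-to-RGB math an
MMX routine typically performs, plus the 4-sample bilinear chroma
interpolation the post says oggplay skips. The coefficients are the usual
full-range BT.601 constants scaled by 2**16; function names are illustrative:

```python
def clamp(x):
    """Clamp to the 0..255 byte range, like MMX packed-saturate ops."""
    return max(0, min(255, x))

def yuv_to_rgb_fixed(y, u, v):
    """Full-range BT.601 conversion in 16.16 fixed-point arithmetic."""
    d = u - 128
    e = v - 128
    r = clamp((y * 65536 + 91881 * e) >> 16)              # R = Y + 1.402 V'
    g = clamp((y * 65536 - 22554 * d - 46802 * e) >> 16)  # G = Y - 0.344 U' - 0.714 V'
    b = clamp((y * 65536 + 116130 * d) >> 16)             # B = Y + 1.772 U'
    return r, g, b

def bilinear_chroma(c00, c01, c10, c11, fx, fy):
    """Linearly interpolate one chroma value from its 4 neighbours.

    fx, fy are the fractional offsets of the luma sample within the
    coarser 4:2:0 chroma grid, each in [0, 1).  Using only 2 of the 4
    neighbours (as described above) skips one of these two lerps.
    """
    top = c00 * (1 - fx) + c01 * fx
    bot = c10 * (1 - fx) + c11 * fx
    return int(round(top * (1 - fy) + bot * fy))
```

Greys stay grey (U = V = 128 makes the chroma terms vanish), and saturating
to bytes after the shift mirrors what the packed MMX instructions do in
hardware.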
Fragment shaders vs. vector-based CPU instructions is an interesting
trade-off - on the one hand, fragment shaders free up CPU cycles; on the
other, the vector-based routine is a small fraction of the cost of Theora
decoding (I measure about 11%), and I'd bet more existing Intel-based
machines support MMX than support programmable fragment shaders.
On Jan 15, 2008 6:48 PM, Ralph Giles <giles at xiph.org> wrote:
> On Mon, Jan 14, 2008 at 10:59:39PM -0800, wesley kiriinya wrote:
> > I'm trying to write an OpenGL application that plays Theora frames. SDL
> > is not an option. Is there a function/routine that can do this for me
> > efficiently, or do I have to write my own with a bit of SSE so that the
> > YUV to RGB conversion is fast enough? Also, would it be OK for a Theora
> > stream to be encoded in RGB instead of YUV, or is there already something
> > like this? I'm writing a decoder, not an encoder.
> If you're doing OpenGL, the best thing is to write a fragment shader to
> do the conversion. In most applications the GPU can more easily spare
> the cycles.
> For one implementation on the host side, see:
> There are many others, of various speed and quality.
> I'd eventually like to add to-from RGB routines to the reference
> implementation, but that's post-1.0.
> theora-dev mailing list
> theora-dev at xiph.org
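For the fragment-shader route Ralph suggests, a minimal sketch might look
like the following (GLSL 1.10-era syntax; it assumes the decoder's Y, U and
V planes have each been uploaded as a separate single-channel texture, and
all names are illustrative, not from any particular implementation):

```glsl
// Hypothetical YUV -> RGB fragment shader.  With the three Theora planes
// bound as separate textures, the GPU's bilinear texture filtering gives
// the 4-sample chroma interpolation essentially for free.
uniform sampler2D y_tex;
uniform sampler2D u_tex;
uniform sampler2D v_tex;
varying vec2 tex_coord;

void main()
{
    float y = texture2D(y_tex, tex_coord).r;
    float u = texture2D(u_tex, tex_coord).r - 0.5;
    float v = texture2D(v_tex, tex_coord).r - 0.5;

    // Same BT.601 coefficients as the CPU path, in floating point.
    gl_FragColor = vec4(y + 1.402 * v,
                        y - 0.344 * u - 0.714 * v,
                        y + 1.772 * u,
                        1.0);
}
```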