[theora-dev] Theora on iPhone
Timothy B. Terriberry
tterribe at xiph.org
Tue Oct 7 21:02:09 PDT 2008
Mihai Balea wrote:
> Conversion from YUV to RGB is probably an order of magnitude less
> intensive than Theora decoding, so if you got that working in realtime
This is not as true as you think. Large portions of the Theora decoder
operate on the compressed data, which requires significantly less memory
bandwidth than that needed to access the full video frame, and memory
bandwidth is often the bottleneck. Even the steps that do operate on
the full frame are organized to do so mostly from cache. While the
Theora API includes a stripe callback function that can let you perform
the YUV->RGB conversion while the freshly decoded YUV data is still in
cache, the RGB data is twice as large, and you still have to get _that_
into SDL somehow. I could easily see it taking up more than 10% of
decoding time if implemented directly in C.
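For reference, a "directly in C" converter boils down to a per-pixel
fixed-point matrix multiply plus clamping. Below is a minimal scalar
sketch of one such pixel conversion; the coefficients are the common
8-bit fixed-point BT.601 approximations, which are an assumption here,
not constants taken from libtheora:

```c
#include <assert.h>

/* Clamp an int into the representable 0..255 range. */
static unsigned char clamp255(int v){
  return (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
}

/* Convert one studio-swing Y'CbCr sample to 8-bit RGB using common
   fixed-point BT.601 coefficients (an illustrative choice, not the
   exact constants any particular player uses).  Note how much integer
   arithmetic and branching this costs per pixel. */
static void ycbcr_to_rgb(int y, int cb, int cr,
                         unsigned char *r, unsigned char *g,
                         unsigned char *b){
  int c = y - 16, d = cb - 128, e = cr - 128;
  *r = clamp255((298*c           + 409*e + 128) >> 8);
  *g = clamp255((298*c - 100*d - 208*e + 128) >> 8);
  *b = clamp255((298*c + 516*d           + 128) >> 8);
}
```

Doing this for every pixel of every frame, with the clamps implemented
as branches, is exactly where the extra ~10% can come from.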
> - Pixel shaders. If you can offload to GPU, then by all means do it.
> I'm not familiar with the PowerVR MBX chip that's in the iphone to
> know whether it has the chops, but I doubt it.
Sadly, AFAIK the iPhone only supports OpenGL ES 1.1. Version 2.0 is
required for pixel shaders.
> You might actually want to profile the last two, I'm not sure that the
> SIMD version would be so much faster...
Straightforward C does relatively poorly with such code. There's lots of
excess overhead due to reading data a byte at a time, promoting all
intermediate calculations to ints, etc.
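That overhead is visible even in a trivial loop. In the hedged sketch
below (illustrative, not libtheora code), every sample is loaded as an
unsigned char, promoted to int, clamped with two compares, and narrowed
back; a NEON or SSE unit would replace the whole body with one
saturating instruction covering 8-16 lanes:

```c
#include <stddef.h>

/* Add a constant bias to each sample with saturation, one byte at a
   time -- the kind of inner loop a straightforward C YUV->RGB
   converter is full of.  Each iteration costs a promotion, an add,
   two compares, and a narrowing store for a single byte of output. */
static void bias_row(unsigned char *dst, const unsigned char *src,
                     size_t n, int bias){
  size_t i;
  for(i = 0; i < n; i++){
    int v = src[i] + bias;  /* unsigned char promoted to int */
    dst[i] = (unsigned char)(v < 0 ? 0 : v > 255 ? 255 : v);
  }
}
```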
Doesn't the iPhone support some adaptation of Quicktime? I would assume
that this is the way one is expected to do video playback on it.