[theora] CPU saving way to lower quality of Theora stream
fboehm at aon.at
Tue May 17 22:39:55 PDT 2011
On 2011-05-18 01:53, Benjamin M. Schwartz wrote:
> On 05/17/2011 06:08 PM, fboehm wrote:
>> is there a good concept to lower the quality of a Theora stream without
>> completely reencoding it?
> I think the correct answer is "no". You're probably better off just
> completely re-encoding in your proxy.
>> If you think requantization is also the best (or the only) option for
>> Theora please let me know.
> Requantization is a very interesting idea, and if you pursue it I would
> certainly like to see the results. However, I see a few major issues:
> 1. Drift. If you simply reduce the quality of each frame independently,
> the errors will accumulate over time. A typical Theora stream can have
> 256 consecutive P-frames, so in the worst case the accumulated error can
> be 256 times the per-frame error, meaning some parts of the frame might be
> totally wrong colors.
> 2. Scaling. When reducing bitrate, you will usually get the best quality
> by reducing the resolution before re-encoding (scaling down the video).
> Scaling is impossible without a complete re-encode. Scaling saves encoder
> CPU time*, so it might be faster and higher quality than any
> requantization approach.
> 3. Suboptimal quantization. Quantization in modern encoders is not nearly
> as simple as in the classic MPEG encoders of the 1990s. The libtheora
> encoder has had years of work put into its quantizer/tokenizer to make
> psychovisually optimal quantization decisions. A simple requantizer is
> unlikely to do anywhere near as well.
> In summary, you're probably better off just decoding and re-encoding.
> If you're committed to requantization, I wish you luck. You might also
> want to consider a different approach, in which you run the complete
> encoder as usual, but copy the motion vectors and block modes from the
> input. This could save CPU time by eliminating MV search and mode
> decision, and it avoids the problem of drift.
> Incorporating scaling into such a model is harder, but perhaps you can see
> how it might be extended for scaling by a factor of 2.
> *: Scaling costs CPU time in the scaler, but saves CPU time in the
> encoder. These can run in separate threads, so if you have multiple CPUs
> you can run them in parallel, saving wall-clock time.
Very interesting, Ben, thanks a lot.
A somewhat related question on this topic:
Is it possible to modify the encoder to output streams of
different quality (low, mid and high bitrate) simultaneously without
needing three times the CPU time? The three streams should perhaps also
differ in resolution, not only in bitrate.
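To make the question concrete, here is a back-of-the-envelope cost model (my own assumptions, not measurements). Suppose MV search plus mode decision takes a fraction `f` of one encode's CPU time; if the lower-quality encoders could reuse (suitably scaled) motion vectors from the full-resolution pass instead of searching themselves, three streams would cost well under three encodes:

```python
# Illustrative cost model, ignoring that lower-resolution streams are
# also cheaper per frame (which only improves the result).

def relative_cost(f, n_streams=3):
    """Total CPU cost in units of one full encode, when all streams
    after the first skip the MV-search fraction f of the work."""
    full = 1.0                            # one complete encode, search included
    reuse = (n_streams - 1) * (1.0 - f)   # the others skip the search fraction
    return full + reuse

print(relative_cost(0.5))  # 2.0 -- not free, but well under 3x
```

Whether libtheora's encoder can actually share its search results across parallel instances is exactly the open question above.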