[CELT-dev] Error resilience
Gregory Maxwell
gmaxwell at gmail.com
Thu Mar 17 11:14:54 PDT 2011
On Thu, Mar 17, 2011 at 2:05 PM, Riccardo Micci
<riccardo.micci at cambridgeconsultants.com> wrote:
>
> Hi,
> We're testing CELT (version 0.7.1) error resilience capability. We've used already celtdec packet-loss options. Hence we know what to expect in case of whole packet loss.
> How does Celt respond to a broken encoded packet? Is it always better to discard it and decode the missing frame through decode_lost?
> We have the hardware capability to protect the frame with multiple CRCs, so we can roughly estimate how many bits in the frame are wrong (no error correction, though). In the case of a few corrupted bits, is it better to decode the current frame anyway?
The CELT bitstream is designed to be bit error robust — or at least as
robust as we could make it without compromising the compression
significantly.
As part of that effort we concentrated the corruption-sensitive parts
at the front of the frame, so that unequal error protection can be
applied. In general it's better to decode the frame rather than throw
it out, unless the very front of the frame (the first 64 bits or so)
is significantly corrupted.
This graph shows quality (higher is better) in the form of PEAQ ODG as
a function of error position:
https://people.xiph.org/~greg/celt/error.sweep3.png
It also shows lines for the ODGs given by 1:100, 1:1000 packet loss.
ODG isn't a great metric for errored signals, so your results may
vary, but this should give you an idea.