[CELT-dev] CELT bit error sensitivity

Gregory Maxwell gmaxwell at gmail.com
Thu Jun 10 20:44:02 PDT 2010


I am aware of a couple of CELT users running the codec across a
bit-error channel, where corruption takes the form of flipped bits
(such as a raw wireless link), rather than a packet-loss channel (like
Ethernet or an IP network).

CELT has been designed to work reasonably well on both kinds of channel.

You can turn a bit-error channel into a packet-loss channel by
including a CRC across the entire frame and discarding any frame that
fails the check, but since CELT was also designed to work acceptably
on a bit-error channel, this will not give you the best performance.

This graph shows the quality of CELT at two error rates (1:100 and
1:1000) as a function of the position where the errors occur, along
with additional lines showing the performance for zero loss and packet
loss at the same two rates.

http://myrandomnode.dyndns.org:8080/~gmaxwell/celt/error.sweep3.png

The lower the line, the worse the quality.

Because of the scale of the graph it looks as though, for some
portions of the frame, the errors are completely harmless. This is an
artefact of the graph scaling: all errored cases sound worse than the
non-errored one, but some sound fairly close to the original when
compared with the utter destruction caused by corrupting the first
few bits.

The PEAQ metric used here is not really designed for measuring highly
corrupted signals, but after listening to some of the cases I believe
it gives a reasonable rough relative measure.  (Each of the BER lines
represents processing almost 4000 hours of audio, so using human
listening tests to generate these sorts of graphs is out of the
question :) )

As you can see from the graph, CELT is designed to put the most
important data at the front of the frame.  In this measurement,
corrupting any of the first ~48 bits is worse than simply discarding
the packet and letting the packet loss concealment handle it, but for
the vast majority of the frame it is better to keep and use the
corrupted data.

As a result, for bit-error channels the CELT developers recommend that
users apply special protection to the initial portion of each packet
(64 bits would be a reasonable round number) while keeping packets
that are corrupted later in the frame.
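
One minimal way to implement that recommendation is to carry a small
check over only the protected head and base the loss/keep decision on
it alone. A sketch, reusing crc16() from the earlier example (the
8-byte region and the hook names are again assumptions):

  #define PROTECTED_BYTES 8  /* the first 64 bits of the frame */

  /* Receiver policy: a frame whose critical head fails its check is
   * treated as lost; a frame whose head checks out is decoded even if
   * later bits may be corrupted. */
  void receive_selective(const uint8_t *frame, size_t len,
                         uint16_t head_crc)
  {
     if (len < PROTECTED_BYTES ||
         crc16(frame, PROTECTED_BYTES) != head_crc)
        conceal_frame();
     else
        decode_frame(frame, len);
  }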

The importance of providing selective protection instead of discarding
whole frames should grow at higher bitrates, since the amount of
critical data does not increase much as the bitrate goes up, and it
should also grow as the error rate increases.

This could be accomplished by applying a CRC to only the initial
portion of a packet or, better, by applying a small error-correcting
code, such as a Reed-Solomon code operating over 16 half-bytes,
providing two half-bytes of overhead in exchange for the ability to
correct all single-bit errors and detect all double-bit errors.
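
A sketch of such a code follows. Assumptions of this example: GF(16)
built on the primitive polynomial x^4 + x + 1, two parity nibbles with
generator roots a and a^2, and the natural Reed-Solomon length limit
of 15 nibbles (so one codeword covers up to 13 data nibbles, i.e. a
52-bit head; reaching the full 16 would take a singly-extended code).
The decoder corrects any single corrupted nibble, which covers all
single-bit errors, and reports inconsistent syndromes as
uncorrectable:

  #include <stdint.h>

  static uint8_t gf_exp[30], gf_log[16];

  /* Build GF(16) log/antilog tables for a = 2; call once at startup. */
  static void gf_init(void)
  {
     uint8_t x = 1;
     for (int i = 0; i < 15; i++) {
        gf_exp[i] = x;
        gf_log[x] = (uint8_t)i;
        x = (uint8_t)(x << 1);
        if (x & 0x10) x ^= 0x13;  /* reduce modulo x^4 + x + 1 */
     }
     for (int i = 15; i < 30; i++) gf_exp[i] = gf_exp[i - 15];
  }

  static uint8_t gf_mul(uint8_t a, uint8_t b)
  {
     return (a && b) ? gf_exp[gf_log[a] + gf_log[b]] : 0;
  }

  static uint8_t gf_div(uint8_t a, uint8_t b) /* b != 0 */
  {
     return a ? gf_exp[(gf_log[a] + 15 - gf_log[b]) % 15] : 0;
  }

  /* Systematic encoder: append two parity nibbles to k data nibbles
   * (k <= 13).  Generator g(x) = (x + a)(x + a^2) = x^2 + 6x + 8. */
  static void rs_encode(const uint8_t *data, int k, uint8_t parity[2])
  {
     uint8_t b1 = 0, b0 = 0;
     for (int i = 0; i < k; i++) {
        uint8_t fb = data[i] ^ b1;
        b1 = b0 ^ gf_mul(fb, 6);
        b0 = gf_mul(fb, 8);
     }
     parity[0] = b1;
     parity[1] = b0;
  }

  /* Decode the n-nibble word data||parity (n = k + 2 <= 15).
   * Returns 0 if clean, 1 if one nibble was corrected, -1 if the
   * syndromes are inconsistent (treat the packet as lost). */
  static int rs_decode(uint8_t *word, int n)
  {
     uint8_t s1 = 0, s2 = 0;
     for (int i = 0; i < n; i++) {
        s1 = gf_mul(s1, 2) ^ word[i];  /* s1 = r(a)   */
        s2 = gf_mul(s2, 4) ^ word[i];  /* s2 = r(a^2) */
     }
     if (!s1 && !s2) return 0;
     if (!s1 || !s2) return -1;
     int pos = gf_log[gf_div(s2, s1)];  /* error locator a^pos */
     if (pos >= n) return -1;
     word[n - 1 - pos] ^= gf_div(s1, gf_exp[pos]);
     return 1;
  }

Covering a 64-bit head (16 data nibbles) with this sketch would take
two such codewords, at a cost of four parity nibbles; whether that
beats a plain CRC depends on the channel's error rate.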

More sophisticated schemes are possible. For example, some modulation
schemes provide sub-symbols with differing error properties (e.g.
slices from trellis-coded modulation, or distinct OFDM carriers), and
various linear block error-correcting codes can be constructed in a
manner that provides unequal protection.   I hope that CELT users will
share interesting error protection solutions with the list.


