[Vorbis-dev] Huge VQ codebooks
dbalatoni at programozo.hu
Mon Feb 20 13:15:04 PST 2006
On Monday, 20 February 2006 at 15:12, parul wrote these wise thoughts:
> Does anybody know how codebooks are generated in OggVorbis encoder? We
I don't know, but I suppose the codeword lengths are generated from sample
data that models the distribution of real-world music - I'd like to find
that out myself as well.
> are porting the OggVorbis encoder to an embedded platform, for which the VQ
> codebook memory is huge to imagine. How can we reduce that? Can we do VQ
> with fewer codebooks, and if yes, how? Is any help available?
The codebooks aren't very complicated. Vectors of the different residue
classes represent residue in the range -2^n <= x <= 2^n, where x is an
integer and n = 0, 1, 3, 4, 5, 6, 12 (there may be more at high quality). In
stereo streams there are residue classes for mono residue data (i.e. the two
channels are the same), stereo residue data (the channels can each take any
value in the given range independently; these appear only in the +-2^0 and
+-2^1 ranges), and "limited difference" stereo, where the channel (left or
right) with the larger absolute value lies in the range given above, but the
difference between the two channels is limited to half the total range. Mono
streams are presumably simpler.
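The classification above could be sketched roughly like this - my reading of
the description, not actual libvorbis code; the function name and the exact
bound on the channel difference are my own assumptions:

```python
def classify_pair(left, right, n):
    """Hypothetical sketch: classify a stereo residue pair for the
    range -2**n <= x <= 2**n, per the three classes described above."""
    limit = 2 ** n
    if left == right:
        return "mono"                       # both channels identical
    big = max(abs(left), abs(right))
    diff = abs(left - right)
    total_range = 2 * limit + 1             # values in [-2**n, 2**n]
    if big <= limit and diff <= total_range // 2:
        return "limited difference"         # larger channel in range, diff bounded
    return "stereo"                         # fully independent channels (small n only)
```

So e.g. a pair like (4, 3) with n = 3 would land in the "limited difference"
class under these assumptions, while (8, -8) would not.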
For better efficiency, each vector has a codeword length corresponding to
its probability (so the lengths look more or less random), and this is what
needs to be stored in memory; for more memory-friendly but less efficient
(and slower) compression, these could also be calculated from the values in
the
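The relation between probability and codeword length can be illustrated with
Shannon lengths, len = ceil(-log2(p)) - an illustration of the principle
only, not the method the Vorbis encoder actually uses to build its
codebooks:

```python
import math

def codeword_lengths(probs):
    """Illustrative sketch: assign each symbol a codeword length of
    ceil(-log2(p)) - more probable symbols get shorter codewords,
    which is the property the stored length tables exploit."""
    return [math.ceil(-math.log2(p)) for p in probs]
```

For example, probabilities [0.5, 0.25, 0.25] give lengths [1, 2, 2], while a
rare symbol with p = 0.1 gets a 4-bit codeword.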
I am not sure exactly what you were asking, but what I have written is only
what is obvious after looking at the codebooks of a Vorbis file ;).
> Embedded Engineer
> Einfochips Ltd
What kills me doesn't make me stronger.