[Theora] FPGA implementation in the camera - IDCT precision
Andrey Filippov
theora at elphel.com
Tue Oct 19 00:17:51 PDT 2004
I understand that the IDCT algorithms in both the encoder and the decoder
should produce exactly the same results. What I wonder about is: why is the
precision of the sine/cosine coefficients selected so high? Isn't it
overkill for 8-bit pixel values?
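As one way to get a feel for this, here is a minimal Python sketch (my own
experiment, not anything from the Theora code base) that quantizes an
orthonormal 8-point DCT basis to a given number of fractional bits and
measures the worst-case reconstruction error on random 8-bit data:

    import numpy as np

    N = 8

    def dct_basis(n=N):
        # Orthonormal DCT-II basis matrix: coeffs = C @ x, x = C.T @ coeffs
        k = np.arange(n).reshape(-1, 1)
        i = np.arange(n).reshape(1, -1)
        C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
        C[0, :] /= np.sqrt(2.0)
        return C

    def quantize(M, frac_bits):
        # Round each coefficient to frac_bits fractional bits (fixed point)
        scale = 1 << frac_bits
        return np.round(M * scale) / scale

    C = dct_basis()
    x = np.random.randint(0, 256, size=(N, 1000)).astype(float)  # 8-bit pixels
    coeffs = C @ x                            # full-precision forward DCT
    for bits in range(4, 17):
        Cq = quantize(C, bits)
        err = np.abs(Cq.T @ coeffs - x).max() # IDCT with quantized basis
        print(f"{bits:2d} fractional bits -> max pixel error {err:.4f}")

Sweeping the coefficient width like this makes it easy to see at what point
the extra bits stop mattering for 8-bit output.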
Of course there are no problems in a software decoder, where the CPU is at
least 32 bits wide, but each extra bit is a significant load in an FPGA
implementation of the encoder. The DCT I currently use for JPEG encoding
starts with just 9 bits on the input, and the register width increases to
12 bits by the last stage. Has anybody ever experimented with the precision
that is really needed? I understand that it is too late to change it now,
but I'm still curious where the values came from.
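For what it's worth, the 9-to-12-bit growth I mentioned matches a simple
worst-case argument: each butterfly stage adds or subtracts two operands,
which can at most double the magnitude (one extra bit per stage), while the
multiplications by cosine constants <= 1 don't grow it. A trivial check of
that arithmetic in Python:

    def worst_case_bits(input_bits, stages):
        # One extra bit per add/subtract stage in the worst case
        return input_bits + stages

    # 8-point transform: log2(8) = 3 butterfly stages
    print(worst_case_bits(9, 3))  # -> 12, matching the widths above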
In general I believe the implementation is going well - I'd estimate that
about 1/3 of the job is done. The 8-channel DDR SDRAM controller is already
coded and simulated; it had to be rather efficient, as I need an average of
420 MB/s out of the 480 MB/s peak memory bandwidth, with a rather complex
data access sequence (scan/coded/... order).
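For scale, those two numbers leave very little headroom; a quick check:

    average = 420e6  # required average bandwidth, bytes/s
    peak    = 480e6  # theoretical peak bandwidth, bytes/s
    print(f"utilization: {average / peak:.1%}")  # -> 87.5%
    # Only 12.5% of cycles remain for refresh, bank turnaround, etc.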