[Theora-dev] Theora camera/fast computers
theora at elphel.com
Tue Mar 22 22:06:54 PST 2005
> Right. Good to hear some improvement is possible.
Probably any algorithm that uses 5x5 pixels (20x20 for a 16x16 macroblock)
to restore YCbCr can be implemented - about 1/3 of the general resources of
the FPGA are still free.
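As an illustration of the kind of 5x5-window algorithm meant above (this is a hypothetical sketch, not the Elphel FPGA code), here is a gradient-corrected green interpolation at a red Bayer site in the style of Malvar-He-Cutler, which needs exactly a 5x5 neighborhood per output pixel:

```c
#include <stdint.h>

/* Hypothetical sketch: gradient-corrected green interpolation at a
 * red Bayer site, using a 5x5 neighbourhood (Malvar-He-Cutler style).
 * img is the single-channel Bayer mosaic, stride is the row pitch in
 * pixels, and (x,y) must be a red site at least 2 pixels from every
 * border. */
static int green_at_red(const uint8_t *img, int stride, int x, int y)
{
    const uint8_t *p = img + y * stride + x;
    int g4 = p[-1] + p[1] + p[-stride] + p[stride];     /* 4 green neighbours     */
    int r4 = p[-2] + p[2] + p[-2*stride] + p[2*stride]; /* 4 red, 2 pixels away   */
    int g  = (2 * g4 + 4 * p[0] - r4 + 4) >> 3;         /* rounded divide by 8    */
    if (g < 0)   g = 0;
    if (g > 255) g = 255;
    return g;
}
```

On a flat patch (all greens equal, all reds equal) the gradient term cancels and the estimate reduces to the plain green average, as it should.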
> The effect I was
> referring to had more to do with the noise not changing from frame
> to frame. Does that mean it's a deterministic effect of the colour
> interpolation, or just the way you detect motion?
For now there is no motion detection, and no selective block coding has been
tested - in those clips there are just 2 types of frames. Can you save the
frame with this noise and mark it somehow? Then I'll probably be able to
comment - there is still a possibility that not everything is correct. For
several frames I verified that the YUV dump on the decoder side matches
the internal reference frame bit-for-bit, but something might still be
wrong.
> Unrelated, do you have any plans to work toward one of the two specified
> colourspaces in the spec?
To be honest - I haven't looked there yet :-) I've been focusing on the
compressor part, which was so difficult to troubleshoot - so many possible
branches in the computations, with the 8-channel memory controller serving
concurrent requests (so with the same data there could be different timing
patterns on some rare combinations of events).
>> > So it just works out that way? Interesting. How fast can the sensor
>> > And can you bin the pixels to read out the full frame at lower
>> > resolution?
>> Yes, but then it is not optimal for the color images or it will need
>> different algorithms as the pixels will not be even.
> Right, it obviously doesn't work with the colour mosaic sensors. But for
> grayscale being able to trade off resolution for framerate without
> having to reframe the image is a nice feature.
They do work with CMOS imagers (not with CCD binning), where decimation is
adjusted to the colors (i.e. decimation by 4 means: use 2 pixels, skip 6;
use 2 rows, skip 6 - there are detailed descriptions and illustrations in
the sensor datasheet). And the fact that pixels are not evenly distributed
could be used to correct coefficients in the colorspace converter - but for
now the algorithm is just designed for the same distance between pixels.
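The "use 2, skip 6" pattern above can be sketched as a coordinate map (a hypothetical illustration of such a sensor skip mode, not taken from any particular datasheet): each 2x2 Bayer quad is kept intact, and the decimated coordinate maps back to the full-resolution sensor coordinate like this:

```c
/* Hypothetical illustration of Bayer-preserving decimation by 4:
 * read 2 pixels, skip 6 (and likewise for rows), so each 2x2 Bayer
 * quad stays intact.  Maps a decimated row/column index to the
 * full-resolution sensor row/column. */
static int skip4_coord(int i)
{
    return (i / 2) * 8 + (i % 2);   /* pairs 0,1 then 8,9 then 16,17 ... */
}
```

Note the uneven spacing this produces: consecutive selected columns are 1 sensor pixel apart within a pair but 7 apart between pairs, which is exactly why interpolation coefficients tuned for equidistant pixels would need correction.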
>> Here it is - not really impressive - water was dripping from the roof
>> melting snow and the droplets were out of focus (exposure was 1ms):
I'll think of some more interesting high-speed object to shoot.
BTW, do you think there is a way in the standard to insert some bits
between the frame header and the frame data? Or a trick to adjust the
frame header length? That could really help hardware implementations.
The data comes from the FPGA in 32-bit words, and all the headers are built
by software, so I can only use "static" maps of coded blocks, for which the
frame headers are calculated in advance by software and the FPGA just
receives the number of bits (0..31) to skip before the frame data - in that
case software can merge the frame headers (built by software) with the
frame data (received from the FPGA) without bit-shifting the whole frame.
I'm planning to implement frame header generation in the FPGA too, but it
would still be much easier if merging the software and hardware outputs
were simpler.
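A minimal sketch of that merge (hypothetical code, assuming the FPGA was told to leave hdr_bits % 32 zero bits at the start of its first output word, and treating words as MSB-first bit containers):

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch: merge a software-built frame header of
 * hdr_bits bits with FPGA frame data, given that the FPGA left
 * skip = hdr_bits % 32 unused (zero) bits at the start of its first
 * 32-bit word.  Only one word needs an OR; the rest is a straight
 * copy - no bit-shifting of the whole frame.  Returns the output
 * length in words. */
static size_t merge_frame(uint32_t *out, const uint32_t *hdr, unsigned hdr_bits,
                          const uint32_t *data, size_t data_words)
{
    size_t hdr_words = hdr_bits / 32;   /* complete header words       */
    unsigned skip    = hdr_bits % 32;   /* bits used in the last word  */

    memcpy(out, hdr, hdr_words * sizeof *out);
    if (skip) {
        /* last partial header word shares a word with the first data word */
        out[hdr_words] = hdr[hdr_words] | data[0];
        memcpy(out + hdr_words + 1, data + 1, (data_words - 1) * sizeof *out);
    } else {
        memcpy(out + hdr_words, data, data_words * sizeof *out);
    }
    return hdr_words + data_words;
}
```

For example, a 40-bit header occupies one full word plus 8 bits, so the FPGA skips 8 bits in its first word and the two streams meet in a single OR.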