[Theora-dev] Re: PC-based video server
Andrey Filippov
theora at elphel.com
Wed Jan 26 15:27:03 PST 2005
> This is the tricky part because, as you describe below, the theora format
> video needs to be transcoded into something like MJPEG format AND streamed
> at the same time. I really don't know what kind of CPU power you need to
> do that. If it was a single network camera, I suspect that a high-end PC
> today could do the job, but 10-20 network cameras, that almost sounds
> like a cluster, or a rack, of PCs to me.
Yes, playing MJPEG from our current model 313 cameras requires a 2.5-3GHz
processor to keep pace with the camera. But I am not talking about
recoding everything - I'm counting on there being, in most cases, just
one operator who will watch the cameras (remember - full-resolution
recording is still running), so the total number of pixels he/she sees at
any given moment will be no more than the computer screen. So I propose
to recompress and send as multicast just the DC components from each of
the 10-20 streams, as small 256x192 windows. That should be easy - you do
not even need to Huffman-decode the whole incoming Theora stream - just
Huffman-decode (no resource-hungry IDCT) the DC (first) coefficient data
from each stream and jump to the next frame. And you have to read all the
DC components from Theora anyway, even if you only need a subwindow.
Then, in addition to these 10-20 multicast MJPEG 256x192 streams, the
server can recode several more according to "digital PTZ" requests -
selected window and resolution. Maybe just 1-2-4-8 scaling factors, to
use an abbreviated DCT.
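To illustrate why the DC components alone are enough for the small preview windows, here is a minimal numpy sketch. For an orthonormal 8x8 2-D DCT, the DC coefficient equals 8x the block mean, so dividing each block's DC term by 8 directly yields a 1/8-scale thumbnail with no IDCT at all. In this sketch the DCT is computed from pixels only to demonstrate the identity; in the real pipeline the DC values would come straight from the Huffman-decoded Theora coefficients (after undoing Theora's DC prediction and dequantizing). Function names and the frame layout are illustrative, not from any actual camera code.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II basis matrix: row k is the k-th cosine basis vector.
    k = np.arange(n)
    M = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    M *= np.sqrt(2.0 / n)
    M[0, :] = np.sqrt(1.0 / n)   # DC row has the flat (constant) basis
    return M

def dc_thumbnail(frame):
    """Downscale a grayscale frame by 8x using only the DC term of each
    8x8 DCT block. frame dimensions are assumed to be multiples of 8."""
    h, w = frame.shape
    D = dct_matrix()
    thumb = np.empty((h // 8, w // 8))
    for by in range(h // 8):
        for bx in range(w // 8):
            block = frame[by*8:(by+1)*8, bx*8:(bx+1)*8]
            coeffs = D @ block @ D.T          # full 2-D DCT, for illustration
            thumb[by, bx] = coeffs[0, 0] / 8  # DC / 8 == block mean
    return thumb
```

So a 2048x1536 frame collapses to exactly the proposed 256x192 preview from DC terms alone, which is why only the entropy decode (and DC prediction) of each stream has to run on the server.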
>
> I think the only way you can proceed is to record some video with your
> network camera while the FPGA encodes it into theora format. It could be
> as simple as someone walking or running in front of the camera. That
> will give you the bits per second and bytes per minute for the quality
> that your customers require. Then you will also be able to determine how
> much processing power you need to convert that theora format video
> realtime into MJPEG format.
Unfortunately I just don't have this time. When testing the FPGA so far,
I have been manually making the Ogg encapsulation and adding the Theora
headers to the frame data (using several programs and cat, running on the
PC after wget-ting the data from the camera).
And I need to provide a response to the RFI in just 2 days :-(
> Considering the constraints of the legacy software your customers are
> using, what you are considering makes sense to me. I hope all of it can
> be done at a practical price. Eliminating the intermediate formats would
> simplify things a lot. Good luck.
> John
I believe that this requirement is quite common and will be useful in any
case, even if not for this particular customer:
1. Most use some kind of legacy (or new - but still of the same kind)
software, and that will likely stay so for some time;
2. Storing all the video data locally (in the cluster) and resending
farther only some portion will be useful even if that resending is later
also in the Theora format - it can combine the requirements of saving all
the data and of not saturating the networks in big systems. In any case,
human operators have a limited field of view and can monitor just a
fraction of the whole data;
3. It is really difficult to implement everything in a short time, so not
caring about the user interface and the non-camera issues of a complete
security system can make the job doable.