[xiph-commits] r9101 - trunk/dryice
arc at motherfish-iii.xiph.org
Thu Mar 31 00:06:11 PST 2005
Author: arc
Date: 2005-03-31 00:06:10 -0800 (Thu, 31 Mar 2005)
New Revision: 9101
Added:
trunk/dryice/HACKING
Log:
This will make it easier to remember what I'm doing if I let this
project slide for a couple months again. Hopefully, that won't happen.
Added: trunk/dryice/HACKING
===================================================================
--- trunk/dryice/HACKING 2005-03-31 07:39:00 UTC (rev 9100)
+++ trunk/dryice/HACKING 2005-03-31 08:06:10 UTC (rev 9101)
@@ -0,0 +1,76 @@
+Here's the basic strategy for how DryIce is built:
+
+src/core.c contains the core functionality of DryIce. Think: backend.
+
+Initially, core.c also contains main(), but this is just a placeholder.
+Frontends will eventually be written for ncurses, GTK, Python, etc.,
+which will allow the user to set up and control DryIce. These frontends
+will each contain their own main() and link against core.c for the base
+functions.
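+
+Roughly, a frontend might look like the sketch below. These function
+names are placeholders for illustration, not a real core.c API yet:
+
+    /* frontend.c -- links against core.c; names are illustrative */
+    #include <stdio.h>
+
+    /* assumed to be declared by a future core.h */
+    int  dryice_init(const char *config_path);
+    int  dryice_run_once(void);
+    void dryice_shutdown(void);
+
+    int main(int argc, char **argv)
+    {
+        if (dryice_init(argc > 1 ? argv[1] : "dryice.conf") != 0) {
+            fprintf(stderr, "dryice: init failed\n");
+            return 1;
+        }
+        while (dryice_run_once() == 0)
+            ;                  /* stream until error or shutdown */
+        dryice_shutdown();
+        return 0;
+    }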
+
+Since DryIce is intended to eventually support multiple concurrent
+streams, it does not make sense to make core.c a dynamic library; and
+since the frontends are quite diverse in how they operate, making them
+separate programs would make integration more difficult.
+
+Currently I am working with an OV519-based camera made by D-Link. It is
+supported by the v4l_jpeg input module for video frames, and eventually
+for audio as well, since this camera has a built-in microphone.
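+
+As a sketch of what the v4l_jpeg capture amounts to, assuming a
+read()-capable Video4Linux device (the device path and buffer size
+here are illustrative, not what the module actually uses):
+
+    #include <fcntl.h>
+    #include <stdio.h>
+    #include <unistd.h>
+
+    int main(void)
+    {
+        unsigned char frame[65536];  /* room for one compressed frame */
+        ssize_t n;
+        int fd;
+
+        fd = open("/dev/video0", O_RDONLY);
+        if (fd < 0) {
+            perror("open /dev/video0");
+            return 1;
+        }
+        n = read(fd, frame, sizeof frame);  /* blocks until a frame */
+        if (n > 0)
+            printf("captured %ld bytes of JPEG data\n", (long)n);
+        close(fd);
+        return 0;
+    }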
+
+
+THOUGHT PROCESS (my in-development brainstorming):
+
+core.c will proceed as follows (a libshout sketch follows this outline):
+
+1) Initialize
+ a) load appropriate input module(s)
+ b) connect to Icecast with libshout
+ c) init camera and precache first video frame
+ d) init microphone and precache some audio data
+
+2) Run
+ a) decode YUV data to theora buffer
+ b) decode PCM data to vorbis buffer
+ c) tell camera to fetch next frame
+ d) encode theora frame
+ e) encode vorbis data
+ f) grab available pages, mux them properly
+ g) send (raw) to libshout
+ -- repeat
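+
+The libshout side of this, as a hedged sketch (host, port, password,
+and mount are placeholders; the fetch/encode steps are stand-ins for
+the real input-module and codec plumbing):
+
+    #include <shout/shout.h>
+
+    shout_t *connect_icecast(void)
+    {
+        shout_t *s;
+        shout_init();
+        s = shout_new();
+        shout_set_host(s, "icecast.example.org");  /* placeholder */
+        shout_set_port(s, 8000);
+        shout_set_password(s, "hackme");           /* placeholder */
+        shout_set_mount(s, "/dryice.ogg");
+        shout_set_format(s, SHOUT_FORMAT_OGG);     /* SHOUT_FORMAT_VORBIS
+                                                      on older libshout */
+        if (shout_open(s) != SHOUTERR_SUCCESS) {
+            shout_free(s);
+            return NULL;
+        }
+        return s;
+    }
+
+    /* then, for each muxed Ogg page in step g):
+     *   shout_send(s, page_data, page_len);   send the raw page
+     *   shout_sync(s);                        pace to stream time
+     */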
+
+Now the sequence of these steps is important for flexibility. We want,
+for example, to be able to display the raw YUV data locally in the GUI
+client without having to decode the output stream, and a Python frontend
+may want to do some processing to see how similar two frames are and
+switch to a prerecorded scene if there's no motion for several seconds,
+or to check how bright the frame is so it can automatically turn on a
+light if it's too dark (via X10, etc.).
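+
+The sort of per-frame check a frontend could run on the raw Y (luma)
+plane might look like this; the function and its use are assumptions
+of mine, not part of core.c:
+
+    /* mean absolute luma difference between two frames; a small
+       result suggests no motion, and a low mean luma (computed the
+       same way against a zeroed frame) suggests a dark scene */
+    double luma_diff(const unsigned char *a, const unsigned char *b,
+                     int npixels)
+    {
+        long sum = 0;
+        int i;
+        for (i = 0; i < npixels; i++)
+            sum += a[i] > b[i] ? a[i] - b[i] : b[i] - a[i];
+        return (double)sum / npixels;
+    }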
+
+This means that the "run" sequence needs to be broken down into separate
+pieces which must be called in order, rather than being a continuous loop.
+
+For example (working draft; C signatures are sketched below):
+
+Fetch():
+ a) decode YUV data to theora buffer
+ b) decode PCM data to vorbis buffer
+ c) tell camera to fetch next frame
+
+Encode():
+ d) encode theora frame
+ e) encode vorbis data
+ f) grab available pages, mux them properly
+
+Send():
+ g) send (raw) to libshout
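+
+Sketched as C signatures (all of these names and types are
+provisional):
+
+    typedef struct dryice_input   dryice_input;    /* camera + mic   */
+    typedef struct dryice_encoder dryice_encoder;  /* theora/vorbis  */
+    typedef struct dryice_output  dryice_output;   /* one connection */
+
+    int dryice_fetch(dryice_input *in);                       /* a-c */
+    int dryice_encode(dryice_encoder *e, dryice_input *in);   /* d-f */
+    int dryice_send(dryice_output *o, dryice_encoder *e);     /* g   */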
+
+We also may want to separate the audio and video processes for extra
+flexibility; for example, one channel could be audio-only but draw off
+the same encoded source. This moves step f) into Send().
+
+Additionally, we want to make sure that the decoded data from Fetch(),
+the encode channels in Encode(), and the shout connections in Send() are
+kept separate. This allows us to, for example, reuse the same camera
+input data in two separate video Encode() calls for two different
+bitrates, then send these to two separate libshout connections with
+separate Send() calls.
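+
+With the provisional names from the sketch above, the two-bitrate case
+would read roughly:
+
+    dryice_fetch(in);             /* one camera grab ...           */
+    dryice_encode(enc_hi, in);    /* ... feeds two encode channels */
+    dryice_encode(enc_lo, in);
+    dryice_send(out_hi, enc_hi);  /* each to its own connection    */
+    dryice_send(out_lo, enc_lo);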
+