[theora-dev] Theora integration question
Engineering
ee at athyriogames.com
Thu Oct 11 07:39:22 PDT 2012
Thank you, Ralph (and Benjamin, who also replied).
This is going to be a little long, but the crux of my question is finding
ways to make theora malloc as little as possible.
I am working on an embedded system which needs to run constantly in public
places with no supervision - think arcade video game. Because the flash
media is slow, and in the interest of speed, I load all the .ogv files
needed at bootup into RAM and preprocess them into header information and
theora frames, so they are effectively de-containerized.
I am working with all known data - movies that have been created internally
here.
At run time, I recreate the ogg_packet from my RAM buffer and update the
movies like so (edited for brevity):

    terr = theora_decode_packetin(&zog->td,&zog->op);
    terr = theora_decode_YUVout(&zog->td,&zog->yuv); // decode frame into YUV buffer

    // OK, here is where we deviate from the standard example.
    // A normal program would convert the YUV data to RGB data for display.
    // That takes a lot of time - we just send the YUV data straight to the
    // graphics card as 8-bit textures, and a special shader on the graphics
    // card does the YUV-to-RGB conversion on a per-pixel basis.
    // Send yuv.y, yuv.u and yuv.v to the graphics card; it handles it from there.
This has been working well and has been rigorously tested for over a year.
The second part is that I load and initialize all movies at startup - we
are now up to around 180 movies, many of them 1920x1080.
Pseudocode for initialization:

    theora_comment_init(&zog->tc);
    theora_info_init(&zog->ti);
    for(i=0;i<3;i++)
    {
        if (theora_decode_header(&zog->ti,&zog->tc,&zog->head.op_for_ti[i]))
        {
            zerr("zog_load: theora_decode_header err %s\n",filename);
            fclose(f);
            return ERR_FILE;
        }
    }
    err = theora_decode_init(&zog->td,&zog->ti);
    if (err)
    {
        zerr("zog_load: theora_decode_init err %d %s\n",err,filename);
        zerr("%08X %08X\n",(unsigned int)&zog->td,(unsigned int)&zog->ti);
        fclose(f);
        return ERR_FILE;
    }
theora_decode_init() is where I seem to be running out of memory. I confess
that I do not understand much of theora's internals. Given that I have
180 (and growing) movies on tap, is there any memory that can be shared
among all open movies, instead of being individually malloced for each one?
For example, I'm not quite sure what 'dct_tokens' is, but mallocing that in
oc_dec_init() seems to be putting me over the edge. I notice that the size
of those mallocs looks similar to what I'd expect the RGB pixel data to take
for each movie.
Changing

    _dec->dct_tokens=(unsigned char *)_ogg_malloc((64+64+1)*
        _dec->state.nfrags*sizeof(_dec->dct_tokens[0]));

to

    unsigned char sambuf[MB(8)]; // global
    _dec->dct_tokens=(unsigned char *)&sambuf[0];

'seems' to work, but not knowing the internals of theora makes me nervous
that I have broken something.
To sum up (and thanks for reading this far):

1. Given an unusually large number of open movies and a fixed dataset, are
there any memory savings to be had by sharing buffers instead of
individually malloc-ing?

2. Since I am working directly with the YUV data, are there any memory
savings to be had around the YUV-to-RGB conversion?
Thanks for any help, and I will be more than happy to provide any
clarification!
Sam
-----Original Message-----
From: Ralph Giles [mailto:giles at thaumas.net]
Sent: Thursday, October 11, 2012 2:56 AM
To: Engineering
Cc: theora-dev at xiph.org
Subject: Re: [theora-dev] Theora integration question
On 12-10-10 1:16 PM, Engineering wrote:
> Hello, I am a programmer working on a product which integrates Theora. I
> have a question regarding the memory use on some of the internals of
> Theora. Is this the right forum for this question, and if not, does
> anyone know where an appropriate place to ask is?
Yes, this is an appropriate forum to ask theora programming questions.
Responders are all volunteers, but people generally try to be helpful.
-r