[Tremor] Implementing Tremor on low-end ARM

Ethan Bordeaux ethan.bordeaux at gmail.com
Wed Dec 17 09:26:59 PST 2008


Nicholas - I know you said you're not planning on supporting very low
encoding rates, but I just wanted to point out to everyone that when
encoding with aoTuVb (v5.5) at q -2 you end up with the largest memory
needs I've seen thus far: 46474 bytes.  I've only tried one file so
this number might grow a bit depending on the input.

One interesting thing I've noticed is that memory usage is mostly
(though not perfectly) constant at a particular encoding rate,
regardless of input data.  For instance, q 0 always seems to require
34159 bytes and q 5 takes 35651 bytes.  This breaks down a bit at the
higher rates, where usage jumps between a couple of different values.
Also, if you decode a very short and very simple file the memory needs
can be lower, but once the input is of "song length and complexity"
these numbers really stabilize.

One question I have is whether I need to test on encoders other than
oggenc2 (with and without aoTuVb).  Does anyone know if earlier
encoders malloc'ed more memory?  Also, does anyone know if there are
plans on the encoder side to change the algorithm such that it sends
more codec init data?  I won't have any control over what files are
sent through the final implementation, so I'd really hate to lose
compatibility with whatever is planned for future oggencs.  If the
algorithm is updated for wavelets that's one thing, but if the
underlying methods don't change and more memory is required, I'd like
to do what I can to cope with that case.

Ethan

2008/11/25 Nicholas Vinen <hb at x256.org>

> Peter Harris wrote:
>> Nicholas Vinen wrote:
>>> I too would be interested to know under what circumstances the various
>>> tables might grow. I tested up to 96kHz 24bit stereo with these buffer
>>> sizes and it seemed to work fine. Yes, I will only support 1 or 2
>>> channels since my player only has stereo output anyway.
>>
>> Lower quality files tend to use longer blocks, and therefore use more
>> memory in the decoder. Try encoding a file at -q 0 (or lower). Even
>> then, the tables won't be as large as the spec allows. Unfortunately, I
>> don't think there are any pathological test files available.
>>
>> Peter Harris
>
>
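
Peter's point about block sizes is easy to check per file, by the way:
the two block sizes are stored as a pair of 4-bit exponents in byte 28
of the "\x01vorbis" identification header packet.  Here's a
quick-and-dirty checker; it just scans the start of the file for that
packet rather than doing real Ogg page parsing, which is fine for
eyeballing a few files:

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
  unsigned char buf[512];
  size_t n, i;
  FILE *f;

  if (argc != 2 || !(f = fopen(argv[1], "rb"))) {
    fprintf(stderr, "usage: %s file.ogg\n", argv[0]);
    return 1;
  }
  n = fread(buf, 1, sizeof(buf), f);
  fclose(f);

  for (i = 0; i + 29 < n; i++) {
    if (buf[i] == 0x01 && !memcmp(buf + i + 1, "vorbis", 6)) {
      unsigned char b = buf[i + 28];  /* two 4-bit exponents, LSb first */
      printf("blocksize_0 = %d\n", 1 << (b & 0x0f));  /* short block */
      printf("blocksize_1 = %d\n", 1 << (b >> 4));    /* long block  */
      return 0;
    }
  }
  fprintf(stderr, "no Vorbis identification header found\n");
  return 1;
}
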
> OK, I tried encoding a file at every integer quality setting between
> -1 and 10.
>
> Interestingly, I found memory requirements did increase at lower
> quality levels, but certain buffers grew at higher quality settings
> too.
>
> I think what I am going to do is change my patch so that pretty much
> everything is allocated out of a single buffer, the only exceptions
> being the few small chunks which actually get freed before the end of
> playback. This should (a) simplify it and speed it up a bit, and (b)
> mean that you only have to care about the maximum total memory usage.
> The worst case I tested is quality -1, which required 46400 bytes.
> That is right at the upper limit of what I can spare on a 64K system,
> given that I calculated I need at least 12KB and possibly 16KB of
> audio buffer! Those two alone exhaust practically all RAM, and I need
> space for some file system buffers etc. I may end up supporting only
> -q0 through -q10 which, in this case at least, reduces the memory
> requirement to 34412 bytes - a massive difference! -q10 requires
> 39835 bytes, and going above this doesn't seem to increase it.
>
>
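
That single-buffer plan sounds right to me.  For anyone else following
along, what I understand Nicholas to mean is essentially a bump
allocator over one static pool.  A minimal sketch (my names and sizes,
not his actual patch):

#include <stddef.h>

#define POOL_SIZE 47104   /* size this from the worst case you measure */
#define ALIGNMENT 8

static unsigned char pool[POOL_SIZE];
static size_t pool_used = 0;

/* Allocations only move a cursor forward; individual frees are no-ops,
   and the whole pool is reclaimed at once between streams. */
void *pool_alloc(size_t n)
{
  void *p;
  n = (n + ALIGNMENT - 1) & ~(size_t)(ALIGNMENT - 1);
  if (pool_used + n > POOL_SIZE)
    return NULL;          /* out of memory: reject the stream cleanly */
  p = pool + pool_used;
  pool_used += n;
  return p;
}

void pool_reset(void)     /* call between streams */
{
  pool_used = 0;
}

The nice property is the one Nicholas points out: you only have to
budget for a single number, the worst-case total, and the failure path
is a clean NULL rather than heap fragmentation.  The few chunks that
genuinely get freed before the end of playback would stay on the
regular heap.
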
> I also discovered that my comment-skipping patch breaks at certain
> quality levels (e.g. -q4). After I fix up the static allocation I'll
> have a go at improving this.
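
For what it's worth, the packet-level structure you have to get past
is simple: after the "\x03vorbis" marker comes a length-prefixed
vendor string, a 32-bit comment count, then one length-prefixed string
per comment (all lengths little-endian), with a framing bit at the
end.  A sketch of just the skip logic, assuming a couple of
hypothetical byte-stream callbacks over the packet (this ignores
comment packets that span Ogg pages, which is the messy part in
practice):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical callbacks over the comment packet's bytes;
   both return 0 on success. */
extern int read_bytes(void *dst, size_t n);
extern int skip_bytes(size_t n);

static int read_u32le(uint32_t *out)
{
  unsigned char b[4];
  if (read_bytes(b, 4)) return -1;
  *out = (uint32_t)b[0] | ((uint32_t)b[1] << 8) |
         ((uint32_t)b[2] << 16) | ((uint32_t)b[3] << 24);
  return 0;
}

int skip_comment_header(void)
{
  unsigned char hdr[7];
  uint32_t i, count, len;

  if (read_bytes(hdr, 7)) return -1;
  if (hdr[0] != 0x03 || memcmp(hdr + 1, "vorbis", 6)) return -1;
  if (read_u32le(&len) || skip_bytes(len)) return -1;    /* vendor     */
  if (read_u32le(&count)) return -1;                     /* # comments */
  for (i = 0; i < count; i++)
    if (read_u32le(&len) || skip_bytes(len)) return -1;  /* comment i  */
  return 0;   /* framing byte follows */
}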
>
>
> Nicholas
>
>