I would also add that the "level" is only relevant to the reference encoder, as it is an alias to quickly set a number of parameters. Other encoders might make different choices. <div>Also, I haven't looked in detail at how the encoder works, but I believe it uses heuristics to decide which compression method is best for a set of samples, and I'm not sure it would give the same results twice when encoding the same file, although those results would be statistically similar...<br clear="all">
--<br>Pierre-Yves Thoulon<br><br>
<br><br><div class="gmail_quote">On Wed, Mar 23, 2011 at 22:33, Brian Willoughby - <a href="mailto:brianw@sounds.wa.com">brianw@sounds.wa.com</a> <span dir="ltr"><+flac-dev+pyt+81a7216403.brianw#<a href="http://sounds.wa.com">sounds.wa.com</a>@<a href="http://spamgourmet.com">spamgourmet.com</a>></span> wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">Hjalmar,<br>
<br>
I recall that many hardware decoders have a limit on the level that<br>
they can handle. At the very least, hardware encoders may only<br>
support a subset of the compression levels. As for the decoders, I<br>
cannot remember specifically whether they are all capable of decoding<br>
all compression levels (it certainly would be more convenient).<br>
<br>
My memory from discussions on this mailing list is that the file<br>
format does not have anywhere to store the compression level. The<br>
reason is that compression levels are just human conveniences, and<br>
the actual compression is much more complex than a single-digit number.<br>
<br>
Look at the command-line options for flac and you'll see that each<br>
compression level is synonymous with a group of up to five separate<br>
parameters: maximum LPC order, blocksize, (adaptive) mid-side<br>
coding, Rice partition order, and an exhaustive model search. I<br>
would assume that the factor affecting your decoder mips is the<br>
model, more than any other contributing factor. But you<br>
might have some luck varying those individual parameters instead of<br>
the basic compression level to see if any of them have a direct<br>
effect on your decoding complexity. If one of them does have a<br>
direct effect, then it might be easier to correlate something in the<br>
file headers with your clock speed.<br>
<br>
Perhaps you could just monitor your CPU usage and adapt the clock<br>
speed. Starting at the highest clock speed (in order to guarantee<br>
real-time decoding) you could measure percent CPU usage and drop the<br>
clock speed any time you're using less than half the available time.<br>
<br>
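The adapt-as-you-go approach could be sketched as a small decision function. This is a minimal sketch in C, with hypothetical clock steps and thresholds (not tied to any real platform's clock-scaling API); the caller would measure the busy fraction over a window and then program whichever clock the function picks:

```c
#include <stddef.h>

/* Hypothetical discrete clock steps in MHz, highest first.  Real
 * hardware often only offers coarse divider ratios like these. */
static const unsigned clock_steps_mhz[] = { 240, 120, 80, 40 };
#define N_STEPS (sizeof clock_steps_mhz / sizeof clock_steps_mhz[0])

/* Given the current step index and the fraction of real time the
 * decoder was busy over the last measurement window, pick the next
 * step: drop one step only when under half load (so the slower clock
 * still keeps us below 100% busy), and climb back up as soon as the
 * headroom gets thin. */
static size_t next_clock_step(size_t step, double busy_fraction)
{
    if (busy_fraction < 0.5 && step + 1 < N_STEPS)
        return step + 1;        /* slower clock still leaves headroom */
    if (busy_fraction > 0.9 && step > 0)
        return step - 1;        /* nearly saturated: speed back up */
    return step;
}
```

Starting at step 0 (the fastest clock) guarantees real-time decoding from the first frame, as suggested above; the loop then ratchets down only when it has proof of spare cycles.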
Another possibility is that your clock speed may depend almost<br>
entirely upon the sample rate, not the compression level. It's<br>
difficult for me to tell for certain from your cited examples, but<br>
while it seems clear that the sample rate has an effect on the clock<br>
speed needed, I'm not nearly so sure that different compression<br>
levels for the same sample rate would vary so widely. Perhaps you<br>
can just look in the FLAC headers for the sample rate, and then set<br>
your clock speed based on that, unless you're saying that a 44.1 kHz<br>
level 8 file takes more processing than a 96 kHz level 1 file, or<br>
that a 96 kHz level 8 file takes more processing than a 192 kHz level<br>
1 file. You did imply that 192 kHz takes almost twice the CPU at<br>
level 8 versus level 1, but I would point out that sometimes you can<br>
only adjust your clock speed in large increments, so it may not<br>
actually be possible to drop your clock speed by less than half,<br>
especially not at the highest clock speed (with the way clock<br>
dividers work).<br>
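Reading the sample rate out of the headers is straightforward: STREAMINFO is the first metadata block of every FLAC file, and its 20-bit big-endian sample-rate field spans bytes 10..12 of the block's 34-byte payload. A minimal C sketch, assuming the caller has already skipped the 4-byte "fLaC" marker and the 4-byte metadata block header:

```c
#include <stdint.h>

/* Extract the sample rate from a FLAC STREAMINFO payload.  The field
 * layout before it is: min/max blocksize (16 bits each) and min/max
 * frame size (24 bits each) = 80 bits, so the 20-bit sample rate
 * occupies all of bytes 10 and 11 plus the high nibble of byte 12. */
static uint32_t streaminfo_sample_rate(const uint8_t *streaminfo)
{
    return ((uint32_t)streaminfo[10] << 12) |
           ((uint32_t)streaminfo[11] << 4)  |
           ((uint32_t)streaminfo[12] >> 4);
}
```

With the rate in hand, a simple lookup table from sample rate to minimum clock speed may be all the adaptation you need.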
<br>
Brian Willoughby<br>
Sound Consulting<br>
<br>
P.S. What is your embedded platform environment?<br>
<div class="im"><br>
<br>
On Mar 23, 2011, at 10:10, hjalmar nilsson wrote:<br>
> I'm developing a flac decoder in an embedded environment. I have it<br>
> fully up and running but I am trying to optimise the performance<br>
> vs. the power it takes to decode the stream.<br>
><br>
> Using the reference decoder code with a few optimisations for the<br>
> hw I'm on I experience quite a difference in mips depending on the<br>
> level.<br>
> For instance, on a 192kHz/24-bit file, level 8 takes almost twice<br>
> as many mips to decode as level 1.<br>
><br>
> Unfortunately I can't tell our customer that they need to decode at<br>
> a certain level, BUT I want to minimize the frequency the processor<br>
> is running on.<br>
> Since I have no problem reading a 441/16 file from a memory stick<br>
> and decoding it at 40 MHz, yet require almost 240 MHz for a 192/24<br>
> file, I would like to change the clock frequency dynamically,<br>
> depending on the file I'm about to decode.<br>
><br>
> Now, since you managed to plough through all this, I would really<br>
> appreciate if you could answer one simple question:<br>
> Is there any way I can read the compression level used during<br>
> encoding out of the file?<br>
<br>
<br>
</div>_______________________________________________<br>
Flac-dev mailing list<br>
<a href="mailto:Flac-dev@xiph.org">Flac-dev@xiph.org</a><br>
<a href="http://lists.xiph.org/mailman/listinfo/flac-dev" target="_blank">http://lists.xiph.org/mailman/listinfo/flac-dev</a><br>
<br>
</blockquote></div><br></div>