[Speex-dev] Fwd: Re: Fixed Point on wideband-mode: Single Frame loss on 2000 Hz sine causes "freak off"

Frank Lorenz Frank_wtal at web.de
Fri Feb 5 02:36:57 PST 2010


Hi Jean-Marc,

I did what you proposed and changed the Levinson-Durbin algorithm to:


{
   int i, j;
   spx_word16_t r;
   spx_word16_t error = ac[0];

   for (i = 0; i < p; i++)
      lpc[i] = 0;

   if (ac[0] == 0)
      return 0;

   for (i = 0; i < p; i++) {

      /* Sum up this iteration's reflection coefficient */
      spx_word32_t rr = NEG32(SHL32(EXTEND32(ac[i + 1]), 13));
      for (j = 0; j < i; j++)
         rr = SUB32(rr, MULT16_16(lpc[j], ac[i - j]));
#ifdef FIXED_POINT
      /* Stop the recursion once the error is small enough (<= 30);
         the remaining LPC coefficients stay zero. */
      if (error <= 30)
         return error;
      /* was: r = DIV32_16(rr + PSHR32(error, 1), ADD16(error, 10)); */
      r = DIV32_16(rr + PSHR32(error, 1), error);
#else
      r = rr / (error + .003 * ac[0]);
#endif


This improves the situation: there is no more "freak out" in most cases. I tested with 2000 Hz, 2200 Hz and 3000 Hz input at various complexity and quality settings. Nevertheless, at 2200 Hz with quality 7 and complexity 3, there is still this horrible overdrive.
It is also interesting that the limit (30 in my case) for stopping the iteration matters a lot. While even a limit of 0 is fine for 2000 and 2200 Hz signals, it does not work for 3000 Hz. If I raise the limit to 100, 2200 Hz signals cause the "freak out" again.

Another point is that besides the "freak out", there is still distortion in the re-synthesized signal for harmonic inputs: a strong amplitude modulation, in some cases with a clear period equal to half the frame length (160 samples). You can reduce it by setting the complexity to higher values (5 and above work quite well).

A third point is a time-variable and sometimes very slow "fade in" of the re-synthesized signal after a frame loss. The steepness of the "fade in" varies and depends on some parameter I cannot pin down (changing complexity/quality or even the limit inside the Levinson-Durbin algorithm changes it quite chaotically).

So we are on the right track, but have not reached the goal yet ;-)

Do you think it would be a good idea to move the Levinson-Durbin algorithm to higher precision? Or do you have some other idea how to proceed?
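To make the precision question concrete: DIV32_16 in the Speex fixed-point macros narrows a 32-bit/16-bit quotient to 16 bits, so once the error term gets small the quotient no longer fits in a signed 16-bit word and the reflection coefficient wraps. The numbers below are illustrative, not from a real trace:

```c
#include <stdint.h>

/* Does a 32/16 division produce a quotient that still fits in a
 * signed 16-bit word (as DIV32_16's narrowing cast assumes)? */
int quotient_fits_int16(int32_t num, int16_t den)
{
    int32_t q = num / den;
    return q >= INT16_MIN && q <= INT16_MAX;
}
```

With a numerator on the order of 2^20, a denominator of a few thousand is still safe, but an error of 30 already pushes the quotient past the 16-bit range, which is why the cutoff value is so critical.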

best regards,
Frank



   Jean-Marc Valin <Jean-Marc.Valin at USherbrooke.ca> wrote on 4 February 2010 at 15:47:

> Now that's an interesting analysis! Thanks a lot for spending the time to dig
> into this. I now think the whole idea of adding a small value to the error was
> misguided from the beginning. Instead, what the code should probably do is just
> stop once the error has reached a small enough value (and set the remaining LPC
> coefs to zero).
>
>    Jean-Marc
