[flac-dev] 16-bit FLAC file data to 32-bit float buffer for CPU processing

Каримов Родион rodionkarimov at yandex.ru
Sat Mar 8 04:49:34 PST 2014


Hello.

I am writing a program that decodes, processes and plays FLAC files, and I have the following question: how do I convert the data of a 16-bit FLAC file into a 32-bit float buffer for CPU processing? I have already implemented sound playback and tested it with a sine wave - it works without problems; I even tried writing sine-wave values into the decoding buffer instead of the decoded FLAC data, and that also works without problems. Now I am trying different approaches to convert the 16-bit FLAC sound data into the 32-bit float CPU buffer, but they all produce noise or sound with artefacts. I use the Stream Decoder from the FLAC C API, and my write_callback is as follows:

FLAC__StreamDecoderWriteStatus write_callback ( const FLAC__StreamDecoder *decoder, const FLAC__Frame *frame, const FLAC__int32 * const buffer[], void * client_data ) {
  size_t                                i;

  BYTE *                                ChannelDataBuffer;
  WORD *                                WORDChannelDataBuffer;
  DWORD *                               DWORDChannelDataBuffer;
  int *                                 IntChannelDataBuffer;



  if ( bps == 16 ) {
    ChannelDataBuffer                   = ( BYTE * ) buffer [ 0 ];
    WORDChannelDataBuffer               = ( WORD * ) buffer [ 0 ];
    DWORDChannelDataBuffer              = ( DWORD * ) buffer [ 0 ];
    IntChannelDataBuffer                = ( int * ) buffer [ 0 ];

    for ( i = 0; i < frame -> header.blocksize; i++ ) {
      //FloatFLACDecodingData.LOut [ FloatFLACDecodingData.WriteAddress + i ]                         = ( 1.0f + ( float ) sin ( ( ( double ) ( FloatFLACDecodingData.WriteAddress + i ) / ( double ) TABLE_SIZE ) * M_PI * 2.0 * 1.0 ) ) * 0.4f;
      //FloatFLACDecodingData.LOut [ FloatFLACDecodingData.WriteAddress + i ]                         = float ( ( FLAC__int16 ) buffer [ 0 ] [ i ] ) * 65535.0f;
      //FloatFLACDecodingData.LOut [ FloatFLACDecodingData.WriteAddress + i ]                         = ( float ( ( FLAC__int16 ) buffer [ 0 ] [ i ] ) / 65535.0f - 0.5f ) * 2.0f;
      //FloatFLACDecodingData.LOut [ FloatFLACDecodingData.WriteAddress + i ]                         = ( float ( _byteswap_ushort ( ( FLAC__int16 ) buffer [ 0 ] [ i ] ) ) / 65535.0f - 0.5f ) * 2.0f;

      //FloatFLACDecodingData.LOut [ FloatFLACDecodingData.WriteAddress + i ]                         = ( float ( WORDChannelDataBuffer [ i ] ) / 65535.0f - 0.5f ) * 2.0f;
      //FloatFLACDecodingData.LOut [ FloatFLACDecodingData.WriteAddress + i ]                         = ( float ( _byteswap_ushort ( WORDChannelDataBuffer [ i ] ) ) / 65535.0f - 0.5f ) * 2.0f;
      FloatFLACDecodingData.LOut [ FloatFLACDecodingData.WriteAddress + i ]                         = ( float ( DWORDChannelDataBuffer [ i ] ) / 65535.0f - 0.5f ) * 2.0f;
      //FloatFLACDecodingData.LOut [ FloatFLACDecodingData.WriteAddress + i ]                         = ( float ( _byteswap_ulong ( DWORDChannelDataBuffer [ i ] ) ) / 65535.0f - 0.5f ) * 2.0f;

      //FloatFLACDecodingData.LOut [ FloatFLACDecodingData.WriteAddress + i ]                         = ( float ( IntChannelDataBuffer [ i ] ) / 65535.0f - 0.5f ) * 2.0f;

    } //-for

    FloatFLACDecodingData.WriteAddress  = FloatFLACDecodingData.WriteAddress + frame -> header.blocksize;

  }

  return FLAC__STREAM_DECODER_WRITE_STATUS_CONTINUE;

}



The closest results come from treating the buffer as DWORDChannelDataBuffer ( IntChannelDataBuffer and ( FLAC__int16 ) buffer [ 0 ] [ i ] give the same results ), but even this approach produces wrong sound - it is somehow too sharp and bright, with some "hard edges"; the other approaches give very strong noise, and some of them give a total mess. So, how do I convert 16-bit FLAC data into 32-bit float data for the CPU buffer so that it works in C++?
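( For reference: the FLAC stream decoder delivers buffer as one FLAC__int32 array per channel, and every element already holds a single, sign-extended sample, so for a 16-bit file buffer [ 0 ] [ i ] is a value in -32768 .. 32767. Below is a minimal sketch of the conventional conversion, reusing the bps and FloatFLACDecodingData names from the callback above; the 32768.0f scale factor is the usual normalisation choice, not something taken from this post.

FLAC__StreamDecoderWriteStatus write_callback ( const FLAC__StreamDecoder *decoder, const FLAC__Frame *frame, const FLAC__int32 * const buffer[], void * client_data ) {
  size_t                                i;

  if ( bps == 16 ) {
    for ( i = 0; i < frame -> header.blocksize; i++ ) {
      // buffer [ 0 ] [ i ] is one complete left-channel sample, already
      // sign-extended into a FLAC__int32 ( -32768 .. 32767 for 16-bit data ),
      // so dividing by 32768.0f maps it to roughly -1.0 .. +1.0
      FloatFLACDecodingData.LOut [ FloatFLACDecodingData.WriteAddress + i ] = float ( buffer [ 0 ] [ i ] ) / 32768.0f;
      // the right channel of a stereo file would be read from buffer [ 1 ] [ i ]
      // in the same way ( a second output array is not shown here )
    } //-for

    FloatFLACDecodingData.WriteAddress  = FloatFLACDecodingData.WriteAddress + frame -> header.blocksize;
  }

  return FLAC__STREAM_DECODER_WRITE_STATUS_CONTINUE;
}

With this mapping a full-scale 16-bit sample lands near +/-1.0, and no byte-swapping or WORD / DWORD reinterpretation of the buffer is needed, because each decoded sample already occupies its own 32-bit element. )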

