
I'm trying to get raw audio data from a file (I'm used to seeing floating point values between -1 and 1).

I'm trying to pull this data out of the buffers in real time so that I can provide some type of metering for the app.

I'm basically reading the whole file into memory using AudioFileReadPackets. I've created a RemoteIO audio unit for playback, and inside the playback callback I supply the mData to the AudioBuffer so it can be sent to the hardware.

The big problem I'm having is that the data being sent to the buffers from my array of data (from AudioFileReadPackets) is UInt32... I'm really confused. It looks like it's 32 bits, and I've set the packets/frames to be 4 bytes each. How the heck do I get my raw audio data (from -1 to 1) out of this?

This is my format description:

// Describe format
audioFormat.mSampleRate         = 44100.00;
audioFormat.mFormatID           = kAudioFormatLinearPCM;
audioFormat.mFormatFlags        = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
audioFormat.mFramesPerPacket    = 1;
audioFormat.mChannelsPerFrame   = 2;
audioFormat.mBitsPerChannel     = 16;
audioFormat.mBytesPerPacket     = 4;
audioFormat.mBytesPerFrame      = 4;

I am reading a wave file currently.
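
For context, a stripped-down sketch of that read step (the file URL, the missing error checking, and the variable names here are placeholders, not my exact code):

#include <AudioToolbox/AudioToolbox.h>

// Open the wave file and read every packet into one big buffer
AudioFileID audioFile;
CFURLRef url = CFURLCreateWithFileSystemPath(kCFAllocatorDefault,
                                             CFSTR("/path/to/file.wav"),
                                             kCFURLPOSIXPathStyle, false);
AudioFileOpenURL(url, kAudioFileReadPermission, 0, &audioFile);

// Ask the file how many packets it holds
UInt64 packetCount = 0;
UInt32 propSize = sizeof(packetCount);
AudioFileGetProperty(audioFile, kAudioFilePropertyAudioDataPacketCount,
                     &propSize, &packetCount);

// 4 bytes per packet (one stereo frame of two SInt16 samples), per the format above
UInt32 numBytes   = (UInt32)packetCount * 4;
UInt32 numPackets = (UInt32)packetCount;
void *audioData   = malloc(numBytes);

// CBR data, so no packet descriptions are needed
AudioFileReadPackets(audioFile, false, &numBytes, NULL, 0, &numPackets, audioData);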

Thanks!


2 Answers


I'm not exactly sure why this callback is handing you back UInt32 data, but I suspect it is actually two interleaved UInt16 samples, one for each channel. In any case, if you want floating point data from the file, you will need to convert it, and I'm not convinced that the method @John Ballinger recommends is the right way to do it. My suggestion would be:

// Get buffer in render/read callback
SInt16 *frames = (SInt16 *)inBuffer->mAudioData;
for(int i = 0; i < inNumPackets; i++) {
  Float32 currentFrame = frames[i] / 32768.0f;
  // Do stuff with currentFrame (add it to your buffer, etc.)
}

You cannot simply cast the frames to the format you need. If you want floating point data, you need to divide by 32768, the maximum value of a 16-bit sample. That produces correct floating point data in the {-1.0 .. 1.0} range.
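
If you need the two channels separately, a similar sketch (this assumes the same inBuffer as above and the interleaved 16-bit stereo layout from the format description in the question; the variable names are only for illustration):

SInt16 *samples = (SInt16 *)inBuffer->mAudioData;
UInt32 frameCount = inBuffer->mAudioDataByteSize / 4;   // 4 bytes per stereo frame
for (UInt32 i = 0; i < frameCount; i++) {
    // Interleaved layout: L0 R0 L1 R1 ...
    Float32 left  = samples[2 * i]     / 32768.0f;
    Float32 right = samples[2 * i + 1] / 32768.0f;
    // left and right are now in the -1.0 .. 1.0 range; feed them to the meter
}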

answered 2010-09-20T07:57:17

Take a look at this function... the data is SInt16.

static void recordingCallback (
    void                                *inUserData,
    AudioQueueRef                       inAudioQueue,
    AudioQueueBufferRef                 inBuffer,
    const AudioTimeStamp                *inStartTime,
    UInt32                              inNumPackets,
    const AudioStreamPacketDescription  *inPacketDesc
) {

    // This callback, being outside the implementation block, needs a reference to the AudioRecorder object
    AudioRecorder *recorder = (AudioRecorder *) inUserData;

    // If there is audio data, accumulate sample magnitudes for metering
    if (inNumPackets > 0) {

        // The queue delivers 16-bit PCM, so treat the buffer as SInt16 samples
        SInt16 *frameBuffer = (SInt16 *)inBuffer->mAudioData;
        //NSLog(@"byte size %i, number of packets %i, starting packet number %i", inBuffer->mAudioDataByteSize, inNumPackets, recorder.startingPacketNumber);

        // Sample every 20th frame and sum its absolute value
        float total = 0;
        for (UInt32 frame = 0; frame < inNumPackets; frame += 20) {
            total += (float)abs((SInt16)frameBuffer[frame]);
        }
    }
}
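
If you want a normalized meter value out of that running total, one option is to average the frames that were actually sampled and divide by 32768 (a sketch only; sampledFrames and level are illustrative names, not part of the code above):

// Inside the if block, right after the loop:
UInt32 sampledFrames = (inNumPackets + 19) / 20;    // every 20th frame was summed above
float level = (total / sampledFrames) / 32768.0f;   // rough level in the 0.0 .. 1.0 range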
answered 2010-09-18T06:10:07