Writing audio to disk from an IO unit

dub*_*eat 10 iphone core-audio audiounit ios

Rewriting this question in the hope of making it more approachable.

My problem is that I cannot successfully write audio from a Remote IO unit to a file on disk.

The steps I have taken were:

I open an mp3 file and extract its audio into buffers. I set up an ASBD for my graph based on the graph's properties. I set up and run my graph, looping the extracted audio, and sound comes out of the speakers successfully!

What I'm having difficulty with is taking the audio samples from the Remote IO callback and writing them to an audio file on disk, which I'm doing with ExtAudioFileWriteAsync.

The audio file does get written, and it bears some audible resemblance to the original mp3, but it sounds very distorted.

I'm not sure whether the problem is:

A) ExtAudioFileWriteAsync can't write the samples as fast as the IO unit callback delivers them.

- or -

B) I have set up the ASBD for the ExtAudioFile reference incorrectly. I wanted to start by saving out a WAV file. I'm not sure whether I've described that correctly in the ASBD below.

Secondly, I'm not sure what value to pass for the inChannelLayout property when creating the audio file.

And finally, I'm very unsure which ASBD to use for kExtAudioFileProperty_ClientDataFormat. I had been using my stereo stream format, but a closer look at the docs says this must be PCM. Should it be the same format as the Remote IO's output? If so, was it a mistake to set the Remote IO's output format to stereoStreamFormat in the first place?

I realize there's a lot to this question, but I have so many uncertainties that I can't seem to resolve on my own.

Setting up the stereo stream format

- (void)setupStereoStreamFormat
{
    // kAudioFormatFlagsAudioUnitCanonical is non-interleaved 8.24 fixed-point,
    // so the per-packet/per-frame byte counts describe a single channel.
    size_t bytesPerSample = sizeof(AudioUnitSampleType);
    stereoStreamFormat.mFormatID         = kAudioFormatLinearPCM;
    stereoStreamFormat.mFormatFlags      = kAudioFormatFlagsAudioUnitCanonical;
    stereoStreamFormat.mBytesPerPacket   = bytesPerSample;
    stereoStreamFormat.mFramesPerPacket  = 1;
    stereoStreamFormat.mBytesPerFrame    = bytesPerSample;
    stereoStreamFormat.mChannelsPerFrame = 2;                    // 2 indicates stereo
    stereoStreamFormat.mBitsPerChannel   = 8 * bytesPerSample;
    stereoStreamFormat.mSampleRate       = engineDescribtion.samplerate;

    NSLog(@"Set up the stereo stream format");
}

Setting up the remoteio callback with the stereo stream format

AudioUnitSetProperty(engineDescribtion.masterChannelMixerUnit, 
                     kAudioUnitProperty_StreamFormat, 
                     kAudioUnitScope_Output, 
                     masterChannelMixerUnitloop, 
                     &stereoStreamFormat, 
                     sizeof(stereoStreamFormat));

AudioUnitSetProperty(engineDescribtion.masterChannelMixerUnit, 
                     kAudioUnitProperty_StreamFormat, 
                     kAudioUnitScope_Input, 
                     masterChannelMixerUnitloop, 
                     &stereoStreamFormat, 
                     sizeof(stereoStreamFormat));




static OSStatus masterChannelMixerUnitCallback(void *inRefCon, 
                              AudioUnitRenderActionFlags *ioActionFlags, 
                              const AudioTimeStamp *inTimeStamp, 
                              UInt32 inBusNumber, 
                              UInt32 inNumberFrames, 
                              AudioBufferList *ioData)

{
    Engine *engine = (Engine *)inRefCon;

    AudioUnitRender(engineDescribtion.equnit, ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);

    if(engine->isrecording)
    {
        ExtAudioFileWriteAsync(engine->recordingfileref, inNumberFrames, ioData);
    }

    return noErr;
}

**Recording setup**

- (void)startrecording
{
    NSArray  *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    destinationFilePath = [[NSString alloc] initWithFormat:@"%@/testrecording.wav", documentsDirectory];
    destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);

    OSStatus status;

    // prepare a 16-bit int file format, sample channel count and sample rate
    AudioStreamBasicDescription dstFormat;
    dstFormat.mSampleRate       = 44100.0;
    dstFormat.mFormatID         = kAudioFormatLinearPCM;
    dstFormat.mFormatFlags      = kAudioFormatFlagsNativeEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
    dstFormat.mBytesPerPacket   = 4;
    dstFormat.mBytesPerFrame    = 4;
    dstFormat.mFramesPerPacket  = 1;
    dstFormat.mChannelsPerFrame = 2;
    dstFormat.mBitsPerChannel   = 16;
    dstFormat.mReserved         = 0;

    // create the capture file
    status = ExtAudioFileCreateWithURL(destinationURL, kAudioFileWAVEType, &dstFormat, NULL, kAudioFileFlags_EraseFile, &recordingfileref);
    CheckError(status, "couldn't create audio file");

    // set the capture file's client format to be the canonical format from the graph
    status = ExtAudioFileSetProperty(recordingfileref, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &stereoStreamFormat);
    CheckError(status, "couldn't set input format");

    ExtAudioFileSeek(recordingfileref, 0);
    isrecording = YES;

    // [documentsDirectory release];
}
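(Editor's note, not part of the original post:) One thing worth checking in `startrecording`: to the best of my knowledge, Apple's ExtendedAudioFile header advises priming ExtAudioFileWriteAsync with zero frames and a NULL buffer from a non-realtime thread, because the first call allocates the asynchronous buffers. A hedged sketch of that change, assuming the ivars above:

```objc
// Sketch: prime the async writer before setting isrecording = YES, so the
// first (allocating) call doesn't happen inside the render callback.
ExtAudioFileWriteAsync(recordingfileref, 0, NULL);
isrecording = YES;
```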

Edit 1

I'm really stabbing in the dark now, but do I need to use an audio converter, or will kExtAudioFileProperty_ClientDataFormat take care of that?

Edit 2

I have attached two audio samples. The first is the original audio that I loop and am trying to copy. The second is the recorded audio of that loop. Hopefully it will give someone an idea of what's going wrong.

The original mp3

The problematic recorded mp3

dub*_*eat 9

After several days of tears and hair-pulling, I have a solution.

In my own code and in other examples I have seen, ExtAudioFileWriteAsync was called from within the Remote IO unit's callback, like this:

**Remote IO unit callback**

static OSStatus masterChannelMixerUnitCallback(void *inRefCon, 
                              AudioUnitRenderActionFlags *ioActionFlags, 
                              const AudioTimeStamp *inTimeStamp, 
                              UInt32 inBusNumber, 
                              UInt32 inNumberFrames, 
                              AudioBufferList *ioData)

{
    Engine *engine = (Engine *)inRefCon;

    AudioUnitRender(engineDescribtion.equnit, ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);

    if(engine->isrecording)
    {
        ExtAudioFileWriteAsync(engine->recordingfileref, inNumberFrames, ioData);
    }

    return noErr;
}

In that callback I pull audio through another audio unit, one that applies EQs and mixes the audio.

When I moved the ExtAudioFileWriteAsync call out of the Remote IO callback and into that other callback that the Remote IO pulls from, the file was written successfully!!

*EQ unit callback*

static OSStatus outputCallback(void *inRefCon, 
                               AudioUnitRenderActionFlags *ioActionFlags, 
                               const AudioTimeStamp *inTimeStamp, 
                               UInt32 inBusNumber, 
                               UInt32 inNumberFrames, 
                               AudioBufferList *ioData) {  


    AudioUnitRender(engineDescribtion.masterChannelMixerUnit, ioActionFlags, inTimeStamp, 0, inNumberFrames, ioData);

    // process audio here

    Engine *engine = (Engine *)inRefCon;
    OSStatus s;

    if(engine->isrecording)
    {
        s = ExtAudioFileWriteAsync(engine->recordingfileref, inNumberFrames, ioData);
    }

    return noErr;
}

So that I fully understand why my solution works, can someone explain to me why writing the data to a file from Remote IO's ioData buffer list produces distorted audio, while writing the data one step further down the chain produces perfect audio?