How to record the sound produced by the mixer unit output (iOS Core Audio & Audio Graph)

lef*_*kir 14 iphone core-audio audio-recording audiounit ios

I'm trying to record the sound produced by the mixer unit's output.

Currently my code is based on Apple's MixerHost iOS demo application: the mixer node is connected to the Remote I/O node in the audio graph.
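The wiring itself is the standard MixerHost node connection, something like this (a sketch; processingGraph, mixerNode and iONode are set up as in the demo):

OSStatus result = AUGraphConnectNodeInput(processingGraph,
                                          mixerNode, // source node
                                          0,         // source output bus
                                          iONode,    // destination node
                                          0);        // destination input bus
if (noErr != result) { /* handle the error */ }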

I'm trying to set an input callback on the Remote I/O node's input, which is fed by the mixer output.

I'm doing something wrong, but I can't find the mistake.

Here is the code. It runs after the multichannel mixer unit has been set up:

UInt32 flag = 1;

// Enable IO for playback
result = AudioUnitSetProperty(iOUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Output, 
                              0, // Output bus
                              &flag, 
                              sizeof(flag));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty EnableIO" withStatus: result]; return;}

/* can't do that because *** AudioUnitSetProperty EnableIO error: -1073752493 00000000
result = AudioUnitSetProperty(iOUnit, kAudioOutputUnitProperty_EnableIO, kAudioUnitScope_Input, 
                              0, // Output bus
                              &flag, 
                              sizeof(flag));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty EnableIO" withStatus: result]; return;}
*/
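For the record, the usual pattern for enabling hardware input on the Remote I/O unit is the input scope of element 1 (the microphone side), which may be why the call above fails on element 0; a sketch of that call:

UInt32 enableInput = 1;
// On the Remote I/O unit, hardware input is element 1 and hardware
// output is element 0; input is enabled on the *input scope* of element 1.
result = AudioUnitSetProperty(iOUnit,
                              kAudioOutputUnitProperty_EnableIO,
                              kAudioUnitScope_Input,
                              1, // Input element (microphone)
                              &enableInput,
                              sizeof(enableInput));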

Then I create a stream format:

// I/O stream format
iOStreamFormat.mSampleRate          = 44100.0;
iOStreamFormat.mFormatID            = kAudioFormatLinearPCM;
iOStreamFormat.mFormatFlags         = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
iOStreamFormat.mFramesPerPacket     = 1;
iOStreamFormat.mChannelsPerFrame    = 1;
iOStreamFormat.mBitsPerChannel      = 16;
iOStreamFormat.mBytesPerPacket      = 2;
iOStreamFormat.mBytesPerFrame       = 2;

[self printASBD: iOStreamFormat];

Then I assign the format and specify the sample rate:

result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 
                              1, // Input bus 
                              &iOStreamFormat, 
                              sizeof(iOStreamFormat));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty StreamFormat" withStatus: result]; return;}

result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 
                              0, // Output bus 
                              &iOStreamFormat, 
                              sizeof(iOStreamFormat));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty StreamFormat" withStatus: result]; return;}

// SampleRate I/O 
result = AudioUnitSetProperty (iOUnit, kAudioUnitProperty_SampleRate, kAudioUnitScope_Input,
                               0, // Output
                               &graphSampleRate,
                               sizeof (graphSampleRate));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty (set I/O unit input stream format)" withStatus: result]; return;}

Then I try to set up a render callback.

Solution 1 >>> my recording callback is never called

effectState.rioUnit = iOUnit;

AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc        = &recordingCallback;
renderCallbackStruct.inputProcRefCon  = &effectState;
result = AudioUnitSetProperty (iOUnit, kAudioUnitProperty_SetRenderCallback, kAudioUnitScope_Input,
                               0, // Output bus
                               &renderCallbackStruct,
                               sizeof (renderCallbackStruct));
if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty SetRenderCallback" withStatus: result]; return;}

Solution 2 >>> my application crashes on launch

AURenderCallbackStruct renderCallbackStruct;
renderCallbackStruct.inputProc        = &recordingCallback;
renderCallbackStruct.inputProcRefCon  = &effectState;

result = AUGraphSetNodeInputCallback (processingGraph, iONode,
                                       0, // Output bus
                                       &renderCallbackStruct);
if (noErr != result) {[self printErrorMessage: @"AUGraphSetNodeInputCallback (I/O unit input callback bus 0)" withStatus: result]; return;}

If anyone has an idea...

EDIT Solution 3 (thanks to arlo's answer) >> there is now a format problem

AudioStreamBasicDescription dstFormat = {0};
dstFormat.mSampleRate=44100.0;
dstFormat.mFormatID=kAudioFormatLinearPCM;
dstFormat.mFormatFlags=kAudioFormatFlagsNativeEndian|kAudioFormatFlagIsSignedInteger|kAudioFormatFlagIsPacked;
dstFormat.mBytesPerPacket=4;
dstFormat.mBytesPerFrame=4;
dstFormat.mFramesPerPacket=1;
dstFormat.mChannelsPerFrame=2;
dstFormat.mBitsPerChannel=16;
dstFormat.mReserved=0;

// NB: stereoStreamFormat (presumably the MixerHost canonical 8.24 stereo
// format) is what gets set on the I/O unit here; dstFormat above is only
// used for the output file.
result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Output, 
                     1, 
                     &stereoStreamFormat, 
                     sizeof(stereoStreamFormat));

if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;}


result = AudioUnitSetProperty(iOUnit, kAudioUnitProperty_StreamFormat, kAudioUnitScope_Input, 
                     0, 
                     &stereoStreamFormat, 
                     sizeof(stereoStreamFormat));

if (noErr != result) {[self printErrorMessage: @"AudioUnitSetProperty" withStatus: result]; return;}


AudioUnitAddRenderNotify(
                         iOUnit,
                         &recordingCallback,
                         &effectState
                         );

And the file setup:

if (noErr != result) {[self printErrorMessage: @"AUGraphInitialize" withStatus: result]; return;}

// Initialize the audio file
NSArray  *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *destinationFilePath = [[[NSString alloc] initWithFormat: @"%@/output.caf", documentsDirectory] autorelease];
NSLog(@">>> %@", destinationFilePath);
CFURLRef destinationURL = CFURLCreateWithFileSystemPath(kCFAllocatorDefault, (CFStringRef)destinationFilePath, kCFURLPOSIXPathStyle, false);

OSStatus setupErr = ExtAudioFileCreateWithURL(destinationURL, kAudioFileWAVEType, &dstFormat, NULL, kAudioFileFlags_EraseFile, &effectState.audioFileRef);  
CFRelease(destinationURL);
NSAssert(setupErr == noErr, @"Couldn't create file for writing");

setupErr = ExtAudioFileSetProperty(effectState.audioFileRef, kExtAudioFileProperty_ClientDataFormat, sizeof(AudioStreamBasicDescription), &stereoStreamFormat);
NSAssert(setupErr == noErr, @"Couldn't create file for format");

// Priming call: 0 frames and a NULL buffer list initialize the async write buffers
setupErr = ExtAudioFileWriteAsync(effectState.audioFileRef, 0, NULL);
NSAssert(setupErr == noErr, @"Couldn't initialize write buffers for audio file");

And the recording callback:

static OSStatus recordingCallback(void                        *inRefCon,
                                  AudioUnitRenderActionFlags  *ioActionFlags,
                                  const AudioTimeStamp        *inTimeStamp,
                                  UInt32                       inBusNumber,
                                  UInt32                       inNumberFrames,
                                  AudioBufferList             *ioData)
{
    if (*ioActionFlags == kAudioUnitRenderAction_PostRender && inBusNumber == 0)
    {
        EffectState *effectState = (EffectState *)inRefCon;

        ExtAudioFileWriteAsync(effectState->audioFileRef, inNumberFrames, ioData);
    }
    return noErr;
}

Something is missing from the output file output.caf :). I'm completely lost about which formats to apply.

arl*_*dia 15

I don't think you need to enable input on the I/O unit. I would also comment out the format and sample-rate configuration you're doing on the I/O unit until you get your callback running, because a mismatched or unsupported format can prevent the audio units from linking together.

To add the callback, try this:

AudioUnitAddRenderNotify(
    iOUnit,
    &recordingCallback,
    self
);

Apparently the other methods replace the node connection, but this one doesn't, so your audio units can stay connected even though you've added a callback.

Once your callback is running, if you find that there's no data in the buffers (ioData), wrap this check around your callback code:

if (*ioActionFlags == kAudioUnitRenderAction_PostRender) {
    // your code
}

This is needed because a callback added this way runs both before and after the audio unit renders its audio, but you only want to run your code after it renders.
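A minimal skeleton of such a notify callback might look like this (a sketch; note that ioActionFlags is a bit mask, so a bitwise test with & is safer than == in case other flags happen to be set):

static OSStatus renderNotify(void                        *inRefCon,
                             AudioUnitRenderActionFlags  *ioActionFlags,
                             const AudioTimeStamp        *inTimeStamp,
                             UInt32                       inBusNumber,
                             UInt32                       inNumberFrames,
                             AudioBufferList             *ioData)
{
    if (*ioActionFlags & kAudioUnitRenderAction_PreRender) {
        // Called before the unit renders: ioData is not filled yet.
        return noErr;
    }
    if (*ioActionFlags & kAudioUnitRenderAction_PostRender) {
        // Called after rendering: ioData now holds the rendered audio.
        // This is where the samples can be inspected or written out.
    }
    return noErr;
}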

Once the callback is running, the next step is to figure out what audio format it's receiving and to handle it appropriately. Try adding this to your callback:

SInt16 *dataLeftChannel = (SInt16 *)ioData->mBuffers[0].mData;
for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
    NSLog(@"sample %lu: %d", frameNumber, dataLeftChannel[frameNumber]);
}

This will slow your app down so much that it will probably prevent any audio from actually playing, but you should be able to run it long enough to see what the samples look like. If the callback is receiving 16-bit audio, the samples should be positive or negative integers between -32000 and 32000. If the samples alternate between a normal-looking number and a much smaller number, try this code in your callback instead:

SInt32 *dataLeftChannel = (SInt32 *)ioData->mBuffers[0].mData;
for (UInt32 frameNumber = 0; frameNumber < inNumberFrames; ++frameNumber) {
    NSLog(@"sample %lu: %ld", frameNumber, dataLeftChannel[frameNumber]);
}

That should show you the complete 8.24 samples.
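As a quick sanity check, an 8.24 fixed-point sample can be normalized to a float by dividing by 2^24; a small sketch, assuming the canonical 8.24 layout:

SInt32 sample824 = dataLeftChannel[frameNumber];
// 8.24 fixed point: 8 integer bits, 24 fractional bits, so dividing
// by 2^24 yields a value roughly in the range [-1.0, 1.0].
Float32 normalized = (Float32)sample824 / (Float32)(1 << 24);
NSLog(@"sample %lu as float: %f", frameNumber, normalized);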

If you can save the data in the format the callback receives, then you have what you need. If you need to save it in a different format, you should be able to convert the format in the Remote I/O audio unit... but I couldn't figure out how to do that when it's connected to a Multichannel Mixer unit. Alternatively, you can convert the data using Audio Converter Services. First, define the input and output formats (on iOS, the canonical AudioUnitSampleType is an SInt32 carrying 8.24 fixed-point samples, hence the 4-byte frames below):

AudioStreamBasicDescription monoCanonicalFormat;
size_t bytesPerSample = sizeof (AudioUnitSampleType);
monoCanonicalFormat.mFormatID          = kAudioFormatLinearPCM;
monoCanonicalFormat.mFormatFlags       = kAudioFormatFlagsAudioUnitCanonical;
monoCanonicalFormat.mBytesPerPacket    = bytesPerSample;
monoCanonicalFormat.mFramesPerPacket   = 1;
monoCanonicalFormat.mBytesPerFrame     = bytesPerSample;
monoCanonicalFormat.mChannelsPerFrame  = 1; 
monoCanonicalFormat.mBitsPerChannel    = 8 * bytesPerSample;
monoCanonicalFormat.mSampleRate        = graphSampleRate;

AudioStreamBasicDescription mono16Format;
bytesPerSample = sizeof (SInt16);
mono16Format.mFormatID          = kAudioFormatLinearPCM;
mono16Format.mFormatFlags       = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;
mono16Format.mChannelsPerFrame  = 1;
mono16Format.mSampleRate        = graphSampleRate;
mono16Format.mBitsPerChannel    = 16;
mono16Format.mFramesPerPacket   = 1;
mono16Format.mBytesPerPacket = 2;
mono16Format.mBytesPerFrame = 2;

Then define the converter somewhere outside the callback, and create a temporary buffer for handling the data during conversion:

AudioConverterRef formatConverterCanonicalTo16;
@property AudioConverterRef formatConverterCanonicalTo16;
@synthesize formatConverterCanonicalTo16;
AudioConverterNew(
    &monoCanonicalFormat,
    &mono16Format,
    &formatConverterCanonicalTo16
);

SInt16 *data16;
@property (readwrite) SInt16 *data16;
@synthesize data16;
data16 = malloc(sizeof(SInt16) * 4096);

Then add this to your callback, before you save your data:

UInt32 dataSizeCanonical = ioData->mBuffers[0].mDataByteSize;
SInt32 *dataCanonical = (SInt32 *)ioData->mBuffers[0].mData;
UInt32 dataSize16 = dataSizeCanonical;

AudioConverterConvertBuffer(
    effectState->formatConverterCanonicalTo16,
    dataSizeCanonical,
    dataCanonical,
    &dataSize16,
    effectState->data16
);

Then you can save data16, which is in 16-bit format and is probably what you want saved in your file. It will be more compatible and half the size of the canonical data.
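To actually write it, data16 can be wrapped in a temporary AudioBufferList before handing it to ExtAudioFileWriteAsync; a sketch, assuming the file's client data format has been set to mono16Format:

AudioBufferList outBufferList;
outBufferList.mNumberBuffers = 1;
outBufferList.mBuffers[0].mNumberChannels = 1;
outBufferList.mBuffers[0].mDataByteSize   = dataSize16;
outBufferList.mBuffers[0].mData           = effectState->data16;

ExtAudioFileWriteAsync(effectState->audioFileRef,
                       (UInt32)(dataSize16 / sizeof(SInt16)), // frame count (mono, 16-bit)
                       &outBufferList);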

When you're done, you can clean up a couple of things:

AudioConverterDispose(formatConverterCanonicalTo16);
free(data16);