Passing AVCaptureAudioDataOutput data into vDSP / Accelerate.framework

jtb*_*des 5 signal-processing core-audio avfoundation ios accelerate-framework

I'm trying to create an app that runs an FFT on microphone data, so I can examine the loudest frequencies in the input.

I see that there are many ways of getting audio input (the RemoteIO AudioUnit, AudioQueue Services, and AVFoundation), but AVFoundation seems to be the simplest. I have this setup:

// Configure the audio session
AVAudioSession *session = [AVAudioSession sharedInstance];
[session setCategory:AVAudioSessionCategoryRecord error:NULL];
[session setMode:AVAudioSessionModeMeasurement error:NULL];
[session setActive:YES error:NULL];

// Optional - default gives 1024 samples at 44.1kHz
//[session setPreferredIOBufferDuration:samplesPerSlice/session.sampleRate error:NULL];

// Configure the capture session (strongly-referenced instance variable, otherwise the capture stops after one slice)
_captureSession = [[AVCaptureSession alloc] init];

// Configure audio device input
AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:NULL];
[_captureSession addInput:input];

// Configure audio data output
AVCaptureAudioDataOutput *output = [[AVCaptureAudioDataOutput alloc] init];
dispatch_queue_t queue = dispatch_queue_create("My callback", DISPATCH_QUEUE_SERIAL);
[output setSampleBufferDelegate:self queue:queue];
[_captureSession addOutput:output];

// Start the capture session.   
[_captureSession startRunning];

(Plus error checking, omitted here for readability.)

Then I implement the following AVCaptureAudioDataOutputSampleBufferDelegate method:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    NSLog(@"Num samples: %ld", CMSampleBufferGetNumSamples(sampleBuffer));
    // Usually gives 1024 (except the first slice)
}

I'm not sure what the next step should be. What exactly does the CMSampleBuffer format describe (and what assumptions, if any, can be made about it)? How do I get the raw audio data into vDSP_fft_zrip with the least amount of extra preprocessing? (Also, how would you recommend verifying that the raw data I'm seeing is correct?)

Tar*_*ark 6

CMSampleBufferRef is an opaque type that contains 0 or more media samples. There's a bit of blurb about it in the docs:

http://developer.apple.com/library/ios/#documentation/CoreMedia/Reference/CMSampleBuffer/Reference/reference.html

In this case it will contain an audio buffer, plus a description of the sample format and timing information, and so on. If you're really interested, just set a breakpoint in the delegate callback and poke around.

The first step is to get a pointer to the data buffer that has been returned:

// get a pointer to the audio bytes
CMItemCount numSamples = CMSampleBufferGetNumSamples(sampleBuffer);
CMBlockBufferRef audioBuffer = CMSampleBufferGetDataBuffer(sampleBuffer);
size_t lengthAtOffset;
size_t totalLength;
char *samples;
CMBlockBufferGetDataPointer(audioBuffer, 0, &lengthAtOffset, &totalLength, &samples);

The default sample format for the iPhone microphone is linear PCM with 16-bit samples. This may be mono or stereo depending on whether an external microphone is attached. To calculate the FFT we need a float vector. Fortunately there is an Accelerate function to do the conversion for us:

// check what sample format we have
// this should always be linear PCM
// but may have 1 or 2 channels
CMAudioFormatDescriptionRef format = CMSampleBufferGetFormatDescription(sampleBuffer);
const AudioStreamBasicDescription *desc = CMAudioFormatDescriptionGetStreamBasicDescription(format);
assert(desc->mFormatID == kAudioFormatLinearPCM);
if (desc->mChannelsPerFrame == 1 && desc->mBitsPerChannel == 16) {
    float *convertedSamples = malloc(numSamples * sizeof(float));
    vDSP_vflt16((short *)samples, 1, convertedSamples, 1, numSamples);
} else {
    // handle other cases as required
}
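If the format turns out to be 16-bit stereo (for instance with some external microphones), the samples will typically be interleaved left/right. One way to handle that case, sketched here under that assumption and reusing desc, samples and numSamples from the snippet above, is to convert each channel with a stride of 2 (a real implementation should also check desc->mFormatFlags, e.g. kAudioFormatFlagIsNonInterleaved):

// Sketch only: assumes 16-bit *interleaved* stereo PCM.
// numSamples is the number of frames; each frame holds one left and one right sample.
if (desc->mChannelsPerFrame == 2 && desc->mBitsPerChannel == 16) {
    float *left  = malloc(numSamples * sizeof(float));
    float *right = malloc(numSamples * sizeof(float));

    // Step through the interleaved shorts with a stride of 2:
    // even indices are the left channel, odd indices the right.
    vDSP_vflt16((short *)samples,     2, left,  1, numSamples);
    vDSP_vflt16((short *)samples + 1, 2, right, 1, numSamples);

    // ... run the FFT on either channel, then free(left) and free(right).
}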

Now you have a float vector from the sample buffer that you can use with vDSP_fft_zrip. It doesn't seem possible to change the input format from the microphone to float samples with AVFoundation, so you're stuck with this last conversion step. In practice I would keep the buffers around, realloc'ing them if necessary when a larger buffer arrives, so that you're not mallocing and freeing a buffer with every delegate callback.
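For reference, here is a rough sketch of what the vDSP_fft_zrip step itself could look like, assuming 1024 mono samples already converted into convertedSamples as above (in a real app the FFTSetup should be created once and reused rather than rebuilt in every callback):

#import <Accelerate/Accelerate.h>

// Sketch: forward real-to-complex FFT of 1024 mono floats with vDSP_fft_zrip,
// assuming convertedSamples holds numSamples == 1024 samples (see above).
enum { n = 1024, log2n = 10 };                 // n = 2^log2n

// Create the FFT setup once (e.g. in your init) and reuse it for every slice.
FFTSetup fftSetup = vDSP_create_fftsetup(log2n, kFFTRadix2);

// vDSP_fft_zrip works in place on split-complex data, so first pack the
// real input into even (real) / odd (imaginary) halves.
float real[n / 2];
float imag[n / 2];
DSPSplitComplex splitComplex = { .realp = real, .imagp = imag };
vDSP_ctoz((const DSPComplex *)convertedSamples, 2, &splitComplex, 1, n / 2);

// In-place forward FFT.
vDSP_fft_zrip(fftSetup, &splitComplex, 1, log2n, kFFTDirection_Forward);

// Squared magnitude of each of the n/2 frequency bins.
// Note: for zrip, bin 0 packs the DC term in realp[0] and the Nyquist term in imagp[0].
float magnitudes[n / 2];
vDSP_zvmags(&splitComplex, 1, magnitudes, 1, n / 2);

vDSP_destroy_fftsetup(fftSetup);               // when you no longer need it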

As for your last question, I guess the easiest way would be to inject a known input and check that it gives you the correct response. You could play a sine wave into the microphone and check that your FFT has a peak in the correct frequency bin, something like that.
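For example, with a test tone playing you could locate the loudest bin and convert it to Hz (the bin spacing is sampleRate / n). A small sketch, reusing magnitudes and n from the FFT sketch above:

// Sketch: find the loudest frequency bin and convert it to Hz.
float maxMagnitude = 0;
vDSP_Length maxIndex = 0;
vDSP_maxvi(magnitudes, 1, &maxMagnitude, &maxIndex, n / 2);

double sampleRate = [AVAudioSession sharedInstance].sampleRate;   // e.g. 44100 Hz
double peakFrequency = maxIndex * sampleRate / n;
NSLog(@"Loudest frequency ~ %.1f Hz", peakFrequency);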