How do I write the audio recorded locally from the microphone to a file using AudioBuffer on iPhone?

Bal*_*ala 5 iphone audio ios audiobuffer

I'm new to the Audio frameworks. Can someone help me write the audio captured from the microphone to an audio file?

Below is the code that plays the microphone input through the iPhone speaker; now I would like to save that audio on the iPhone for future use.

I found the code to record audio from the microphone here: http://www.stefanpopp.de/2011/capture-iphone-microphone/

/**
 Code starts here for playing the recorded voice.
 */

static OSStatus playbackCallback(void *inRefCon, 
                                 AudioUnitRenderActionFlags *ioActionFlags, 
                                 const AudioTimeStamp *inTimeStamp, 
                                 UInt32 inBusNumber, 
                                 UInt32 inNumberFrames, 
                                 AudioBufferList *ioData) {    

    /**
     This is the reference to the object that owns the callback.
     */
    AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;

    // iterate over the incoming stream and copy to the output stream
    for (int i=0; i < ioData->mNumberBuffers; i++) { 
        AudioBuffer buffer = ioData->mBuffers[i];

        // find minimum size
        UInt32 size = min(buffer.mDataByteSize, [audioProcessor audioBuffer].mDataByteSize);

        // copy buffer to audio buffer which gets played after function return
        memcpy(buffer.mData, [audioProcessor audioBuffer].mData, size);

        // set data size
        buffer.mDataByteSize = size; 

        // get a copy of the recorder struct (written back below)
        Recorder recInfo = audioProcessor.audioRecorder;
        // write the bytes
        OSStatus audioErr = noErr;
        if (recInfo.running) {
            audioErr = AudioFileWriteBytes (recInfo.recordFile,
                                            false,
                                            recInfo.inStartingByte,
                                            &size,
                                            buffer.mData); // pass mData itself, not its address
            assert (audioErr == noErr);
            // increment our byte count
            recInfo.inStartingByte += (SInt64)size; // size should be the number of bytes
            audioProcessor.audioRecorder = recInfo;
        }
    }

    return noErr;
}

- (void)prepareAudioFileToRecord {

    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;

    NSTimeInterval time = [[NSDate date] timeIntervalSince1970]; // returned as a double
    long digits = (long)time; // this is the first 10 digits
    int decimalDigits = (int)(fmod(time, 1) * 1000); // this will get the 3 missing digits
    //    long timestamp = (digits * 1000) + decimalDigits;
    NSString *timeStampValue = [NSString stringWithFormat:@"%ld", digits];
    //    NSString *timeStampValue = [NSString stringWithFormat:@"%ld.%d", digits, decimalDigits];

    NSString *fileName = [NSString stringWithFormat:@"test%@.caf", timeStampValue];
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // modify the ASBD (see EDIT: towards the end of this post!)
    audioFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;

    // set up the file (bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileCAFType,
                                      &audioFormat,
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert (audioErr == noErr); // simple error checking
    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
    self.audioRecorder = audioRecorder;
}

Thanks in advance, Bala

Bam*_*rld 8

To write the bytes from an AudioBuffer to a file locally, we need help from Audio File Services (see Apple's Audio File Services reference), which is included in the AudioToolbox framework.

Conceptually, we will do the following: set up an audio file and maintain a reference to it (we need this reference to be accessible from the render callback you included in your post). We also need to keep track of the number of bytes written each time the callback fires. Finally, we need a flag to check that lets us know to stop writing to the file and close it.

Because the code in the link you provided declares an AudioStreamBasicDescription that is LPCM, and therefore has a constant bit rate, we can use the AudioFileWriteBytes function (writing compressed audio is more involved and uses the AudioFileWritePackets function).

Let's start by declaring a custom struct (containing all the extra data we need), adding an instance variable of this custom struct, and creating a property that points to the struct variable. We'll add this to the AudioProcessor custom class, since you already have access to this object from the callback, where it is typecast in this line:

AudioProcessor *audioProcessor = (AudioProcessor*) inRefCon;

Add this to AudioProcessor.h (above the @interface):

typedef struct Recorder {
    AudioFileID recordFile;
    SInt64 inStartingByte;
    Boolean running;
} Recorder;

Now let's add the instance variable, and later make a pointer property assigned to it (so we can access it from the callback function). In the @interface, add an instance variable named audioRecorder, and also make the ASBD available to the class.

Recorder audioRecorder;
AudioStreamBasicDescription recordFormat; // assign this ivar where the ASBD is created in the class

In the - (void)initializeAudio method, comment out or delete this line, since we have made recordFormat an ivar:

//AudioStreamBasicDescription recordFormat;

Now add the kAudioFormatFlagIsBigEndian format flag where the ASBD is set up:

// also modify the ASBD in the AudioProcessor classes -(void)initializeAudio method (see EDIT: towards the end of this post!)
    recordFormat.mFormatFlags = kAudioFormatFlagIsBigEndian | kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked;

Finally, add a property that is a pointer to the audioRecorder instance variable, and don't forget to synthesize it in AudioProcessor.m. We will name the property audioRecorderPointer:

@property Recorder *audioRecorderPointer;

// in .m synthesise the property
@synthesize audioRecorderPointer;

Now let's assign the pointer to the ivar (this could be placed in the - (void)initializeAudio method of the AudioProcessor class):

// ASSIGN POINTER PROPERTY TO IVAR
self.audioRecorderPointer = &audioRecorder;

Now in AudioProcessor.m, let's add a method to set up the file and open it so we can write to it. This should be called before you start running the AUGraph:

- (void)prepareAudioFileToRecord {
    // let's set up a test file in the documents directory
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
    NSString *fileName = @"test_recording.aif";
    NSString *filePath = [basePath stringByAppendingPathComponent:fileName];
    NSURL *fileURL = [NSURL fileURLWithPath:filePath];

    // set up the file (bridge cast will differ if using ARC)
    OSStatus audioErr = noErr;
    audioErr = AudioFileCreateWithURL((CFURLRef)fileURL,
                                      kAudioFileAIFFType,
                                      &recordFormat,
                                      kAudioFileFlags_EraseFile,
                                      &audioRecorder.recordFile);
    assert (audioErr == noErr); // simple error checking
    audioRecorder.inStartingByte = 0;
    audioRecorder.running = true;
}

Okay, we're nearly there. Now we have a file to write to, and an AudioFileID that can be accessed from the render callback. So inside the callback function you posted, add the following near the end, just before you return noErr:

// get a pointer to the recorder struct instance variable
Recorder *recInfo = audioProcessor.audioRecorderPointer;
// write the bytes
OSStatus audioErr = noErr;
if (recInfo->running) {
    audioErr = AudioFileWriteBytes (recInfo->recordFile,
                                    false,
                                    recInfo->inStartingByte,
                                    &size,
                                    buffer.mData);
    assert (audioErr == noErr);
    // increment our byte count
    recInfo->inStartingByte += (SInt64)size; // size should be number of bytes
}

When we want to stop recording (probably invoked by some user action), simply make the running boolean false and close the file, like this, somewhere in the AudioProcessor class:

audioRecorder.running = false;
OSStatus audioErr = AudioFileClose(audioRecorder.recordFile);
assert (audioErr == noErr);

EDIT: The endianness of the samples needs to be big endian for the file, so add the kAudioFormatFlagIsBigEndian bitmask flag to the ASBD in the source code found at the link provided.

For extra info about this topic, the Apple documentation is a great resource, and I also recommend reading "Learning Core Audio" by Chris Adamson and Kevin Avila (I own a copy).

  • @Bala, see the comment above and the updated answer. I suspect the endianness of the samples has something to do with it. I also changed the audio file type from the Core Audio File format (caf) to the Audio Interchange File Format (aiff), which must be big endian. I hope this helps. (2 upvotes)