Tapping the microphone input using AVAudioEngine in Swift

bro*_*ney 10 core-audio avfoundation ios swift ios8.1

I'm very excited about the new AVAudioEngine. It seems like a nice API wrapper around Audio Units. Unfortunately, the documentation is so far nonexistent, and I'm having problems getting a simple graph to work.

Using the following simple code to set up the audio engine graph, the tap block is never called. It mimics some of the sample code floating around the web, though that code didn't work either.

let inputNode = audioEngine.inputNode
var error: NSError?
let bus = 0

inputNode.installTapOnBus(bus, bufferSize: 2048, format: inputNode.inputFormatForBus(bus)) { 
    (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
    println("sfdljk")
}

audioEngine.prepare()
if audioEngine.startAndReturnError(&error) {
    println("started audio")
} else {
    if let engineStartError = error {
        println("error starting audio: \(engineStartError.localizedDescription)")
    }
}

What I'm looking for is the raw PCM buffer for analysis. I don't need any effects or output. According to the WWDC talk "502 Audio Engine in Practice", this setup should work.

Now if you want to capture data from the input node, you can install a node tap, and we've talked about that.

But what's interesting about this particular example is, if I wanted to work with just the input node, say just capture data from the microphone and maybe examine it, analyze it in real time, or perhaps write it out to a file, I can directly install a tap on the input node.

And the tap will do the work of pulling the data from the input node, stuffing it into buffers, and then returning it to the application.

Once you have that data, you can do whatever you need to with it.
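As a sketch of what that looks like in practice, the buffer handed to the tap block exposes its raw samples through `floatChannelData`. The example below uses the modern (Swift 3+) API spellings, which differ from the Swift 1.x names in the question's code, and it needs a real audio device to actually fire:

```swift
import AVFoundation

// Sketch: reading raw PCM samples inside a tap block.
// Uses the modern API names (installTap(onBus:), inputFormat(forBus:)).
let engine = AVAudioEngine()
let input = engine.inputNode

input.installTap(onBus: 0, bufferSize: 2048,
                 format: input.inputFormat(forBus: 0)) { buffer, _ in
    // floatChannelData points at one sample array per channel.
    guard let channelData = buffer.floatChannelData else { return }
    let samples = UnsafeBufferPointer(start: channelData[0],
                                      count: Int(buffer.frameLength))
    // Example analysis: peak amplitude of channel 0 for this buffer.
    let peak = samples.map(abs).max() ?? 0
    print("peak level: \(peak)")
}

engine.prepare()
try engine.start()
```

Note that the engine is a top-level constant here, which also sidesteps the lifetime issue discussed in the accepted answer below.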

Here are some links I have tried:

  1. http://hondrouthoughts.blogspot.com/2014/09/avfoundation-audio-monitoring.html
  2. http://jamiebullock.com/post/89243252529/live-coding-audio-with-swift-playgrounds (SIGABRT in the playground at startAndReturnError)

Edit: Here is an implementation based on Thorsten Karrer's suggestion. Unfortunately, it doesn't work.

class AudioProcessor {
    let audioEngine = AVAudioEngine()

    init(){
        let inputNode = audioEngine.inputNode
        let bus = 0
        var error: NSError?

        inputNode.installTapOnBus(bus, bufferSize: 2048, format:inputNode.inputFormatForBus(bus)) {
            (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
                println("sfdljk")
        }

        audioEngine.prepare()
        audioEngine.startAndReturnError(nil)
        println("started audio")
    }
}

Tho*_*rer 22

It might be that your AVAudioEngine is going out of scope and being released by ARC ("If you liked it then you should have put a retain on it...").

The following code (the engine is moved into an ivar and thus sticks around) fires the tap:

class AppDelegate: NSObject, NSApplicationDelegate {

    let audioEngine  = AVAudioEngine()

    func applicationDidFinishLaunching(aNotification: NSNotification) {
        let inputNode = audioEngine.inputNode
        let bus = 0
        inputNode.installTapOnBus(bus, bufferSize: 2048, format: inputNode.inputFormatForBus(bus)) {
            (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in
            println("sfdljk")
        }

        audioEngine.prepare()
        audioEngine.startAndReturnError(nil)
    }
}

(Error handling removed for brevity.)


Jun*_*tar 11

Update: I have implemented a complete working example that records microphone input, applies some effects (reverb, delay, distortion) at runtime, and saves all of these effects to an output file.

var engine = AVAudioEngine()
var distortion = AVAudioUnitDistortion()
var reverb = AVAudioUnitReverb()
var audioBuffer = AVAudioPCMBuffer()
var outputFile = AVAudioFile()
var delay = AVAudioUnitDelay()

// Initialize the audio engine:

func initializeAudioEngine() {

    engine.stop()
    engine.reset()
    engine = AVAudioEngine()

    isRealTime = true
    do {
        try AVAudioSession.sharedInstance().setCategory(AVAudioSessionCategoryPlayAndRecord)

        // 128 frames at a 44.1 kHz sample rate ≈ 2.9 ms per I/O buffer
        let ioBufferDuration = 128.0 / 44100.0

        try AVAudioSession.sharedInstance().setPreferredIOBufferDuration(ioBufferDuration)

    } catch {

        assertionFailure("AVAudioSession setup error: \(error)")
    }

    let fileUrl = URLFor("/NewRecording.caf")
    print(fileUrl)
    do {

        try outputFile = AVAudioFile(forWriting: fileUrl!, settings: engine.mainMixerNode.outputFormatForBus(0).settings)
    }
    catch {
        // Don't swallow the error silently; the tap's write will fail later otherwise.
        print("Failed to open output file: \(error)")
    }

    let input = engine.inputNode!
    let format = input.inputFormatForBus(0)

    //settings for reverb
    reverb.loadFactoryPreset(.MediumChamber)
    reverb.wetDryMix = 40 //0-100 range
    engine.attachNode(reverb)

    delay.delayTime = 0.2 // 0-2 range
    engine.attachNode(delay)

    //settings for distortion
    distortion.loadFactoryPreset(.DrumsBitBrush)
    distortion.wetDryMix = 20 //0-100 range
    engine.attachNode(distortion)


    engine.connect(input, to: reverb, format: format)
    engine.connect(reverb, to: distortion, format: format)
    engine.connect(distortion, to: delay, format: format)
    engine.connect(delay, to: engine.mainMixerNode, format: format)

    assert(engine.inputNode != nil)

    isReverbOn = false

    try! engine.start()
}

// Now the recording functions:

func startRecording() {

    let mixer = engine.mainMixerNode
    let format = mixer.outputFormatForBus(0)

    mixer.installTapOnBus(0, bufferSize: 1024, format: format, block:
        { (buffer: AVAudioPCMBuffer!, time: AVAudioTime!) -> Void in

            print(NSString(string: "writing"))
            do{
                try self.outputFile.writeFromBuffer(buffer)
            }
            catch {
                print(NSString(string: "Write failed"));
            }
    })
}

func stopRecording() {

    engine.mainMixerNode.removeTapOnBus(0)
    engine.stop()
}

I hope this helps. Thanks!

  • Where do these values come from? `let ioBufferDuration = 128.0 / 44100.0` (3 upvotes)