Piping the AudioKit microphone to Google Speech-to-Text


I'm trying to pipe the microphone through AudioKit to Google's Speech-to-Text API as shown here, but I'm not sure how best to go about it.

To prepare the audio for the Speech-to-Text engine, you need to set up the encoding and pass the audio through as chunks. In the example Google provides, they use Apple's AVFoundation, but I'd like to use AudioKit so I can do some preprocessing, such as cutting out low-amplitude audio.
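For example, a tap callback could drop near-silent buffers with a simple RMS gate before anything is sent upstream. A minimal sketch of that idea (assuming the tap's default float PCM format; the 0.01 threshold is an arbitrary placeholder):

import AVFoundation

// Returns true if the buffer's RMS amplitude clears a noise floor.
// Sketch only: assumes a float PCM tap format and an arbitrary threshold.
func shouldForward(_ buffer: AVAudioPCMBuffer, threshold: Float = 0.01) -> Bool {
    guard let samples = buffer.floatChannelData?[0], buffer.frameLength > 0 else {
        return false
    }
    let n = Int(buffer.frameLength)
    var sumOfSquares: Float = 0
    for i in 0..<n {
        sumOfSquares += samples[i] * samples[i]
    }
    return sqrt(sumOfSquares / Float(n)) >= threshold
}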

I believe the correct way to do this is with a Tap.

First, I should match the format like so:

var asbd = AudioStreamBasicDescription()
asbd.mSampleRate = 16000.0
asbd.mFormatID = kAudioFormatLinearPCM
asbd.mFormatFlags = kAudioFormatFlagIsSignedInteger | kAudioFormatFlagIsPacked
asbd.mBytesPerPacket = 2
asbd.mFramesPerPacket = 1
asbd.mBytesPerFrame = 2
asbd.mChannelsPerFrame = 1
asbd.mBitsPerChannel = 16

AudioKit.format = AVAudioFormat(streamDescription: &asbd)!

Then create a tap, for example:

open class TestTap {
    internal let bufferSize: UInt32 = 1_024

    @objc public init(_ input: AKNode?) {
        input?.avAudioNode.installTap(onBus: 0, bufferSize: bufferSize, format: AudioKit.format) { buffer, _ in

         // do work here

        }
    }
}

However, I can't work out the right way to process the data in real time from AudioKit and send it to the Google Speech-to-Text API via its streamAudioData method. Or perhaps I'm taking the wrong approach altogether?

Update:

I've created the Tap like this:

open class TestTap {

    internal var audioData =  NSMutableData()
    internal let bufferSize: UInt32 = 1_024

    func toData(buffer: AVAudioPCMBuffer) -> NSData {
        let channelCount = 2  // assuming the PCM buffer has 2 channels
        let channels = UnsafeBufferPointer(start: buffer.floatChannelData, count: channelCount)
        return NSData(bytes: channels[0], length:Int(buffer.frameCapacity * buffer.format.streamDescription.pointee.mBytesPerFrame))
    }

    @objc public init(_ input: AKNode?) {

        input?.avAudioNode.installTap(onBus: 0, bufferSize: bufferSize, format: AudioKit.format) { buffer, _ in
            self.audioData.append(self.toData(buffer: buffer) as Data)

            // We recommend sending samples in 100ms chunks (from Google)
            let chunkSize: Int /* bytes/chunk */ = Int(0.1 /* seconds/chunk */
                * AudioKit.format.sampleRate /* samples/second */
                * 2 /* bytes/sample */ )

            if self.audioData.length > chunkSize {
                SpeechRecognitionService
                    .sharedInstance
                    .streamAudioData(self.audioData,
                                     completion: { response, error in
                                        if let error = error {
                                            print("ERROR: \(error.localizedDescription)")
                                            SpeechRecognitionService.sharedInstance.stopStreaming()
                                        } else if let response = response {
                                            print(response)
                                        }
                    })
                self.audioData = NSMutableData()
            }

        }
    }
}

In viewDidLoad:, I set up AudioKit with:

AKSettings.sampleRate = 16_000
AKSettings.bufferLength = .shortest

However, Google complains:

ERROR: Audio data is being streamed too fast. Please stream audio data approximately at real time.

I've tried changing several parameters, such as the chunk size, to no avail.


I found the solution here.

The audio coming off the tap is still at the hardware sample rate (44.1 kHz in my case), so it arrives faster than real time for the 16 kHz stream Google expects. Converting each buffer down to 16 kHz with AVAudioConverter before streaming fixes it. My final Tap code is:

open class GoogleSpeechToTextStreamingTap {

    internal var converter: AVAudioConverter!

    @objc public init(_ input: AKNode?, sampleRate: Double = 16_000.0) {

        // Google's LINEAR16 encoding: 16-bit signed integer PCM, mono
        let format = AVAudioFormat(commonFormat: .pcmFormatInt16, sampleRate: sampleRate, channels: 1, interleaved: false)!

        self.converter = AVAudioConverter(from: AudioKit.format, to: format)
        self.converter?.sampleRateConverterAlgorithm = AVSampleRateConverterAlgorithm_Normal
        self.converter?.sampleRateConverterQuality = .max

        let sampleRateRatio = AKSettings.sampleRate / sampleRate
        let inputBufferSize = 4410 // 100ms of 44.1K = 4410 samples

        input?.avAudioNode.installTap(onBus: 0, bufferSize: AVAudioFrameCount(inputBufferSize), format: nil) { buffer, time in

            // Downsample the tapped buffer to the 16 kHz Int16 target format
            let capacity = Int(Double(buffer.frameCapacity) / sampleRateRatio)
            let bufferPCM16 = AVAudioPCMBuffer(pcmFormat: format, frameCapacity: AVAudioFrameCount(capacity))!

            var error: NSError? = nil
            self.converter?.convert(to: bufferPCM16, error: &error) { inNumPackets, outStatus in
                outStatus.pointee = .haveData
                return buffer
            }

            // Wrap the converted Int16 samples (2 bytes each) and stream them
            let channel = UnsafeBufferPointer(start: bufferPCM16.int16ChannelData!, count: 1)
            let data = Data(bytes: channel[0], count: capacity * 2)

            SpeechRecognitionService
                .sharedInstance
                .streamAudioData(data,
                                 completion: { response, error in
                                    if let error = error {
                                        print("ERROR: \(error.localizedDescription)")
                                        SpeechRecognitionService.sharedInstance.stopStreaming()
                                    } else if let response = response {
                                        print(response)
                                    }
                })
        }
    }
}
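For completeness, here's roughly how I wire it up (a sketch assuming AudioKit 4.x; AKMicrophone and AKBooster are standard AudioKit nodes, and the zero-gain booster just keeps the signal chain alive without monitoring the mic through the speakers):

let mic = AKMicrophone()
let tap = GoogleSpeechToTextStreamingTap(mic, sampleRate: 16_000)
AudioKit.output = AKBooster(mic, gain: 0) // keep the graph running, muted

do {
    try AudioKit.start()
} catch {
    print("AudioKit failed to start: \(error)")
}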