Pei*_* Ma  10  ·  speech-recognition · ios · swift · swift3 · ios10
Basically I followed this tutorial to learn the iOS speech recognition module: https://medium.com/ios-os-x-development/speech-recognition-with-swift-in-ios-10-50d5f4e59c48
But when I test it on my iPhone 6, I always get this error: Error Domain=kAFAssistantErrorDomain Code=216 "(null)"
I have searched for it on the internet but found very little information about it.
Here is my code:
//
// ViewController.swift
// speech_sample
//
// Created by Peizheng Ma on 6/22/17.
// Copyright © 2017 Peizheng Ma. All rights reserved.
//
import UIKit
import AVFoundation
import Speech
class ViewController: UIViewController, SFSpeechRecognizerDelegate {
//MARK: speech recognize variables
let audioEngine = AVAudioEngine()
let speechRecognizer: SFSpeechRecognizer? = SFSpeechRecognizer(locale: Locale.init(identifier: "en-US"))
var request = SFSpeechAudioBufferRecognitionRequest()
var recognitionTask: SFSpeechRecognitionTask?
var isRecording = false
override func viewDidLoad() {
// super.viewDidLoad()
// get Authorization
self.requestSpeechAuthorization()
}
override func didReceiveMemoryWarning() {
super.didReceiveMemoryWarning()
// Dispose of any resources that can be recreated.
}
//MARK: properties
@IBOutlet weak var detectText: UILabel!
@IBOutlet weak var startButton: UIButton!
//MARK: actions
@IBAction func startButtonTapped(_ sender: UIButton) {
if isRecording == true {
audioEngine.stop()
// if let node = audioEngine.inputNode {
// node.removeTap(onBus: 0)
// }
audioEngine.inputNode?.removeTap(onBus: 0)
// Indicate that the audio source is finished and no more audio will be appended
self.request.endAudio()
// Cancel the previous task if it's running
if let recognitionTask = recognitionTask {
recognitionTask.cancel()
self.recognitionTask = nil
}
//recognitionTask?.cancel()
//self.recognitionTask = nil
isRecording = false
startButton.backgroundColor = UIColor.gray
} else {
self.recordAndRecognizeSpeech()
isRecording = true
startButton.backgroundColor = UIColor.red
}
}
//MARK: show alert
func showAlert(title: String, message: String, handler: ((UIAlertAction) -> Swift.Void)? = nil) {
DispatchQueue.main.async { [unowned self] in
let alertController = UIAlertController(title: title, message: message, preferredStyle: .alert)
alertController.addAction(UIAlertAction(title: "OK", style: .cancel, handler: handler))
self.present(alertController, animated: true, completion: nil)
}
}
//MARK: Recognize Speech
func recordAndRecognizeSpeech() {
// Setup Audio Session
guard let node = audioEngine.inputNode else { return }
let recordingFormat = node.outputFormat(forBus: 0)
node.installTap(onBus: 0, bufferSize: 1024, format: recordingFormat) { buffer, _ in
self.request.append(buffer)
}
audioEngine.prepare()
do {
try audioEngine.start()
} catch {
self.showAlert(title: "SpeechNote", message: "There has been an audio engine error.", handler: nil)
return print(error)
}
guard let myRecognizer = SFSpeechRecognizer() else {
self.showAlert(title: "SpeechNote", message: "Speech recognition is not supported for your current locale.", handler: nil)
return
}
if !myRecognizer.isAvailable {
self.showAlert(title: "SpeechNote", message: "Speech recognition is not currently available. Check back at a later time.", handler: nil)
// Recognizer is not available right now
return
}
recognitionTask = speechRecognizer?.recognitionTask(with: request, resultHandler: { result, error in
if let result = result {
let bestString = result.bestTranscription.formattedString
self.detectText.text = bestString
// var lastString: String = ""
// for segment in result.bestTranscription.segments {
// let indexTo = bestString.index(bestString.startIndex, offsetBy: segment.substringRange.location)
// lastString = bestString.substring(from: indexTo)
// }
// self.checkForColorsSaid(resultString: lastString)
} else if let error = error {
self.showAlert(title: "SpeechNote", message: "There has been a speech recognition error.", handler: nil)
print(error)
}
})
}
//MARK: - Check Authorization Status
func requestSpeechAuthorization() {
SFSpeechRecognizer.requestAuthorization { authStatus in
OperationQueue.main.addOperation {
switch authStatus {
case .authorized:
self.startButton.isEnabled = true
case .denied:
self.startButton.isEnabled = false
self.detectText.text = "User denied access to speech recognition"
case .restricted:
self.startButton.isEnabled = false
self.detectText.text = "Speech recognition restricted on this device"
case .notDetermined:
self.startButton.isEnabled = false
self.detectText.text = "Speech recognition not yet authorized"
}
}
}
}
}
Thank you very much.
Ray*_*ayD 17
I had the same problem when following the same (excellent) tutorial, even when using the sample code from GitHub. To solve it, I had to do two things:
First, add request.endAudio() at the beginning of the code that stops the recording in the startButtonTapped action. This marks the end of the recording. I see you have already done that in your sample code.
Second, in the recordAndRecognizeSpeech function, when the recognitionTask starts, if no speech is detected then result will be nil and the error case is triggered. So I test for result != nil before attempting to assign the result.
The code for those two functions therefore looks like this:
1. Updated startButtonTapped:
@IBAction func startButtonTapped(_ sender: UIButton) {
if isRecording {
request.endAudio() // Added line to mark end of recording
audioEngine.stop()
if let node = audioEngine.inputNode {
node.removeTap(onBus: 0)
}
recognitionTask?.cancel()
isRecording = false
startButton.backgroundColor = UIColor.gray
} else {
self.recordAndRecognizeSpeech()
isRecording = true
startButton.backgroundColor = UIColor.red
}
}
And 2. recordAndRecognizeSpeech, updated from the recognitionTask = ... line onwards:
recognitionTask = speechRecognizer?.recognitionTask(with: request, resultHandler: { (result, error) in
if result != nil { // check to see if result is empty (i.e. no speech found)
if let result = result {
let bestString = result.bestTranscription.formattedString
self.detectedTextLabel.text = bestString
var lastString: String = ""
for segment in result.bestTranscription.segments {
let indexTo = bestString.index(bestString.startIndex, offsetBy: segment.substringRange.location)
lastString = bestString.substring(from: indexTo)
}
self.checkForColoursSaid(resultString: lastString)
} else if let error = error {
self.sendAlert(message: "There has been a speech recognition error")
print(error)
}
}
})
I hope that helps.
This will prevent two errors: the Code=216 mentioned above, and the "SFSpeechAudioBufferRecognitionRequest cannot be re-used" error.
Stop the recognition with finish(), not cancel(), and then stop the audio, like this:
// stop recognition
recognitionTask?.finish()
recognitionTask = nil
// stop audio
request.endAudio()
audioEngine.stop()
audioEngine.inputNode.removeTap(onBus: 0) // Remove tap on bus when stopping recording.
P.S. audioEngine.inputNode no longer seems to be an optional value, so don't use the if let construct with it.
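Putting the two answers together, here is a minimal sketch (not from the original answers) of how the start/stop flow might look on a newer SDK where inputNode is non-optional. The Recorder class and the startRecording/stopRecording names are illustrative; it also creates a fresh SFSpeechAudioBufferRecognitionRequest for every recording, since a request instance cannot be reused across recognition tasks.

import AVFoundation
import Speech

class Recorder {
    let audioEngine = AVAudioEngine()
    let speechRecognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    var request: SFSpeechAudioBufferRecognitionRequest?
    var recognitionTask: SFSpeechRecognitionTask?

    func startRecording() throws {
        // New request per session; reusing the old one raises the "cannot be re-used" error.
        let request = SFSpeechAudioBufferRecognitionRequest()
        self.request = request

        let node = audioEngine.inputNode // non-optional on current SDKs
        let format = node.outputFormat(forBus: 0)
        node.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, _ in
            request.append(buffer)
        }

        audioEngine.prepare()
        try audioEngine.start()

        recognitionTask = speechRecognizer?.recognitionTask(with: request) { result, error in
            if let result = result {
                print(result.bestTranscription.formattedString)
            } else if let error = error {
                print(error)
            }
        }
    }

    func stopRecording() {
        // Finish (don't cancel) so the final result is delivered instead of a Code=216 error.
        recognitionTask?.finish()
        recognitionTask = nil

        // Mark the end of the audio, then tear down the engine and remove the tap.
        request?.endAudio()
        request = nil
        audioEngine.stop()
        audioEngine.inputNode.removeTap(onBus: 0)
    }
}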