Tags: objective-c, ios, avspeechsynthesizer, background-mode, avspeechutterance
What I am trying to do is play a message when the app receives a remote notification while in the background (or is possibly woken from a suspended state).
After the app wakes up from suspension, the sound does not play at all.
When the app is in the foreground, the sound plays immediately after didReceiveRemoteNotification: is called.
What is the proper way to play the sound immediately when didReceiveRemoteNotification: is called as the app wakes from suspended mode?
Here is some code (the speech manager class):
-(void)textToSpeechWithMessage:(NSString *)message andLanguageCode:(NSString *)languageCode {
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *error = nil;
    DLog(@"Activating audio session");
    if (![audioSession setCategory:AVAudioSessionCategoryPlayAndRecord
                       withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker | AVAudioSessionCategoryOptionMixWithOthers
                             error:&error]) {
        DLog(@"Unable to set audio session category: %@", error);
    }
    BOOL result = [audioSession setActive:YES error:&error];
    if (!result) {
        DLog(@"Error activating audio session: %@", error);
    } else {
        AVSpeechUtterance *utterance = [AVSpeechUtterance speechUtteranceWithString:message];
        [utterance setRate:0.5f];
        [utterance setVolume:0.8f];
        utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:languageCode];
        [self.synthesizer speakUtterance:utterance];
    }
}

-(void)textToSpeechWithMessage:(NSString *)message {
    [self textToSpeechWithMessage:message andLanguageCode:[[NSLocale preferredLanguages] objectAtIndex:0]];
}
Later, in the AppDelegate:
[[MCSpeechManager sharedInstance] textToSpeechWithMessage:messageText];
I have enabled the "Audio, AirPlay and Picture in Picture" option in the Capabilities -> Background Modes section.
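For reference, ticking that checkbox simply adds a UIBackgroundModes entry to the app's Info.plist; the relevant fragment looks like this:

```xml
<!-- Info.plist: declares that the app plays audible content in the background -->
<key>UIBackgroundModes</key>
<array>
    <string>audio</string>
</array>
```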
Edit:
Maybe I should start a background task and run an expiration handler if needed? I guess that could work, but I would also like to hear about the usual way of solving this scenario.
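A minimal sketch of that background-task idea (Swift 3; the `speakInBackground` name and its placement inside the AppDelegate are my own illustration, not from the question): request a background task before speaking, and end it from the expiration handler so iOS does not suspend the app mid-utterance.

```swift
// Hypothetical sketch, assuming this lives in the AppDelegate class.
var speakTask = UIBackgroundTaskInvalid

func speakInBackground(_ message: String) {
    // Ask iOS for extra execution time before starting playback.
    speakTask = UIApplication.shared.beginBackgroundTask(expirationHandler: {
        // Time ran out: hand the task back before iOS suspends us.
        UIApplication.shared.endBackgroundTask(self.speakTask)
        self.speakTask = UIBackgroundTaskInvalid
    })
    textToSpeechWithMessage(message: message, "en-US")
    // In a real app, end the task from the AVSpeechSynthesizerDelegate
    // didFinish callback rather than leaving it open.
}
```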
With this code, when a notification arrives while the app is in the background, I get the following error:
Error activating audio session: Error Domain=NSOSStatusErrorDomain Code=561015905 "(null)"
Code 561015905 corresponds to:
AVAudioSessionErrorCodeCannotStartPlaying = '!pla', /* 0x21706C61, 561015905 */
which is described as:
This error type can occur if the app's Information property list does not permit audio use, or if the app is in the background and is using a category which does not allow background audio.
However, I get the same error with the other categories as well (AVAudioSessionCategoryAmbient and AVAudioSessionCategorySoloAmbient).
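As a sanity check on that mapping: the OSStatus is just the four-character code '!pla' packed big-endian into a 32-bit integer, which you can verify in a few lines of Swift:

```swift
// Pack the FourCC '!pla' into a 32-bit value, high byte first, as OSStatus does.
let code = "!pla".unicodeScalars.reduce(0) { ($0 << 8) | Int($1.value) }
print(code)                          // 561015905
print(String(format: "0x%X", code))  // 0x21706C61
```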
Since I cannot reproduce the error you describe, let me offer a few pointers, and some code.
didReceiveRemoteNotification must run in response to a user action on said notification, such as tapping the notification message. If all of the statements above hold, the rest of this answer focuses on what happens once the notification arrives.
In step 4, textToSpeechWithMessage works as expected:
func application(_ application: UIApplication,
                 didReceiveRemoteNotification userInfo: [AnyHashable : Any],
                 fetchCompletionHandler completionHandler: @escaping (UIBackgroundFetchResult) -> Void) {
    textToSpeechWithMessage(message: "Speak up", "en-US")
}
For simplicity, I am using OneSignal to wire up the notifications:
import OneSignal
...
_ = OneSignal.init(launchOptions: launchOptions,
                   appId: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx")
// or
_ = OneSignal.init(launchOptions: launchOptions,
                   appId: "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx") {
    (s: String?, t: [AnyHashable : Any]?, u: Bool) in
    self.textToSpeechWithMessage(message: "OneSignal", "en-US")
}
textToSpeechWithMessage is mostly untouched, in Swift 3 for good measure:
import AVFoundation
...
let synthesizer = AVSpeechSynthesizer()

func textToSpeechWithMessage(message: String, _ languageCode: String) {
    let audioSession = AVAudioSession.sharedInstance()
    print("Activating audio session")
    do {
        try audioSession.setCategory(AVAudioSessionCategoryPlayAndRecord,
                                     with: [AVAudioSessionCategoryOptions.defaultToSpeaker,
                                            AVAudioSessionCategoryOptions.mixWithOthers])
        try audioSession.setActive(true)
        let utterance = AVSpeechUtterance(string: message)
        utterance.rate = 0.5
        utterance.volume = 0.8
        utterance.voice = AVSpeechSynthesisVoice(language: languageCode)
        self.synthesizer.speak(utterance)
    } catch {
        print("Unable to set audio session category: \(error)")
    }
}
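One loose end in the code above: the session is activated but never deactivated. A common pattern (a sketch only; `SpeechManager` is an illustrative class name, not from the answer) is to release the session from the synthesizer's delegate once speech finishes, so that audio interrupted in other apps can resume:

```swift
// Sketch: deactivate the shared audio session when the utterance is done.
// Requires `synthesizer.delegate = self` to be set when the manager is created.
extension SpeechManager: AVSpeechSynthesizerDelegate {
    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        // notifyOthersOnDeactivation lets interrupted apps resume playback.
        try? AVAudioSession.sharedInstance().setActive(false,
                                                       with: .notifyOthersOnDeactivation)
    }
}
```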
Views: 1161