Tags: iphone, avfoundation, ios, avaudiosession, avspeechsynthesizer
When using text-to-speech, I want the background audio to dim (or "duck"), the utterance to be spoken, and then the background audio to un-duck. It mostly works, but when I try to un-duck, the audio stays ducked even though deactivating the session throws no error.
The method that speaks:
// Create speech utterance
AVSpeechUtterance *speechUtterance = [[AVSpeechUtterance alloc] initWithString:textToSpeak];
speechUtterance.rate = instance.speechRate;
speechUtterance.pitchMultiplier = instance.speechPitch;
speechUtterance.volume = instance.speechVolume;
speechUtterance.postUtteranceDelay = 0.005;

AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:instance.voiceLanguageCode];
speechUtterance.voice = voice;

// Cut off any in-progress speech before starting the new utterance
if (instance.speechSynthesizer.isSpeaking) {
    [instance.speechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
}

// Activating the session is what ducks the background audio
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError *activationError = nil;
[audioSession setActive:YES error:&activationError];
if (activationError) {
    NSLog(@"Error activating: %@", activationError);
}

[instance.speechSynthesizer speakUtterance:speechUtterance];
Then, after the speechUtterance finishes speaking, I deactivate the session:
- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer didFinishSpeechUtterance:(AVSpeechUtterance *)utterance
{
    dispatch_queue_t myQueue = dispatch_queue_create("com.company.appname", nil);
    dispatch_async(myQueue, ^{
        NSError *error = nil;
        if (![[AVAudioSession sharedInstance] setActive:NO withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error]) {
            NSLog(@"Error deactivating: %@", error);
        }
    });
}
The app's audio category is set in the App Delegate:
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions
{
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *setCategoryError = nil;
    // Playback category with ducking, so other audio dims while speech plays
    [audioSession setCategory:AVAudioSessionCategoryPlayback
                  withOptions:AVAudioSessionCategoryOptionDuckOthers
                        error:&setCategoryError];
    return YES;
}
The ducking does release when I deactivate the AVAudioSession after a delay:
dispatch_time_t popTime = dispatch_time(DISPATCH_TIME_NOW, 0.2 * NSEC_PER_SEC);
dispatch_after(popTime, dispatch_queue_create("com.company.appname", nil), ^(void){
    NSError *error = nil;
    if (![[AVAudioSession sharedInstance] setActive:NO withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error]) {
        NSLog(@"Error deactivating: %@", error);
    }
});
However, the delay is noticeable, and I get an error in the console:
[avas] AVAudioSession.mm:1074:-[AVAudioSession setActive:withOptions:error:]: Deactivating an audio session that has running I/O. All I/O should be stopped or paused prior to deactivating the audio session.
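The warning points at what the system expects: speech I/O has to be stopped or paused before the session is deactivated. A minimal sketch of that idea (my own addition, not part of the original post; the helper name tearDownAudioSession is hypothetical):

- (void)tearDownAudioSession {
    // Stop speech I/O first, as the [avas] warning demands
    if (self.speechSynthesizer.isSpeaking) {
        [self.speechSynthesizer stopSpeakingAtBoundary:AVSpeechBoundaryImmediate];
    }
    NSError *error = nil;
    if (![[AVAudioSession sharedInstance] setActive:NO
              withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                    error:&error]) {
        NSLog(@"Error deactivating: %@", error);
    }
}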
How do I properly combine AVSpeechSynthesizer with background audio?
EDIT: Apparently the problem stems from using postUtteranceDelay on the AVSpeechUtterance, which keeps the music dimmed. Removing that property fixes the issue. However, I need postUtteranceDelay for some utterances, so I've updated the title. A possible workaround is sketched below.
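One workaround sketch (mine, under the assumption that the delay only matters between back-to-back utterances; not verified against the iOS 10 behavior): leave postUtteranceDelay at 0 and recreate the gap manually in the delegate before speaking the next queued utterance. The pendingUtterance property is hypothetical:

- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer
 didFinishSpeechUtterance:(AVSpeechUtterance *)utterance
{
    if (self.pendingUtterance) {
        // Recreate the 5 ms gap that postUtteranceDelay provided, without
        // leaving the synthesizer in a state that keeps the session ducked
        dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(0.005 * NSEC_PER_SEC)),
                       dispatch_get_main_queue(), ^{
            [synthesizer speakUtterance:self.pendingUtterance];
            self.pendingUtterance = nil;
        });
        return;
    }
    NSError *error = nil;
    if (![[AVAudioSession sharedInstance] setActive:NO
              withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation
                    error:&error]) {
        NSLog(@"Error deactivating: %@", error);
    }
}

This keeps the session free to deactivate as soon as the real speech I/O ends, since no utterance carries a trailing delay.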
Using your code, ducking worked (both engaging and releasing) without any problems/errors while listening to Spotify. I used an iPhone 6S on iOS 9.1, so this may be an iOS 10 issue.
I suggest removing the dispatch block entirely, since it shouldn't be necessary. That may fix the issue for you.
Below is the working code sample. All I did was create a new project ("Single View Application") and change my AppDelegate.m to look like this:
#import "AppDelegate.h"
@import AVFoundation;
@interface AppDelegate () <AVSpeechSynthesizerDelegate>
@property (nonatomic, strong) AVSpeechSynthesizer *speechSynthesizer;
@end
@implementation AppDelegate
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError *setCategoryError = nil;
[audioSession setCategory:AVAudioSessionCategoryPlayback withOptions:AVAudioSessionCategoryOptionDuckOthers error:&setCategoryError];
if (setCategoryError) {
NSLog(@"error setting up: %@", setCategoryError);
}
self.speechSynthesizer = [[AVSpeechSynthesizer alloc] init];
self.speechSynthesizer.delegate = self;
AVSpeechUtterance *speechUtterance = [[AVSpeechUtterance alloc] initWithString:@"Hi there, how are you doing today?"];
AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];
speechUtterance.voice = voice;
NSError *activationError = nil;
[audioSession setActive:YES error:&activationError];
if (activationError) {
NSLog(@"Error activating: %@", activationError);
}
[self.speechSynthesizer speakUtterance:speechUtterance];
return YES;
}
- (void)speechSynthesizer:(AVSpeechSynthesizer *)synthesizer didFinishSpeechUtterance:(AVSpeechUtterance *)utterance {
NSError *error = nil;
if (![[AVAudioSession sharedInstance] setActive:NO withOptions:AVAudioSessionSetActiveOptionNotifyOthersOnDeactivation error:&error]) {
NSLog(@"Error deactivating: %@", error);
}
}
@end
When running this on a physical device, the only console output is:
2016-12-21 09:42:08.484 DimOtherAudio[19017:3751445] Building MacinTalk voice for asset: (null)
UPDATE
Setting the postUtteranceDelay property reproduced the same problem for me.
The documentation for postUtteranceDelay states:
The amount of time a speech synthesizer will wait after the utterance is spoken before handling the next queued utterance.

When two or more utterances are spoken by an instance of AVSpeechSynthesizer, the time between periods when either is audible will be at least the sum of the first utterance's postUtteranceDelay and the second utterance's preUtteranceDelay.
The documentation makes clear that the value is only meant to apply when another utterance is queued (for example, with postUtteranceDelay = 0.005 on the first utterance and the default preUtteranceDelay of 0 on the second, the audible gap is at least 5 ms). I confirmed that queuing a second utterance without postUtteranceDelay set un-ducks the audio:
AVAudioSession *audioSession = [AVAudioSession sharedInstance];
NSError *setCategoryError = nil;
[audioSession setCategory:AVAudioSessionCategoryPlayback withOptions:AVAudioSessionCategoryOptionDuckOthers error:&setCategoryError];
if (setCategoryError) {
    NSLog(@"error setting up: %@", setCategoryError);
}

self.speechSynthesizer = [[AVSpeechSynthesizer alloc] init];
self.speechSynthesizer.delegate = self;

AVSpeechUtterance *speechUtterance = [[AVSpeechUtterance alloc] initWithString:@"Hi there, how are you doing today?"];
speechUtterance.postUtteranceDelay = 0.005;
AVSpeechSynthesisVoice *voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"];
speechUtterance.voice = voice;

NSError *activationError = nil;
[audioSession setActive:YES error:&activationError];
if (activationError) {
    NSLog(@"Error activating: %@", activationError);
}

[self.speechSynthesizer speakUtterance:speechUtterance];

// second utterance without postUtteranceDelay
AVSpeechUtterance *speechUtterance2 = [[AVSpeechUtterance alloc] initWithString:@"Duck. Duck. Goose."];
[self.speechSynthesizer speakUtterance:speechUtterance2];