We have an iOS app that uses AVSpeechSynthesizer. It works on iPads and other devices, but we have noticed that it does not work on our iPhone 6 Plus.
When we check the console output, we see this error:
|AXSpeechAssetDownloader|error| ASAssetQuery error fetching results Error Domain=ASError Code=21 "The operation couldn't be completed. (ASError error 21 - Unable to copy asset information)" UserInfo=0x174a7e100 {NSDescription=Unable to copy asset information}
The device running the app does have a network connection.
Any ideas on how to start troubleshooting this?
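A minimal diagnostic sketch in Swift, assuming the failure is tied to a missing or undownloaded voice asset on the device: it lists the voices the device reports and checks whether an en-US voice can be created at all.

import AVFoundation

// List every voice the device claims to have installed.
for voice in AVSpeechSynthesisVoice.speechVoices() {
    print(voice.language)
}

// The initializer returns nil when no voice exists for the requested language.
if AVSpeechSynthesisVoice(language: "en-US") == nil {
    print("No en-US voice is available on this device")
}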
My AVSpeechSynthesizer code does not work on a device running iOS 10, but it works on iOS 9.x, and it currently works in the simulator.
let str = self.audioOutput //just some string here, this string exists, and it's in english
let synth = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: str)
utterance.rate = AVSpeechUtteranceDefaultSpeechRate
let lang = "en-US"
utterance.voice = AVSpeechSynthesisVoice(language: lang)
synth.speakUtterance(utterance)
I get this error:
MobileAssetError:1] Unable to copy asset attributes
Could not get attribute 'LocalURL': Error Domain=MobileAssetError Code=1 "Unable to copy asset attributes"
UserInfo={NSDescription=Unable to copy asset attributes}
0x1741495e0 Copy assets attributes reply: XPC_TYPE_DICTIONARY <dictionary: 0x1741495e0> { count = 1, transaction: 0, voucher = …

I wrote a small function for my app's text-to-speech feature. The code worked fine before iOS 16. Now I see the following console log:
2022-09-13 18:04:02.274692+0200 Blindzeln_Prototyp[47358:164517] [asset] Failed to get sandbox extensions
2022-09-13 18:04:02.314956+0200 Blindzeln_Prototyp[47358:164517] [catalog] Query for com.apple.MobileAsset.VoiceServicesVocalizerVoice failed: 2
2022-09-13 18:04:02.315688+0200 Blindzeln_Prototyp[47358:164517] [catalog] Unable to list voice folder
2022-09-13 18:04:02.333665+0200 Blindzeln_Prototyp[47358:164517] [catalog] Query for com.apple.MobileAsset.VoiceServices.GryphonVoice failed: 2
2022-09-13 18:04:02.334239+0200 Blindzeln_Prototyp[47358:164517] [catalog] Unable to list voice folder
2022-09-13 18:04:02.338622+0200 Blindzeln_Prototyp[47358:164517] [catalog] Unable to list voice folder
2022-09-13 18:04:02.355732+0200 Blindzeln_Prototyp[47358:164583] [AXTTSCommon] File did not exist at content path: (null) (null). Attempting to fallback to default voice for language: …

I am using AVSpeechSynthesizer in my app, and I would like to save the spoken text to an audio file or an AVAsset. I went through Apple's documentation and did not see anything for this, so I am asking here to be sure. Here is my current code.
AVSpeechUtterance * utterance = [AVSpeechUtterance speechUtteranceWithString:textView.text];
float rate = [speedSlider value]/1.5;
utterance.rate = rate;
[speechSynthesizer speakUtterance:utterance];
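For what it is worth, a hedged sketch of one way to capture the spoken text on iOS 13 and later: AVSpeechSynthesizer.write(_:toBufferCallback:) delivers the rendered audio buffers, which can be appended to a file with AVAudioFile. The output path and utterance text below are illustrative assumptions, not part of the original code.

import AVFoundation

let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello, world")
// Hypothetical output location for the captured speech.
let outputURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("speech.caf")
var outputFile: AVAudioFile?

synthesizer.write(utterance) { buffer in
    guard let pcmBuffer = buffer as? AVAudioPCMBuffer, pcmBuffer.frameLength > 0 else {
        return // an empty buffer marks the end of synthesis
    }
    do {
        // Create the file lazily so it uses the synthesizer's output format.
        if outputFile == nil {
            outputFile = try AVAudioFile(forWriting: outputURL, settings: pcmBuffer.format.settings)
        }
        try outputFile?.write(from: pcmBuffer)
    } catch {
        print("Failed to write buffer: \(error)")
    }
}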
Is it possible to use a different person's voice with the same accent?
For example, when I use AVSpeechSynthesisVoice(language: "en-US"), it automatically uses a female voice, but I want a male voice with the same accent.
Is it possible to change the voice within the same accent, or is the only way to try a voice with a different accent (for example, en-AU)?
(Note that I want to change the whole voice, not its attributes such as pitch, rate, and so on.)
Thanks.
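One hedged option, assuming iOS 13 or later is available: each AVSpeechSynthesisVoice exposes a gender property, so the installed en-US voices can be searched for a male one, falling back to the default voice for the language if none is installed.

import AVFoundation

// Look for a male voice in the requested language (gender requires iOS 13+).
func maleVoice(forLanguage language: String) -> AVSpeechSynthesisVoice? {
    return AVSpeechSynthesisVoice.speechVoices()
        .first { $0.language == language && $0.gender == .male }
}

let utterance = AVSpeechUtterance(string: "Hello")
// Fall back to the default en-US voice when no male voice is installed.
utterance.voice = maleVoice(forLanguage: "en-US") ?? AVSpeechSynthesisVoice(language: "en-US")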
What I am trying to do is play a spoken message when the app receives a remote notification in the background (or is possibly woken from the suspended state).
After the app is woken from suspended mode, the sound does not play at all.
When the app is in the foreground, the sound plays immediately after the didReceiveRemoteNotification: method is called.
What is the proper way to play the sound immediately when didReceiveRemoteNotification: is called as the app is woken from suspended mode?
Here is some code (the speech manager class):
-(void)textToSpeechWithMessage:(NSString*)message andLanguageCode:(NSString*)languageCode{
    AVAudioSession *audioSession = [AVAudioSession sharedInstance];
    NSError *error = nil;
    DLog(@"Activating audio session");
    if (![audioSession setCategory:AVAudioSessionCategoryPlayAndRecord
                        withOptions:AVAudioSessionCategoryOptionDefaultToSpeaker | AVAudioSessionCategoryOptionMixWithOthers
                              error:&error]) {
        DLog(@"Unable to set audio session category: %@", error);
    }
    BOOL result = [audioSession setActive:YES error:&error];
    if (!result) {
        DLog(@"Error activating audio session: %@", error);
    } else {
        AVSpeechUtterance *utterance = [AVSpeechUtterance speechUtteranceWithString:message];
        [utterance setRate:0.5f];
        [utterance setVolume:0.8f];
        utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:languageCode];
        [self.synthesizer speakUtterance:utterance];
    }
}
-(void)textToSpeechWithMessage:(NSString*)message{
    [self textToSpeechWithMessage:message andLanguageCode:[[NSLocale preferredLanguages] objectAtIndex:0]];
}
Later, in …
objective-c ios avspeechsynthesizer background-mode avspeechutterance
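Not from the original post, but a small Swift sketch of one way to structure this: keep a strong reference to the synthesizer, and use an AVSpeechSynthesizerDelegate callback to deactivate the audio session only after the utterance has finished. The class name and session options are assumptions.

import AVFoundation

// Sketch: a speech manager that keeps the synthesizer alive and releases
// the audio session once speaking is done (names are hypothetical).
final class SpeechManager: NSObject, AVSpeechSynthesizerDelegate {
    private let synthesizer = AVSpeechSynthesizer()

    override init() {
        super.init()
        synthesizer.delegate = self
    }

    func speak(_ message: String, languageCode: String) {
        let session = AVAudioSession.sharedInstance()
        do {
            try session.setCategory(.playback, options: [.mixWithOthers])
            try session.setActive(true)
        } catch {
            print("Audio session error: \(error)")
        }
        let utterance = AVSpeechUtterance(string: message)
        utterance.voice = AVSpeechSynthesisVoice(language: languageCode)
        synthesizer.speak(utterance)
    }

    func speechSynthesizer(_ synthesizer: AVSpeechSynthesizer,
                           didFinish utterance: AVSpeechUtterance) {
        // Give the audio session back once we are done speaking.
        try? AVAudioSession.sharedInstance().setActive(false, options: .notifyOthersOnDeactivation)
    }
}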
Solution: there is no US male voice.
I have been using the AVSpeechSynthesizer framework from iOS 7.0.
AVSpeechUtterance *utt = [AVSpeechUtterance speechUtteranceWithString:@"Hello"];
if (isMale) // flag for male or female voice selected
{
    // need a US male voice, as en-US provides only a US female voice
    utt.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-GB"]; // UK male voice
}
else
{
    utt.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-US"]; // US female voice
}
I need a US male voice rather than the UK male voice.
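A hedged Swift alternative, assuming iOS 9 or later: voices can be requested by identifier, and the built-in Alex voice (AVSpeechSynthesisVoiceIdentifierAlex) is an en-US male voice. The initializer only succeeds when the voice has actually been downloaded to the device, so a fallback is still needed.

import AVFoundation

let utterance = AVSpeechUtterance(string: "Hello")

// Alex is a US English male voice (iOS 9+); the initializer returns nil
// if the voice has not been downloaded, so fall back to the default en-US voice.
if let alex = AVSpeechSynthesisVoice(identifier: AVSpeechSynthesisVoiceIdentifierAlex) {
    utterance.voice = alex
} else {
    utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
}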
I want to use the new speech synthesis API in iOS 7. My app is localized into French, English, German, Japanese, and so on. I want to set the language code used to read the text. How do I get the language code?
utterance.voice = [AVSpeechSynthesisVoice voiceWithLanguage:@"en-ZA"];
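A short Swift sketch of one way to follow the user's own locale instead of hard-coding a region: AVSpeechSynthesisVoice.currentLanguageCode() returns the language code for the user's current language settings. The utterance text here is only a placeholder.

import AVFoundation

// Pick the voice from the user's current language settings (iOS 7+)
// rather than hard-coding a code such as "en-ZA".
let utterance = AVSpeechUtterance(string: "Hello") // placeholder text
utterance.voice = AVSpeechSynthesisVoice(language: AVSpeechSynthesisVoice.currentLanguageCode())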
I have code that uses speech synthesis in iOS 7, and everything works well. To set the rate of my AVSpeechUtterance, I used the following formula:
float speakSpeedRate = (AVSpeechUtteranceMinimumSpeechRate + AVSpeechUtteranceDefaultSpeechRate)*0.5;
But it seems that, at least on my iPhone 5S running iOS 8, AVSpeechUtteranceDefaultSpeechRate is much faster than it is on iOS 7.
Has anyone else experienced this?
Edit: I went through the Apple developer forums, and it seems other people have run into this bug, but it may depend on other parameters, such as the language...
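Not a confirmed fix, just a sketch of the kind of version-dependent workaround this suggests: keep the same formula, but scale the result down when running on iOS 8 or later, where the default rate reportedly sounds faster. The 0.75 factor is an arbitrary assumption.

import AVFoundation

// Same formula as before, expressed in Swift.
var speakSpeedRate = (AVSpeechUtteranceMinimumSpeechRate + AVSpeechUtteranceDefaultSpeechRate) * 0.5

if #available(iOS 8.0, *) {
    // On iOS 8 the default rate reportedly sounds much faster than on iOS 7,
    // so scale the computed rate down; 0.75 is only an illustrative guess.
    speakSpeedRate *= 0.75
}

let utterance = AVSpeechUtterance(string: "Hello")
utterance.rate = speakSpeedRate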
Is there a way to get Siri's voices with NSSpeechSynthesizer? NSSpeechSynthesizer.availableVoices() does not list them, but maybe there is an undocumented trick or something?
I have also tried using AVSpeechSynthesizer, which should be available on macOS 10.14+, but I could not get it to speak out loud…
I tested this in a Playground with the following code from NSHipster:
import Cocoa
import AVFoundation
let string = "Hello, World!"
let utterance = AVSpeechUtterance(string: string)
let synthesizer = AVSpeechSynthesizer()
synthesizer.speak(utterance)
macos avfoundation avspeechsynthesizer swift nsspeechsynthesizer
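One possible explanation to rule out (an assumption, not something confirmed in the post): a Playground page finishes executing as soon as the last line runs, so the synthesizer can be torn down before it ever produces audio. Keeping the page alive with PlaygroundSupport and holding on to the synthesizer changes that.

import AVFoundation
import PlaygroundSupport

// Keep the Playground running so the asynchronous speech has time to play,
// and keep the synthesizer referenced so it is not deallocated early.
PlaygroundPage.current.needsIndefiniteExecution = true

let synthesizer = AVSpeechSynthesizer()
let utterance = AVSpeechUtterance(string: "Hello, World!")
synthesizer.speak(utterance)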