sam*_*sam
Tags: python, speech-to-text, google-speech-api, google-speech-to-text-api, hint-phrases
I am using Python 3 to transcribe an audio file with Google Speech-to-Text via the provided Python package (google-speech).

There is an option to define custom phrases for the transcription, as described in the documentation: https://cloud.google.com/speech-to-text/docs/speech-adaptation

For testing purposes I am using a small audio file containing the text:

[..] in this lecture we will talk about the Burrows Wheeler transform and the FM index [..]

I provide the following phrases to see the effect, for example when I want a specific name to be recognized with the correct spelling. In this example I want to change burrows to barrows:
config = speech.RecognitionConfig(dict(
    encoding=speech.RecognitionConfig.AudioEncoding.ENCODING_UNSPECIFIED,
    sample_rate_hertz=24000,
    language_code="en-US",
    enable_word_time_offsets=True,
    speech_contexts=[
        speech.SpeechContext(dict(
            phrases=["barrows", "barrows wheeler", "barrows wheeler transform"]
        ))
    ]
))
Unfortunately, this does not seem to have any effect, as the output is still the same as without the context phrases.

Am I using the phrases incorrectly, or is the engine so confident that the word it hears really is burrows that it ignores my phrases?

PS: I also tried using speech_v1p1beta1.AdaptationClient and speech_v1p1beta1.SpeechAdaptation instead of putting the phrases into the config, but that only gives me an internal server error without any additional information about what went wrong. https://cloud.google.com/speech-to-text/docs/adaptation
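(Editorial aside, not part of the original question: in the v1p1beta1 API a SpeechContext also accepts a per-context boost field, which the config above leaves unset. A minimal sketch of the same inline approach with an explicit boost:)

# Sketch: the same inline speech_contexts approach, with an explicit boost
# on the context (supported in the v1p1beta1 API; the documentation
# recommends boost values between 0 and 20).
from google.cloud import speech_v1p1beta1 as speech

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.ENCODING_UNSPECIFIED,
    sample_rate_hertz=24000,
    language_code="en-US",
    enable_word_time_offsets=True,
    speech_contexts=[
        speech.SpeechContext(
            phrases=["barrows", "barrows wheeler", "barrows wheeler transform"],
            boost=15.0,  # weight these phrases more heavily than the default
        )
    ],
)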
I created an audio file to re-create your scenario, and I was able to improve the recognition using model adaptation. To achieve this with that feature, I suggest taking a look at this sample and this article to better understand adaptation models.

The audio file I created contains the following text:

in this lecture we will talk about the Burrows Wheeler transform and the FM index

Now, to improve the recognition of your phrase, I did the following: I created a PhraseSet and a CustomClass that include the word you want to improve, in this case the word "barrows". You can also create/update/delete phrase sets and custom classes with the Speech-To-Text GUI. Below is the code I used for the improvement.
from google.cloud import speech_v1p1beta1 as speech
import argparse


def transcribe_with_model_adaptation(
    project_id="[PROJECT-ID]", location="global", speech_file=None,
    custom_class_id="[CUSTOM-CLASS-ID]", phrase_set_id="[PHRASE-SET-ID]"
):
    """
    Create `PhraseSet` and `CustomClasses` to create custom lists of similar
    items that are likely to occur in your input data.
    """
    import io

    # Create the adaptation client
    adaptation_client = speech.AdaptationClient()

    # The parent resource where the custom class and phrase set will be created.
    parent = f"projects/{project_id}/locations/{location}"

    # Create the custom class resource
    adaptation_client.create_custom_class(
        {
            "parent": parent,
            "custom_class_id": custom_class_id,
            "custom_class": {
                "items": [
                    {"value": "barrows"}
                ]
            },
        }
    )
    custom_class_name = (
        f"projects/{project_id}/locations/{location}/customClasses/{custom_class_id}"
    )

    # Create the phrase set resource
    phrase_set_response = adaptation_client.create_phrase_set(
        {
            "parent": parent,
            "phrase_set_id": phrase_set_id,
            "phrase_set": {
                "boost": 0,
                "phrases": [
                    {"value": f"${{{custom_class_name}}}", "boost": 10},
                    {"value": f"talk about the ${{{custom_class_name}}} wheeler transform", "boost": 15}
                ],
            },
        }
    )
    phrase_set_name = phrase_set_response.name
    # print(u"Phrase set name: {}".format(phrase_set_name))

    # The next section shows how to use the newly created custom
    # class and phrase set to send a transcription request with speech adaptation

    # Speech adaptation configuration
    speech_adaptation = speech.SpeechAdaptation(
        phrase_set_references=[phrase_set_name])

    # Speech configuration object
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.FLAC,
        sample_rate_hertz=24000,
        language_code="en-US",
        adaptation=speech_adaptation,
        enable_word_time_offsets=True,
        model="phone_call",
        use_enhanced=True
    )

    # The name of the audio file to transcribe
    # storage_uri URI for audio file in Cloud Storage, e.g. gs://[BUCKET]/[FILE]
    with io.open(speech_file, "rb") as audio_file:
        content = audio_file.read()

    audio = speech.RecognitionAudio(content=content)
    # audio = speech.RecognitionAudio(uri="gs://biasing-resources-test-audio/call_me_fionity_and_ionity.wav")

    # Create the speech client
    speech_client = speech.SpeechClient()

    response = speech_client.recognize(config=config, audio=audio)

    for result in response.results:
        # The first alternative is the most likely one for this portion.
        print(u"Transcript: {}".format(result.alternatives[0].transcript))
    # [END speech_transcribe_with_model_adaptation]


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description=__doc__, formatter_class=argparse.RawDescriptionHelpFormatter
    )
    parser.add_argument("path", help="Path for audio file to be recognized")
    args = parser.parse_args()
    transcribe_with_model_adaptation(speech_file=args.path)
Note that the code tries to create a new custom class and a new phrase set on every run, so it may throw an error with the message element already exists if you try to re-create them (a sketch for handling this follows the transcripts below).

Recognition without the adaptation:

(python_speech2text) user@penguin:~/replication/python_speech2text$ python speech_model_adaptation_beta.py audio.flac
Transcript: in this lecture will talk about the Burrows wheeler transform and the FM index

Recognition with the adaptation:

(python_speech2text) user@penguin:~/replication/python_speech2text$ python speech_model_adaptation_beta.py audio.flac
Transcript: in this lecture will talk about the barrows wheeler transform and the FM index
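One way to make the script tolerate re-runs is to catch that error and replace the stale resources. A minimal sketch, not part of the original answer, assuming google-api-core's AlreadyExists exception and the same client, parent, and IDs as in the script above:

# Sketch: re-create the custom class if it is left over from an earlier run.
# The same pattern applies to create_phrase_set/delete_phrase_set.
from google.api_core.exceptions import AlreadyExists

def ensure_custom_class(adaptation_client, parent, custom_class_id, values):
    """Create the custom class, replacing any version from a previous run."""
    request = {
        "parent": parent,
        "custom_class_id": custom_class_id,
        "custom_class": {"items": [{"value": v} for v in values]},
    }
    try:
        return adaptation_client.create_custom_class(request)
    except AlreadyExists:
        # Delete the stale resource, then create it again.
        name = f"{parent}/customClasses/{custom_class_id}"
        adaptation_client.delete_custom_class(name=name)
        return adaptation_client.create_custom_class(request)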
Finally, I want to add some notes about the improvement and the code I ran:

- I used a flac audio file, as it is recommended for best results.
- I used model="phone_call" and use_enhanced=True because that is the model Cloud Speech-to-Text identified for my own audio file. The enhanced models can also give better results; see the documentation for more details. Note that this configuration might differ for your audio file.
- Consider enabling data logging to let Google collect data from your audio transcription requests; Google then uses this data to improve its machine-learning models for speech recognition.
- Once you have created the custom class and the phrase set, you can use the Speech-to-Text UI to update them and run your tests quickly.
- I used the boost parameter in the phrase set. With boost you assign a weighted value to the phrase items in a PhraseSet resource; Speech-to-Text refers to this value when choosing a possible transcription for the words in your audio data. The higher the value, the more likely Speech-to-Text is to choose that word or phrase from the possible alternatives (see the sketch below this list).
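To make the weighting concrete, here is a minimal sketch (not from the original answer; it assumes the same adaptation_client, parent, and phrase_set_id as above). Boost can be set once for the whole PhraseSet as a default and overridden per phrase, and the documentation recommends values between 0 and 20:

# Sketch: set a default boost on the whole phrase set and override it for
# one phrase; the per-phrase value takes precedence over the set-level one.
adaptation_client.create_phrase_set(
    {
        "parent": parent,
        "phrase_set_id": phrase_set_id,
        "phrase_set": {
            "boost": 5,  # default weight applied to every phrase below
            "phrases": [
                {"value": "barrows"},                                # inherits boost 5
                {"value": "barrows wheeler transform", "boost": 15}  # overridden to 15
            ],
        },
    }
)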
I hope this information helps you improve your recognition results.