Jon*_*ist · 3 · python, azure, speech-to-text
With the help of an example I found on GitHub, my code can currently read an audio file and transcribe it with Azure Speech to Text. However, I need the transcription to include timestamps for every word. According to the documentation, this feature was added in version 1.5.0 and is accessed through the method request_word_level_timestamps(). But even when I call it, I get the same response as before. I cannot figure out from the documentation how to use it. Does anyone know how it works?
I am using version 1.5.1 of the Python SDK.
import azure.cognitiveservices.speech as speechsdk
import time
from allennlp.predictors.predictor import Predictor
import json

inputPath = "(inputlocation)"
outputPath = "(outputlocation)"

# Creates an instance of a speech config with specified subscription key and service region.
# Replace with your own subscription key and service region (e.g., "westus").
speech_key, service_region = "apikey", "region"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
speech_config.request_word_level_timestamps()
speech_config.output_format = speechsdk.OutputFormat.Detailed
#print("VALUE: " + speech_config.get_property(property_id=speechsdk.PropertyId.SpeechServiceResponse_RequestWordLevelTimestamps))

filename = input("Enter filename: ")
print(speech_config)

try:
    audio_config = speechsdk.audio.AudioConfig(filename=inputPath + filename)

    # Creates a recognizer with the given settings
    speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

    def start():
        done = False
        #output = ""
        fileOpened = open(outputPath + filename[0:len(filename) - 4] + "_MS_recognized.txt", "w+")
        fileOpened.truncate(0)
        fileOpened.close()

        def stop_callback(evt):
            print("Closing on {}".format(evt))
            speech_recognizer.stop_continuous_recognition()
            nonlocal done
            done = True

        def add_to_res(evt):
            #nonlocal output
            #print("Recognized: {}".format(evt.result.text))
            #output = output + evt.result.text + "\n"
            fileOpened = open(outputPath + filename[0:len(filename) - 4] + "_MS_recognized.txt", "a")
            fileOpened.write(evt.result.text + "\n")
            fileOpened.close()
            #print(output)

        # Connect callbacks to the events fired by the speech recognizer
        speech_recognizer.recognizing.connect(lambda evt: print('RECOGNIZING: {}'.format(evt)))
        speech_recognizer.recognized.connect(lambda evt: print('RECOGNIZED: {}'.format(evt)))
        speech_recognizer.recognized.connect(add_to_res)
        speech_recognizer.session_started.connect(lambda evt: print('SESSION STARTED: {}'.format(evt)))
        speech_recognizer.session_stopped.connect(lambda evt: print('SESSION STOPPED {}'.format(evt)))
        speech_recognizer.canceled.connect(lambda evt: print('CANCELED {}'.format(evt)))
        # Stop continuous recognition on either session stopped or canceled events
        speech_recognizer.session_stopped.connect(stop_callback)
        speech_recognizer.canceled.connect(stop_callback)

        # Start continuous speech recognition
        speech_recognizer.start_continuous_recognition()
        while not done:
            time.sleep(.5)
    # </SpeechContinuousRecognitionWithFile>

    # Starts speech recognition, and returns after a single utterance is recognized. The end of a
    # single utterance is determined by listening for silence at the end or until a maximum of 15
    # seconds of audio is processed. The task returns the recognition text as result.
    # Note: Since recognize_once() returns only a single utterance, it is suitable only for single
    # shot recognition like command or query.
    # For long-running multi-utterance recognition, use start_continuous_recognition() instead.
    start()
except Exception as e:
    print("File does not exist")
    #print(e)
The result only contains a session_id and a result object that holds a result_id, the text, and the reason.
小智 7
Per the comments about how this would help with continuous recognition: if you set request_word_level_timestamps() on your SpeechConfig, you can run it as continuous recognition, and you can inspect the JSON result with evt.result.json.
For example,
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
speech_config.request_word_level_timestamps()
and then your speech recognizer:
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
When you connect a callback to the events fired by the speech_recognizer, you can see the word-level timestamps with:
speech_recognizer.recognized.connect(lambda evt: print('JSON: {}'.format(evt.result.json)))
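Putting the pieces above together, here is a minimal end-to-end sketch of continuous recognition that parses evt.result.json inside the recognized callback and prints the Words array of the highest-confidence NBest entry (the key, region, and yourfile.wav are placeholders):
import json
import time
import azure.cognitiveservices.speech as speechsdk

# Placeholders -- replace with your own key, region, and audio file.
speech_config = speechsdk.SpeechConfig(subscription="<your api key>", region="<your region>")
speech_config.request_word_level_timestamps()
audio_config = speechsdk.audio.AudioConfig(filename="yourfile.wav")
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)

done = False

def stop_callback(evt):
    # Stop on session stopped or canceled so the loop below can exit.
    global done
    speech_recognizer.stop_continuous_recognition()
    done = True

def print_word_timestamps(evt):
    # evt.result.json is the raw service response; with word-level timestamps
    # requested it should contain an NBest list whose entries carry a Words array.
    response = json.loads(evt.result.json)
    if 'NBest' in response:
        best = max(response['NBest'], key=lambda alternative: alternative['Confidence'])
        for word in best.get('Words', []):
            print(word['Word'], word['Offset'], word['Duration'])

speech_recognizer.recognized.connect(print_word_timestamps)
speech_recognizer.session_stopped.connect(stop_callback)
speech_recognizer.canceled.connect(stop_callback)

speech_recognizer.start_continuous_recognition()
while not done:
    time.sleep(.5)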
My remaining problem is that the Translation object does not contain the word level, as it does not accept a speech_config.
I referred to your code and followed the official tutorial Quickstart: Recognize speech with the Speech SDK for Python to write the sample code below, which prints the Offset and Duration value of every word. I used an audio file named whatstheweatherlike.wav, taken from samples/csharp/sharedcontent/console/whatstheweatherlike.wav in the GitHub repo Azure-Samples/cognitive-services-speech-sdk.
Here is my sample code and its result.
import azure.cognitiveservices.speech as speechsdk
speech_key, service_region = "<your api key>", "<your region>"
speech_config = speechsdk.SpeechConfig(subscription=speech_key, region=service_region)
speech_config.request_word_level_timestamps()
audio_config = speechsdk.audio.AudioConfig(filename='whatstheweatherlike.wav')
speech_recognizer = speechsdk.SpeechRecognizer(speech_config=speech_config, audio_config=audio_config)
result = speech_recognizer.recognize_once()
# print(result.json)
# If without `request_word_level_timestamps`, the result:
# {"DisplayText":"What's the weather like?","Duration":13400000,"Offset":400000,"RecognitionStatus":"Success"}
# Enable `request_word_level_timestamps`, the result includes word level timestamps.
# {"Duration":13400000,"NBest":[{"Confidence":0.9761951565742493,"Display":"What's the weather like?","ITN":"What's the weather like","Lexical":"what's the weather like","MaskedITN":"What's the weather like","Words":[{"Duration":3800000,"Offset":600000,"Word":"what's"},{"Duration":1200000,"Offset":4500000,"Word":"the"},{"Duration":2900000,"Offset":5800000,"Word":"weather"},{"Duration":4700000,"Offset":8800000,"Word":"like"}]},{"Confidence":0.9245584011077881,"Display":"what is the weather like","ITN":"what is the weather like","Lexical":"what is the weather like","MaskedITN":"what is the weather like","Words":[{"Duration":2900000,"Offset":600000,"Word":"what"},{"Duration":700000,"Offset":3600000,"Word":"is"},{"Duration":1300000,"Offset":4400000,"Word":"the"},{"Duration":2900000,"Offset":5800000,"Word":"weather"},{"Duration":4700000,"Offset":8800000,"Word":"like"}]}],"Offset":400000,"RecognitionStatus":"Success"}
import json
stt = json.loads(result.json)
confidences_in_nbest = [item['Confidence'] for item in stt['NBest']]
best_index = confidences_in_nbest.index(max(confidences_in_nbest))
words = stt['NBest'][best_index]['Words']
print(words)
print(f"Word\tOffset\tDuration")
for word in words:
    print(f"{word['Word']}\t{word['Offset']}\t{word['Duration']}")
The output of the script above is:
[{'Duration': 3800000, 'Offset': 600000, 'Word': "what's"}, {'Duration': 1200000, 'Offset': 4500000, 'Word': 'the'}, {'Duration': 2900000, 'Offset': 5800000, 'Word': 'weather'}, {'Duration': 4700000, 'Offset': 8800000, 'Word': 'like'}]
Word Offset Duration
what's 600000 3800000
the 4500000 1200000
weather 5800000 2900000
like 8800000 4700000
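Note that Offset and Duration are expressed in 100-nanosecond ticks, so dividing by 10,000,000 converts them to seconds. Continuing from the words list above, a small sketch of that conversion:
TICKS_PER_SECOND = 10_000_000  # Offset/Duration are 100-nanosecond units

for word in words:
    start_s = word['Offset'] / TICKS_PER_SECOND
    end_s = (word['Offset'] + word['Duration']) / TICKS_PER_SECOND
    print(f"{word['Word']}\t{start_s:.2f}s - {end_s:.2f}s")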
Hope it helps.