noseratio · javascript · speech-recognition · node.js · async-await · google-cloud-speech
I'd like to be able to end a Google speech-to-text stream (created with streamingRecognize) and get back the pending SR (speech recognition) results.

In a nutshell, the relevant Node.js code:
```javascript
// create SR stream
const stream = speechClient.streamingRecognize(request);

// observe data event
const dataPromise = new Promise(resolve => stream.on('data', resolve));

// observe error event
const errorPromise = new Promise((resolve, reject) => stream.on('error', reject));

// observe finish event
const finishPromise = new Promise(resolve => stream.on('finish', resolve));

// send the audio
stream.write(audioChunk);

// for testing purposes only, give the SR stream 2 seconds to absorb the audio
await new Promise(resolve => setTimeout(resolve, 2000));

// end the SR stream gracefully, by observing the completion callback
const endPromise = util.promisify(callback => stream.end(callback))();

// a 5-second test timeout
const timeoutPromise = new Promise(resolve => setTimeout(resolve, 5000));

// finishPromise wins the race here
await Promise.race([
  dataPromise, errorPromise, finishPromise, endPromise, timeoutPromise]);

// endPromise wins the race here
await Promise.race([
  dataPromise, errorPromise, endPromise, timeoutPromise]);

// timeoutPromise wins the race here
await Promise.race([dataPromise, errorPromise, timeoutPromise]);

// I don't see any data or error events,
// dataPromise and errorPromise don't get settled
```
My experience is that the SR stream ends successfully, but I don't get any data or error events. Neither dataPromise nor errorPromise gets resolved or rejected.

How can I signal the end of audio, close the SR stream, and still get the pending SR results?

I need to stick with the streamingRecognize API because the audio I'm streaming is real-time, even though it may stop abruptly.

To clarify, it works as long as I keep streaming the audio, and I do receive real-time SR results. However, when I send the final audio chunk and end the stream as above, I don't get the final results I'd otherwise expect.

To get the final results, I actually have to keep streaming silence for several more seconds, which may inflate the speech-to-text bill. I feel there must be a better way to get them.
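For reference, the silence padding mentioned above can be generated directly, since LINEAR16 silence is just zero-filled PCM. A minimal sketch (the `makeSilence` helper is mine, not part of the Speech API):

```javascript
// Generate a buffer of LINEAR16 (16-bit little-endian PCM) silence.
// Byte length = 2 bytes per sample * sampleRate * duration in seconds.
function makeSilence(durationMs, sampleRateHertz = 16000) {
  const samples = Math.round(sampleRateHertz * durationMs / 1000);
  return Buffer.alloc(samples * 2); // zero-filled buffer = digital silence
}

// Example: pad the SR stream with ~2 seconds of silence, in 100 ms chunks,
// after the last real audio chunk (stream is the streamingRecognize stream):
// for (let i = 0; i < 20; i++) {
//   stream.write(makeSilence(100));
//   await new Promise(resolve => setTimeout(resolve, 100));
// }
```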
Update: It appears that the only proper time to end a streamingRecognize stream is upon a data event where StreamingRecognitionResult.is_final is true. Likewise, it appears we're expected to keep streaming audio until that data event fires, in order to get any results at all, final or interim.
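That observation can be wrapped into a small helper that resolves once a final result arrives, so the stream isn't ended prematurely. A sketch under that assumption (the `waitForFinalResult` name is mine; it only relies on the data/error event shape of streamingRecognize):

```javascript
// Resolve once the stream emits a data event whose first result has
// isFinal === true; reject on error. Listeners are removed either way.
function waitForFinalResult(stream) {
  return new Promise((resolve, reject) => {
    const onData = data => {
      const result = data.results && data.results[0];
      if (result && result.isFinal) {
        cleanup();
        resolve(result);
      }
    };
    const onError = error => { cleanup(); reject(error); };
    const cleanup = () => {
      stream.removeListener('data', onData);
      stream.removeListener('error', onError);
    };
    stream.on('data', onData);
    stream.on('error', onError);
  });
}

// Usage sketch: keep writing audio until the final result arrives, then end:
// const finalResult = await waitForFinalResult(stream);
// stream.end();
```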
This looks like a bug to me; filed an issue.

Update: It now appears to have been confirmed as a bug. Until it's fixed, I'm looking for a potential workaround.

Update: For future reference, here is the list of currently and previously tracked issues involving streamingRecognize.

I'd expect this to be a common problem for those who use streamingRecognize, and I'm surprised it hasn't been reported before. Also submitted it as a bug to issuetracker.google.com.
My bad: unsurprisingly, this turned out to be an obscure race condition in my own code.

I've put together a self-contained example that works as expected (gist). It helped me track down the problem. Hopefully it may help others and my future self:
```javascript
// A simple streamingRecognize workflow,
// tested with Node v15.0.1, by @noseratio

import fs from 'fs';
import path from "path";
import url from 'url';
import util from "util";
import timers from 'timers/promises';
import speech from '@google-cloud/speech';

export {}

// need a 16-bit, 16KHz raw PCM audio
const filename = path.join(path.dirname(url.fileURLToPath(import.meta.url)), "sample.raw");
const encoding = 'LINEAR16';
const sampleRateHertz = 16000;
const languageCode = 'en-US';

const request = {
  config: {
    encoding: encoding,
    sampleRateHertz: sampleRateHertz,
    languageCode: languageCode,
  },
  interimResults: false // If you want interim results, set this to true
};

// init SpeechClient
const client = new speech.v1p1beta1.SpeechClient();
await client.initialize();

// Stream the audio to the Google Cloud Speech API
const stream = client.streamingRecognize(request);

// log all data
stream.on('data', data => {
  const result = data.results[0];
  console.log(`SR results, final: ${result.isFinal}, text: ${result.alternatives[0].transcript}`);
});

// log all errors
stream.on('error', error => {
  console.warn(`SR error: ${error.message}`);
});

// observe data event
const dataPromise = new Promise(resolve => stream.once('data', resolve));

// observe error event
const errorPromise = new Promise((resolve, reject) => stream.once('error', reject));

// observe finish event
const finishPromise = new Promise(resolve => stream.once('finish', resolve));

// observe close event
const closePromise = new Promise(resolve => stream.once('close', resolve));

// we could just pipe it:
// fs.createReadStream(filename).pipe(stream);
// but we want to simulate the web socket data

// read RAW audio as Buffer
const data = await fs.promises.readFile(filename, null);

// simulate multiple audio chunks
console.log("Writing...");
const chunkSize = 4096;
for (let i = 0; i < data.length; i += chunkSize) {
  stream.write(data.slice(i, i + chunkSize));
  await timers.setTimeout(50);
}
console.log("Done writing.");

console.log("Before ending...");
await util.promisify(c => stream.end(c))();
console.log("After ending.");

// race for events
await Promise.race([
  errorPromise.catch(() => console.log("error")),
  dataPromise.then(() => console.log("data")),
  closePromise.then(() => console.log("close")),
  finishPromise.then(() => console.log("finish"))
]);

console.log("Destroying...");
stream.destroy();
console.log("Final timeout...");
await timers.setTimeout(1000);
console.log("Exiting.");
```

Output:
```
Writing...
Done writing.
Before ending...
SR results, final: true, text: this is a test I'm testing speech recognition and this is the end
After ending.
data
finish
Destroying...
Final timeout...
close
Exiting.
```
To test it, you need a 16-bit/16KHz raw PCM audio file. An arbitrary WAV file won't work as-is, because it contains a header with metadata.
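If all you have is a PCM WAV file, the raw samples can be extracted by skipping the RIFF container around them. A minimal sketch, assuming a standard PCM WAV layout (the `wavToRawPcm` helper is mine):

```javascript
// Extract raw PCM from a PCM WAV buffer by walking the RIFF chunks
// until the "data" chunk is found, then returning its payload.
function wavToRawPcm(buffer) {
  if (buffer.toString('ascii', 0, 4) !== 'RIFF' ||
      buffer.toString('ascii', 8, 12) !== 'WAVE') {
    throw new Error('Not a WAV file');
  }
  let offset = 12; // first chunk starts after the RIFF/WAVE preamble
  while (offset + 8 <= buffer.length) {
    const chunkId = buffer.toString('ascii', offset, offset + 4);
    const chunkSize = buffer.readUInt32LE(offset + 4);
    if (chunkId === 'data') {
      return buffer.slice(offset + 8, offset + 8 + chunkSize);
    }
    offset += 8 + chunkSize; // skip this chunk (e.g. "fmt ")
  }
  throw new Error('No data chunk found');
}
```

Note this only strips the container; if the WAV isn't already 16-bit/16KHz mono, the samples would still need resampling before feeding them to the LINEAR16 config above.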