I am trying to get final speech transcription/recognition results from a Fleck websocket audio stream. The OnOpen method runs when the websocket connection is first established, and the OnBinary method runs whenever binary data is received from the client. I have tested the websocket by echoing the voice back, writing the same binary data to the websocket at the same rate it arrives. That test works, so I know the binary data is being sent correctly (640-byte messages with a 20 ms frame size).
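For reference, the echo test described above can be sketched with a minimal Fleck server like this (the listen address is an assumption for illustration):

```csharp
using Fleck;

class EchoTest
{
    static void Main()
    {
        // Assumed address for illustration only
        var server = new WebSocketServer("ws://0.0.0.0:8080");
        server.Start(socket =>
        {
            // Echo every 640-byte binary frame straight back to the client
            socket.OnBinary = data => socket.Send(data);
        });
        System.Console.ReadLine(); // keep the server alive
    }
}
```

If the client plays back its own audio cleanly, the framing and transport are working and any failure is downstream of the websocket.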
So the failure is in my code, not the service. My goal is to do the following:

1. Send the initial audio config request to the API with SingleUtterance == true
2. Stream the audio data to the API
3. Receive streaming results and wait for a result with isFinal == true
4. When isFinal == true, stop the current streaming request and create a new one — repeating steps 1 through 4

The context of this project is transcribing every individual utterance in a real-time phone call.
socket.OnOpen = () =>
{
    firstMessage = true;
};
socket.OnBinary = async binary =>
{
    var speech = SpeechClient.Create();
    var streamingCall = speech.StreamingRecognize();
    if (firstMessage == true)
    {
        await streamingCall.WriteAsync(
            new StreamingRecognizeRequest()
            {
                StreamingConfig = new StreamingRecognitionConfig()
                {
                    Config = new RecognitionConfig()
                    {
                        Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
                        SampleRateHertz = 16000,
                        LanguageCode = "en",
                    },
                    SingleUtterance = true,
                }
            });
        Task getUtterance = Task.Run(async () =>
        {
            while (await streamingCall.ResponseStream.MoveNext(
                default(CancellationToken)))
            {
                foreach (var result in streamingCall.ResponseStream.Current.Results)
                {
                    if (result.IsFinal == true)
                    {
                        Console.WriteLine("This test finally worked");
                    }
                }
            }
        });
        firstMessage = false;
    }
    else if (firstMessage == false)
    {
        streamingCall.WriteAsync(new StreamingRecognizeRequest()
        {
            AudioContent = Google.Protobuf.ByteString.CopyFrom(binary, 0, 640)
        }).Wait();
    }
};
.Wait() is a blocking call being mixed into async/await code. The two do not mix well and can lead to deadlocks. Just keep the code asynchronous all the way through:
//...omitted for brevity
else if (firstMessage == false) {
await streamingCall.WriteAsync(new StreamingRecognizeRequest() {
AudioContent = Google.Protobuf.ByteString.CopyFrom(binary, 0, 640)
});
}
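Beyond removing the blocking call, a sketch of the whole handler might look like the following. Note that the question's code also recreates the SpeechClient and streamingCall on every OnBinary message, so the audio writes never reach the stream that received the config request; hoisting them out is an assumption about the intended design, and error handling and the restart-on-isFinal step are omitted for brevity:

```csharp
// Assumes the Fleck and Google.Cloud.Speech.V1 NuGet packages.
bool firstMessage = true;
SpeechClient speech = SpeechClient.Create();
SpeechClient.StreamingRecognizeStream streamingCall = null;

socket.OnOpen = () =>
{
    firstMessage = true;
};

socket.OnBinary = async binary =>
{
    if (firstMessage)
    {
        // One stream per utterance: open it once and send the config first
        streamingCall = speech.StreamingRecognize();
        await streamingCall.WriteAsync(new StreamingRecognizeRequest
        {
            StreamingConfig = new StreamingRecognitionConfig
            {
                Config = new RecognitionConfig
                {
                    Encoding = RecognitionConfig.Types.AudioEncoding.Linear16,
                    SampleRateHertz = 16000,
                    LanguageCode = "en",
                },
                SingleUtterance = true,
            }
        });
        // Fire-and-forget reader; keep the Task if you need to await completion
        _ = Task.Run(async () =>
        {
            while (await streamingCall.ResponseStream.MoveNext(
                default(CancellationToken)))
            {
                foreach (var result in streamingCall.ResponseStream.Current.Results)
                {
                    if (result.IsFinal)
                    {
                        Console.WriteLine(result.Alternatives[0].Transcript);
                        // This is where you would tear down the stream and
                        // reset firstMessage to start the next utterance
                    }
                }
            }
        });
        firstMessage = false;
    }
    else
    {
        // await instead of .Wait(): no blocking inside the async handler
        await streamingCall.WriteAsync(new StreamingRecognizeRequest
        {
            AudioContent = Google.Protobuf.ByteString.CopyFrom(binary, 0, 640)
        });
    }
};
```

The design point is that the config request and the audio writes must go to the same StreamingRecognize stream, and every write inside the async lambda is awaited rather than blocked on.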