I am trying to record audio in Expo and get a transcription of it using Google's Speech-to-Text service.
It already works on iOS but not yet on Android. I believe the problem is with the Android recording options.
I don't get an error response from Google's server, only an empty object.
Here is the code:
import { useState } from "react";
import { Platform } from "react-native";
import { Audio } from "expo-av";
import * as Permissions from "expo-permissions";
import * as FileSystem from "expo-file-system";

const recordingOptions = {
  // android not currently in use, but parameters are required
  android: {
    extension: ".m4a",
    outputFormat: Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_MPEG_4,
    audioEncoder: Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AAC,
    sampleRate: 44100,
    numberOfChannels: 1,
    bitRate: 128000,
  },
  ios: {
    extension: ".wav",
    audioQuality: Audio.RECORDING_OPTION_IOS_AUDIO_QUALITY_HIGH,
    sampleRate: 44100,
    numberOfChannels: 1,
    bitRate: 128000,
    linearPCMBitDepth: 16,
    linearPCMIsBigEndian: false,
    linearPCMIsFloat: false,
  },
};
const [recording, setRecording] = useState<Audio.Recording | null>(null);

const startRecording = async () => {
  const { status } = await Permissions.askAsync(Permissions.AUDIO_RECORDING);
  if (status !== "granted") return;

  // some of these are not applicable, but are required
  await Audio.setAudioModeAsync({
    allowsRecordingIOS: true,
    interruptionModeIOS: Audio.INTERRUPTION_MODE_IOS_DO_NOT_MIX,
    playsInSilentModeIOS: true,
    shouldDuckAndroid: true,
    interruptionModeAndroid: Audio.INTERRUPTION_MODE_ANDROID_DO_NOT_MIX,
    playThroughEarpieceAndroid: true,
  });

  const newRecording = new Audio.Recording();
  try {
    await newRecording.prepareToRecordAsync(recordingOptions);
    await newRecording.startAsync();
  } catch (error) {
    console.log(error);
    stopRecording();
  }
  setRecording(newRecording);
};
const stopRecording = async () => {
  try {
    await recording!.stopAndUnloadAsync();
  } catch (error) {
    // Do nothing -- we are already unloaded.
  }
};
const getAudioTranscription = async () => {
  try {
    const info = await FileSystem.getInfoAsync(recording!.getURI()!);
    console.log(`FILE INFO: ${JSON.stringify(info)}`);
    const uri = info.uri;
    // toDataUrl is a helper (not shown) that reads the file and passes a base64 data URL to the callback
    await toDataUrl(uri, async function (base64content: string) {
      if (Platform.OS == "ios")
        base64content = base64content.replace("data:audio/vnd.wave;base64,", "");
      else
        base64content = base64content.replace("data:audio/aac;base64,", "");
      console.log(recording?._options?.android);

      const body = {
        audio: {
          content: base64content,
        },
        config: {
          enableAutomaticPunctuation: true,
          encoding: "LINEAR16",
          languageCode: "pt-BR",
          model: "default",
          sampleRateHertz: 44100,
        },
      };

      const transcriptResponse = await fetch(
        "https://speech.googleapis.com/v1p1beta1/speech:recognize?key=MY_KEY",
        { method: "POST", body: JSON.stringify(body) }
      );
      const data = await transcriptResponse.json();
      const userMessage = (data.results && data.results[0].alternatives[0].transcript) || "";
    });
  } catch (error) {
    console.log("There was an error", error);
  }
  stopRecording();
};
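A quick way to confirm whether Google is really returning an error or just an empty result set is to log the raw response body before parsing it (a sketch reusing the transcriptResponse from above):

// Diagnostic sketch: clone the response so the body can be read twice,
// then log the raw text to see exactly what the Speech API sent back.
const rawBody = await transcriptResponse.clone().text();
console.log("Speech API raw response:", rawBody);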
This combination definitely works, although I ran into plenty of other issues getting Expo to behave before reaching that conclusion.
android: {
  extension: ".amr",
  outputFormat: Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_AMR_WB,
  audioEncoder: Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_AMR_WB,
  sampleRate: 16000,
  numberOfChannels: 1,
  bitRate: 128000,
},
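The request config has to match these recording options. Assuming the same request body shape as in the question, a matching config would presumably look like this (AMR_WB in the Speech-to-Text API expects a 16000 Hz sample rate; the languageCode is just an example):

// Sketch of the matching Speech-to-Text config for the AMR_WB options above.
// encoding and sampleRateHertz must agree with the recording settings.
config: {
  encoding: "AMR_WB",
  sampleRateHertz: 16000,
  languageCode: "pt-BR", // example locale; use whichever you need
},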
I ran into the same problem and found a solution. I am using Expo with the Google Cloud Speech-to-Text service, and these are the compatible recording options I found for recording on Android.
// this is for the recordingOptions while creating the audio
android: {
  extension: ".webm",
  outputFormat: Audio.RECORDING_OPTION_ANDROID_OUTPUT_FORMAT_WEBM,
  audioEncoder: Audio.RECORDING_OPTION_ANDROID_AUDIO_ENCODER_DEFAULT,
  sampleRate: 16000,
  numberOfChannels: 1,
  bitRate: 64000,
},

// and for the request config you need to add this when calling the API
config: {
  encoding: "WEBM_OPUS",
  sampleRateHertz: 16000,
  languageCode: "fr-FR",
},
This works well for me, and for iOS the options you already provided work fine. Hope it works for you too!
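If you want both platforms in a single code path, here is a rough sketch (assuming the question's request body, the Platform import from react-native, and the question's pt-BR locale) that picks the config per platform:

// Sketch: pick the Speech-to-Text config based on the platform that recorded the audio.
// iOS uses the question's LINEAR16 / 44100 Hz options; Android uses WEBM_OPUS / 16000 Hz.
const speechConfig =
  Platform.OS === "ios"
    ? { encoding: "LINEAR16", sampleRateHertz: 44100 }
    : { encoding: "WEBM_OPUS", sampleRateHertz: 16000 };

const body = {
  audio: { content: base64content },
  config: {
    ...speechConfig,
    languageCode: "pt-BR", // or "fr-FR", etc.
    enableAutomaticPunctuation: true,
    model: "default",
  },
};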