I am trying to use google_speech1 for Rust, but the documentation only provides incomplete examples, which makes it very difficult for me, being new to both Rust and the Google Speech API, to figure out how to send a speech recognition request.
More specifically, I would like to send a local audio file, specify the source language, and retrieve the transcription.
Here is the closest thing I could find in the official documentation (https://docs.rs/google-speech1/1.0.8+20181005/google_speech1/struct.SpeechRecognizeCall.html):
use speech1::RecognizeRequest;
// As the method needs a request, you would usually fill it with the desired information
// into the respective structure. Some of the parts shown here might not be applicable !
// Values shown here are possibly random and not representative !
let mut req = RecognizeRequest::default();
// You can configure optional parameters by calling the respective setters at will, and
// execute the final call using `doit()`.
// Values shown here are possibly random and not representative !
let result = hub.speech().recognize(req)
.doit();
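For the stated goal (send a local file, specify the language, get a transcription back), filling the request might look like the sketch below. The `RecognitionConfig`, `RecognitionAudio`, and `RecognizeRequest` field names are taken from the google_speech1 1.0.x docs; the file path, encoding, and sample rate are placeholder assumptions you would adjust to your own audio, and the `base64` crate is assumed for encoding the file contents.

```rust
extern crate base64;
extern crate google_speech1 as speech1;

use speech1::{RecognitionAudio, RecognitionConfig, RecognizeRequest};
use std::fs;

fn build_request(path: &str) -> RecognizeRequest {
    // Read the local audio file and base64-encode it, as required by the
    // `content` field of RecognitionAudio (alternatively, `uri` can point
    // to a file in Google Cloud Storage).
    let bytes = fs::read(path).expect("could not read audio file");

    let mut config = RecognitionConfig::default();
    config.encoding = Some("FLAC".to_string());       // adjust to your file format
    config.sample_rate_hertz = Some(16000);           // adjust to your recording
    config.language_code = Some("en-US".to_string()); // the source language

    let mut audio = RecognitionAudio::default();
    audio.content = Some(base64::encode(&bytes));

    let mut req = RecognizeRequest::default();
    req.config = Some(config);
    req.audio = Some(audio);
    req
}
```

The transcription would then be read from the results in the response returned by `hub.speech().recognize(req).doit()`.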
UPDATE: Taking a step back, even the simple example provided on the site does not seem to run correctly. Here is some very basic sample code:
pub mod speech_api_demo {
    extern crate google_speech1 as speech1;
    extern crate hyper;
    extern crate hyper_rustls;
    extern crate yup_oauth2 as oauth2;

    use oauth2::{ApplicationSecret, Authenticator, DefaultAuthenticatorDelegate, MemoryStorage};
    use speech1::Speech;
    use speech1::{Error, Result};
    use std::fs::File;
    use std::io::Read;

    #[derive(Deserialize, Serialize, Default)]
    pub struct ConsoleApplicationSecret {
        pub web: Option<ApplicationSecret>,
        pub installed: Option<ApplicationSecret>,
    }

    pub fn speech_sample_demo() {
        /*
           Custom code to generate the application secret
        */
        let mut file =
            File::open("C:\\Users\\YOURNAME\\.google-service-cli\\speech1-secret.json").unwrap();
        let mut data = String::new();
        file.read_to_string(&mut data).unwrap();

        use serde_json as json;
        let my_console_secret = json::from_str::<ConsoleApplicationSecret>(&data);
        assert!(my_console_secret.is_ok());
        let unwrapped_console_secret = my_console_secret.unwrap();
        assert!(unwrapped_console_secret.installed.is_some() && unwrapped_console_secret.web.is_none());
        let secret: ApplicationSecret = unwrapped_console_secret.installed.unwrap();
        /*
           Custom code to generate the application secret - END
        */

        // Instantiate the authenticator. It will choose a suitable authentication flow for you,
        // unless you replace `None` with the desired Flow.
        // Provide your own `AuthenticatorDelegate` to adjust the way it operates and get feedback about
        // what's going on. You probably want to bring in your own `TokenStorage` to persist tokens and
        // retrieve them from storage.
        let auth = Authenticator::new(
            &secret,
            DefaultAuthenticatorDelegate,
            hyper::Client::with_connector(hyper::net::HttpsConnector::new(
                hyper_rustls::TlsClient::new(),
            )),
            <MemoryStorage as Default>::default(),
            None,
        );

        let mut hub = Speech::new(
            hyper::Client::with_connector(hyper::net::HttpsConnector::new(
                hyper_rustls::TlsClient::new(),
            )),
            auth,
        );

        let result = hub.operations().get("name").doit();

        match result {
            Err(e) => match e {
                // The Error enum provides details about what exactly happened.
                // You can also just use its `Debug`, `Display` or `Error` traits.
                Error::HttpError(_)
                | Error::MissingAPIKey
                | Error::MissingToken(_)
                | Error::Cancelled
                | Error::UploadSizeLimitExceeded(_, _)
                | Error::Failure(_)
                | Error::BadRequest(_)
                | Error::FieldClash(_)
                | Error::JsonDecodeError(_, _) => println!("{}", e),
            },
            Ok(res) => println!("Success: {:?}", res),
        }
    }
}
Running this code (calling speech_sample_demo) produces the following error:
Token retrieval failed with error: Invalid Scope: 'no description given'
I also tried some very ugly code to force a scope into the request, but it made no difference. I have a hard time understanding what this error means. Am I missing something in my request, or is something blocking it on the other end? Or maybe the API code base is simply broken?
Please also note that the client ID and client secret provided by default no longer work; when I used them, I got an error saying the account had been deleted.
I then set up an OAuth 2.0 client and generated the JSON file, which I copied to the default location, and then started getting the error above. Maybe it is just that I did not set up my Google API account correctly, but either way, it would help if someone else could give it a try to see whether I am the only one having these problems.
Once I get past such a simple request, I have more code ready to test with an audio file, but for now it fails very early in the process.
The error you are getting originates from here and means that the OAuth scope you used when generating your credentials file does not allow you to access the Google Speech API. So the problem is not in your Rust code, but in the script you used to generate your OAuth access tokens.
Basically, it means that when you generated your OAuth JSON file, you requested access to the Google API in a generic way, without stating which specific APIs you intended to use. According to this document, you need to request access to the https://www.googleapis.com/auth/cloud-platform scope.
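On the Rust side, the call builders generated by google-apis-rs also expose an `add_scope` method, so the required scope can be stated explicitly on the call itself. A sketch, assuming the `hub` and `req` values from the code in your question:

```rust
// Explicitly request the cloud-platform scope for this call, so the
// authenticator knows which scope to ask for when fetching a token.
let result = hub
    .speech()
    .recognize(req)
    .add_scope("https://www.googleapis.com/auth/cloud-platform")
    .doit();
```

This only helps once the underlying OAuth client is set up correctly; the credentials file itself still has to be generated as described above.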