Error: 7 PERMISSION_DENIED: Your application has authenticated using end user credentials from the Google Cloud SDK
There have been no code changes inside my websocket server for a few months, but using it today it seems the Google Speech-to-Text API no longer allows authenticating with an access token.
This was my working approach until I ran into this error today:
const client = new speech.SpeechClient({
access_token: ACCESS_TOKEN,
projectId: 'project-name'
});
This gives me the error quoted above in the title.
I also tried switching to a service account (which I have used in the past) by setting the environment like this:
export GOOGLE_APPLICATION_CREDENTIALS="path-to-key.json"
Then I ran the client without the code above, just:
const client = new speech.SpeechClient();
This instead gives me this beautiful error, even though the environment was set up with the project ID at that point:
Error: Unable to detect a Project Id in the current environment.
Any help resolving this would be greatly appreciated!
I was able to follow the Official Quickstart and got it working by using Client Libraries with no issues. I will explain what I did right below.
From Cloud Speech-to-Text - Quickstart:
Create or select a project:
gcloud config set project YOUR_PROJECT_NAME
Enable the Cloud Speech-to-Text API for the current project:
gcloud services enable speech.googleapis.com
Create a service account:
gcloud iam service-accounts create [SA-NAME] \
--description "[SA-DESCRIPTION]" \
--display-name "[SA-DISPLAY-NAME]"
Download the private key as JSON:
gcloud iam service-accounts keys create ~/key.json \
--iam-account [SA-NAME]@[PROJECT-ID].iam.gserviceaccount.com
Set the environment variable GOOGLE_APPLICATION_CREDENTIALS to the file path of the JSON file that contains your service account key:
export GOOGLE_APPLICATION_CREDENTIALS="[PATH]"
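If the "Unable to detect a Project Id" error from the question comes up even after this step, a quick way to check whether the key is actually being picked up as Application Default Credentials is a small script like the one below. This is my own sanity-check sketch, not part of the quickstart; it assumes google-auth-library is available (npm install google-auth-library if it is not already present as a dependency):

// check-adc.js - sanity check for Application Default Credentials (sketch only)
const {GoogleAuth} = require('google-auth-library');

async function checkCredentials() {
  const auth = new GoogleAuth({
    scopes: ['https://www.googleapis.com/auth/cloud-platform'],
  });
  // Resolves whatever GOOGLE_APPLICATION_CREDENTIALS points to
  const client = await auth.getClient();
  // This is the lookup that fails with "Unable to detect a Project Id"
  const projectId = await auth.getProjectId();
  console.log(`Credential type: ${client.constructor.name}`);
  console.log(`Detected project: ${projectId}`);
}

checkCredentials().catch(console.error);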
Install the client library:
npm install --save @google-cloud/speech
I created a quickstart.js file and put the following code sample in it:
'use strict';
// [START speech_quickstart]
async function main() {
// Imports the Google Cloud client library
const speech = require('@google-cloud/speech');
const fs = require('fs');
// Creates a client
const client = new speech.SpeechClient();
// The name of the audio file to transcribe
const fileName = './resources/audio.raw';
// Reads a local audio file and converts it to base64
const file = fs.readFileSync(fileName);
const audioBytes = file.toString('base64');
// The audio file's encoding, sample rate in hertz, and BCP-47 language code
const audio = {
content: audioBytes,
};
const config = {
encoding: 'LINEAR16',
sampleRateHertz: 16000,
languageCode: 'en-US',
};
const request = {
audio: audio,
config: config,
};
// Detects speech in the audio file
const [response] = await client.recognize(request);
const transcription = response.results
.map(result => result.alternatives[0].transcript)
.join('\n');
console.log(`Transcription: ${transcription}`);
}
main().catch(console.error);
WHERE const fileName = './resources/audio.raw' is the path where your test.raw audio file is located.
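Since the setup in the question runs inside a websocket server, the streaming API is probably closer to the real use case than the one-shot recognize call. Below is only a sketch of the streaming variant, reusing the quickstart's LINEAR16 / 16000 Hz settings, with the local audio file standing in for whatever audio the websocket delivers (this part is not in the official quickstart):

// streaming.js - streaming variant of the quickstart (sketch only)
const speech = require('@google-cloud/speech');
const fs = require('fs');

const client = new speech.SpeechClient();

// Open a bidirectional streaming recognize request
const recognizeStream = client
  .streamingRecognize({
    config: {
      encoding: 'LINEAR16',
      sampleRateHertz: 16000,
      languageCode: 'en-US',
    },
    interimResults: false, // set to true to also receive partial transcripts
  })
  .on('error', console.error)
  .on('data', data => {
    if (data.results[0] && data.results[0].alternatives[0]) {
      console.log(`Transcription: ${data.results[0].alternatives[0].transcript}`);
    }
  });

// In a websocket server the audio chunks would come from the socket;
// here the quickstart's local file stands in for that audio source.
fs.createReadStream('./resources/audio.raw').pipe(recognizeStream);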
I solved the environment problem and the subsequent errors by doing the following:
const options = {
keyFilename: 'path-to-key.json',
projectId: 'project-name',
};
const client = new speech.SpeechClient(options);
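If you would rather keep using the GOOGLE_APPLICATION_CREDENTIALS environment variable instead of a hard-coded keyFilename, passing only the projectId explicitly should also avoid the "Unable to detect a Project Id" error, since it skips project ID auto-detection. A sketch of that variant ('project-name' is a placeholder):

// Variant: credentials come from GOOGLE_APPLICATION_CREDENTIALS,
// only the project ID is passed explicitly (sketch)
const speech = require('@google-cloud/speech');

const client = new speech.SpeechClient({
  projectId: 'project-name', // replace with your Google Cloud project ID
});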