How to use a local audio file instead of a URI in the Speech-to-Text API in Flutter?
I am using the Google Speech-to-Text API from the googleapis package. However, I could not find any documentation (for Dart and Flutter) explaining how to use a local audio file from the app's assets folder as the audio data when sending a RecognizeRequest.fromJson. I would like to know how to use a local file in place of the audio content in _json in the code below. Thanks in advance.
// Uses SpeechApi and RecognizeRequest from package:googleapis/speech/v1.dart and
// clientViaServiceAccount from package:googleapis_auth/auth_io.dart;
// _credentials and _scopes are defined elsewhere.
final httpClient = await clientViaServiceAccount(_credentials, _scopes);
try {
  final speech2Text = SpeechApi(httpClient);
  final _json = {
    "config": {
      "encoding": "FLAC",
      "sampleRateHertz": 16000,
      "languageCode": "en-US",
      "enableWordTimeOffsets": false
    },
    "audio": {"uri": "gs://cloud-samples-tests/speech/brooklyn.flac"}
  };
  final _recognizeRequest = RecognizeRequest.fromJson(_json);
  final response = await speech2Text.speech.recognize(_recognizeRequest);
  for (var result in response.results) {
    print(result.toJson());
  }
} finally {
  httpClient.close();
}
I finally managed to do it by looking at the example from the google_speech package.
- Add the audio file as an asset in pubspec.yaml:
flutter:
  assets:
    - assets/brooklyn.flac
- Then copy the file from the assets:
import 'dart:io';

import 'package:flutter/services.dart' show rootBundle;
import 'package:path_provider/path_provider.dart';

// Copies the asset into the app documents directory so it can be read as a File.
Future<void> _copyFileFromAssets(String name) async {
  var data = await rootBundle.load('assets/$name');
  final directory = await getApplicationDocumentsDirectory();
  final path = directory.path + '/$name';
  await File(path).writeAsBytes(
      data.buffer.asUint8List(data.offsetInBytes, data.lengthInBytes));
}
- Then get the audio content:
// Reads the audio bytes from disk, copying the asset there first if needed.
Future<List<int>> _getAudioContent(String name) async {
  final directory = await getApplicationDocumentsDirectory();
  final path = directory.path + '/$name';
  if (!File(path).existsSync()) {
    await _copyFileFromAssets(name);
  }
  return File(path).readAsBytesSync().toList();
}
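As a side note (not part of the original answer): if the bytes are only needed in memory for the request, the copy-to-disk step can be skipped and the asset read straight from the bundle. A minimal sketch, with _getAudioContentFromBundle as a hypothetical helper name:
// Hypothetical alternative to _getAudioContent: load the asset bytes directly
// from the bundle instead of writing a copy to the documents directory.
Future<List<int>> _getAudioContentFromBundle(String name) async {
  final data = await rootBundle.load('assets/$name');
  return data.buffer
      .asUint8List(data.offsetInBytes, data.lengthInBytes)
      .toList();
}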
- Now encode the content to Base64:
// base64Encode comes from dart:convert.
final audio = await _getAudioContent('brooklyn.flac');
String audio64 = base64Encode(audio);
- Pass the encoded string as the content string in the audio part:
final _json = {
  "config": {
    "encoding": "FLAC",
    "sampleRateHertz": 16000,
    "languageCode": "en-US",
    "enableWordTimeOffsets": false
  },
  // "audio": {"uri": "gs://cloud-samples-tests/speech/brooklyn.flac"}
  "audio": {"content": audio64},
};
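Putting the pieces together, this is roughly what the full request looks like with local audio content. It is only a sketch: it reuses the _credentials, _scopes and _getAudioContent from above, and the wrapper name _recognizeLocalFile is just for illustration.
Future<void> _recognizeLocalFile() async {
  final httpClient = await clientViaServiceAccount(_credentials, _scopes);
  try {
    final speech2Text = SpeechApi(httpClient);
    final audio = await _getAudioContent('brooklyn.flac');
    final _json = {
      "config": {
        "encoding": "FLAC",
        "sampleRateHertz": 16000,
        "languageCode": "en-US",
        "enableWordTimeOffsets": false
      },
      "audio": {"content": base64Encode(audio)},
    };
    final response =
        await speech2Text.speech.recognize(RecognizeRequest.fromJson(_json));
    for (var result in response.results) {
      print(result.toJson());
    }
  } finally {
    httpClient.close();
  }
}
Keep in mind the synchronous recognize call is meant for short clips; for longer audio Google still expects a GCS uri with the long-running variant.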
I hope this helps anyone facing a similar problem.