Creating suitable WAV files for Google Speech API
I'm recording my voice to a WAV file with pyaudio, using the following code:
import pyaudio
import wave

def voice_recorder():
    FORMAT = pyaudio.paInt16
    CHANNELS = 2
    RATE = 22050
    CHUNK = 1024
    RECORD_SECONDS = 4
    WAVE_OUTPUT_FILENAME = "first.wav"

    audio = pyaudio.PyAudio()

    # start recording
    stream = audio.open(format=FORMAT, channels=CHANNELS,
                        rate=RATE, input=True,
                        frames_per_buffer=CHUNK)
    print "konusun..."
    frames = []

    for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
        data = stream.read(CHUNK)
        frames.append(data)
    # print "finished recording"

    # stop recording
    stream.stop_stream()
    stream.close()
    audio.terminate()

    # write the captured frames as a 16-bit PCM (LINEAR16) WAV file
    waveFile = wave.open(WAVE_OUTPUT_FILENAME, 'wb')
    waveFile.setnchannels(CHANNELS)
    waveFile.setsampwidth(audio.get_sample_size(FORMAT))
    waveFile.setframerate(RATE)
    waveFile.writeframes(b''.join(frames))
    waveFile.close()
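Before the file goes anywhere, it can help to print what actually ended up in the WAV header, because that is what the API compares against the request configuration. A small check with the standard-library wave module (just a sketch; the file name is the one written by the recorder above):

import wave

# Print the header fields the Speech API checks against the request config.
wf = wave.open('first.wav', 'rb')
print "channels:", wf.getnchannels()              # 2 with the recorder above
print "sample width (bytes):", wf.getsampwidth()  # 2, i.e. 16-bit LINEAR16
print "sample rate:", wf.getframerate()           # 22050
wf.close()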
On the Google Speech API side I'm using the following sample, which basically transcribes the speech in a WAV file to text: https://github.com/GoogleCloudPlatform/python-docs-samples/blob/master/speech/api-client/transcribe.py
When I feed the WAV file produced by pyaudio into Google's code, I get the following error:
googleapiclient.errors.HttpError: <HttpError 400 when requesting https://speech.googleapis.com/v1beta1/speech:syncrecognize?alt=json returned "Invalid Configuration, Does not match Wav File Header.
Wav Header Contents:
Encoding: LINEAR16
Channels: 2
Sample Rate: 22050.
Request Contents:
Encoding: linear16
Channels: 1
Sample Rate: 22050.">
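Looking at the header dump, the only mismatch is the channel count: the uploaded file is stereo (Channels: 2) while the request declares mono (Channels: 1); the encoding and sample rate already agree. A minimal sketch of down-mixing the captured frames to mono before they are written, so the header matches what the request declares (the audioop call is my assumption, not part of the recorder above; it would replace the wave-writing lines at the end of voice_recorder()):

import audioop

# Down-mix the interleaved 16-bit stereo frames to a single channel
# (width=2 bytes per sample, weight both channels equally).
mono_data = audioop.tomono(b''.join(frames), 2, 0.5, 0.5)

waveFile = wave.open('first_mono.wav', 'wb')
waveFile.setnchannels(1)        # header now says Channels: 1
waveFile.setsampwidth(2)        # 16-bit samples, i.e. LINEAR16
waveFile.setframerate(RATE)     # 22050, as in the header dump
waveFile.writeframes(mono_data)
waveFile.close()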
As a workaround, I convert the WAV file to MP3 with ffmpeg and then convert the MP3 back to WAV with sox:
import os
import subprocess

def wav_to_mp3():
    FNULL = open(os.devnull, 'w')
    subprocess.call(['ffmpeg', '-i', 'first.wav', '-ac', '1', '-ab', '6400', '-ar', '16000', 'second.mp3', '-y'], stdout=FNULL, stderr=subprocess.STDOUT)

def mp3_to_wav():
    subprocess.call(['sox', 'second.mp3', '-r', '16000', 'son.wav'])
Google's API accepts this WAV output, but so much quality is lost along the way that the results are poor.
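The quality loss presumably comes from the very low-bitrate MP3 in the middle, so a single lossless step with sox reading the WAV directly might already behave better. A sketch, assuming sox applies the rate and channel conversion implied by the output options, just as the mp3_to_wav() step above relies on for -r 16000:

import subprocess

def wav_to_wav_16k_mono():
    # Hypothetical one-step variant: no MP3 intermediate, so nothing is
    # re-encoded lossily; -r/-c only set the output rate and channel count.
    subprocess.call(['sox', 'first.wav', '-r', '16000', '-c', '1', 'son.wav'])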
So how can I create a Google-compatible WAV file with pyaudio in the first step?
Converting the wav file to a flac file with avconv and sending that to the Google Speech API solved the problem:
subprocess.call(['avconv', '-i', 'first.wav', '-y', '-ar', '48000', '-ac', '1', 'last.flac'])
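For reference, the same call wrapped in the style of the helper functions from the question (the function name is mine):

import subprocess

def wav_to_flac():
    # FLAC is lossless, so unlike the MP3 detour nothing is degraded here;
    # avconv only down-mixes to mono (-ac 1) and resamples to 48 kHz (-ar 48000).
    subprocess.call(['avconv', '-i', 'first.wav', '-y',
                     '-ar', '48000', '-ac', '1', 'last.flac'])

The request configuration sent to the API then has to describe this file (FLAC, 48000 Hz, one channel) rather than the original WAV, otherwise the same mismatch error presumably comes back.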