Assertion Error: Device index out of range (0 devices available; device index should be between 0 and -1 inclusive)
I am working on a speech recognition project and using the Google speech recognition API. I have deployed the Django project to the GCP Flex environment using a Dockerfile.
Dockerfile:
FROM gcr.io/google-appengine/python
RUN apt-get update
RUN apt-get install libasound-dev portaudio19-dev libportaudio2 libportaudiocpp0 -y
RUN apt-get install -y python3-pyaudio
RUN virtualenv -p python3.7 /env
ENV VIRTUAL_ENV /env
ENV PATH /env/bin:$PATH
ADD requirements.txt /app/requirements.txt
RUN pip install -r /app/requirements.txt
ADD . /app
CMD gunicorn -b :$PORT main:app
app.yaml file:
runtime: custom
env: flex
entrypoint: gunicorn -b :$PORT main:app
runtime_config:
python_version: 3
Code to get the voice input:
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone(device_index=0) as source:
    print("speak")
    audio = r.listen(source)

try:
    voice_data = " " + r.recognize_google(audio)
except sr.UnknownValueError:  # speech was not understood
    voice_data = ""
I get the error: AssertionError - Device index out of range (0 devices available; device index should be between 0 and -1 inclusive). The assertion comes from this part of the SpeechRecognition library:
# set up PyAudio
self.pyaudio_module = self.get_pyaudio()
audio = self.pyaudio_module.PyAudio()
try:
    count = audio.get_device_count()  # obtain device count
    if device_index is not None:  # ensure device index is in range
        assert 0 <= device_index < count, "Device index out of range ({} devices available; device index should be between 0 and {} inclusive)".format(count, count - 1) …
    if sample_rate is None:  # automatically set the sample rate to the hardware's default sample rate if not specified
        device_info = audio.get_device_info_by_index(device_index) if device_index is not None else audio.get_default_input_device_info()
        assert isinstance(device_info.get("defaultSampleRate"), (float, int)) and device_info["defaultSampleRate"] > 0, "Invalid device info returned from PyAudio: {}".format(device_info)
        sample_rate = int(device_info["defaultSampleRate"])
except Exception:
    audio.terminate()
No audio device is detected when I open the app's URL. I need to capture sound through the hosted web application. What can I do to fix this?
It seems to be caused by the App Engine VM instance having no sound card. Even if a sound card/drivers were installed, I would like to know how to connect a microphone device to the instance.
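For reference, a minimal check of what PyAudio can actually see inside the container (a sketch, assuming the pyaudio package imports there at all); on an App Engine Flex instance the count comes back as 0, which is exactly why device_index=0 trips the range assertion:
import pyaudio

# Enumerate the audio devices PyAudio exposes in this environment.
p = pyaudio.PyAudio()
count = p.get_device_count()
print("audio devices available:", count)  # expected to be 0 on App Engine Flex
for i in range(count):
    info = p.get_device_info_by_index(i)
    print(i, info.get("name"), info.get("maxInputChannels"))
p.terminate()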
This question is tagged google-speech-api, but the Speech API client libraries are not used in the code you shared; instead, the Python package SpeechRecognition is used. Assuming you want to use the Speech API client libraries, you need to use streaming_recognize(), and I'm afraid you will have to change the code so that the voice input comes from the web user's microphone rather than from a local microphone device.
In this link we can find an example that streams from a file; note that streaming recognition converts the speech data on the fly and does not wait for the operation to complete like the other methods do. I am not a Python expert, but in that example you would need to change this line so that it reads from another source (the web user's microphone):
with io.open('./hello.wav', 'rb') as stream:
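As a rough sketch (not the linked sample itself), here is a minimal example of streaming_recognize() with the google-cloud-speech client library, assuming the audio arrives as raw LINEAR16 chunks produced by a hypothetical audio_chunks generator (for example, bytes forwarded from the browser over a WebSocket) instead of being read from ./hello.wav:
from google.cloud import speech

def transcribe_stream(audio_chunks, sample_rate=16000):
    # audio_chunks is an assumed iterable of raw LINEAR16 byte chunks.
    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
        sample_rate_hertz=sample_rate,
        language_code="en-US",
    )
    streaming_config = speech.StreamingRecognitionConfig(config=config)

    # Wrap each chunk of bytes in a StreamingRecognizeRequest.
    requests = (
        speech.StreamingRecognizeRequest(audio_content=chunk)
        for chunk in audio_chunks
    )
    responses = client.streaming_recognize(streaming_config, requests)

    # Results arrive incrementally; print the top alternative of each result.
    for response in responses:
        for result in response.results:
            print(result.alternatives[0].transcript)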
In your web application you would need to do something like the following (audio: true) to read from the user's microphone; see this link for further reference:
navigator.mediaDevices.getUserMedia({ audio: true, video: false })
.then(handleSuccess);
A complete example using this approach is the Google Cloud Speech Node with Socket Playground guide. You might want to reuse some of that NodeJS code to connect it to your current Python application. By the way, NodeJS is also available in App Engine Flex.
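If you would rather stay entirely in Python, one possible shape (a hypothetical sketch, not code from the guide) is to let the page record audio with getUserMedia/MediaRecorder, POST the recorded blob to Django, and have the view pass the bytes to the Speech-to-Text API. The view name, the "audio" form field, and the WEBM_OPUS encoding are all assumptions about how the browser side is set up:
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from google.cloud import speech

@csrf_exempt
def upload_audio(request):
    # Hypothetical endpoint: the browser POSTs the blob it recorded with MediaRecorder.
    audio_bytes = request.FILES["audio"].read()

    client = speech.SpeechClient()
    config = speech.RecognitionConfig(
        encoding=speech.RecognitionConfig.AudioEncoding.WEBM_OPUS,  # MediaRecorder default in Chrome (assumed)
        sample_rate_hertz=48000,
        language_code="en-US",
    )
    audio = speech.RecognitionAudio(content=audio_bytes)
    response = client.recognize(config=config, audio=audio)

    # Join the top alternative of each result into a single transcript string.
    transcript = " ".join(r.alternatives[0].transcript for r in response.results)
    return JsonResponse({"transcript": transcript})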