Watson ASR python WebSocket

I implemented WebSockets in Python using the websocket-client library in order to do real-time speech recognition with Watson ASR. The solution worked until recently, but about a month ago it stopped working; there is not even a handshake. The strange thing is that I did not change the code (below). A colleague using a different account has the same problem, so we do not think there is anything wrong with our accounts. I have contacted IBM about it, but since there is no handshake they cannot trace whether anything is wrong on their side. The WebSocket code looks like this:

import websocket
(...)
ws = websocket.WebSocketApp(
    self.api_url,
    header=headers,
    on_message=self.on_message,
    on_error=self.on_error,
    on_close=self.on_close,
    on_open=self.on_open
)

where the url is 'wss://stream.watsonplatform.net/speech-to-text/api/v1/recognize', headers carries the authorization token, and the remaining arguments are the callback functions and methods that handle events. What currently happens is that this call runs and then waits until the connection times out. I would like to know whether anyone else running live ASR with Watson in Python through this websocket-client library is seeing the same problem.
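
For context, a minimal sketch of what the elided header setup could look like in this approach (hypothetical; websocket-client accepts header as a dict or a list of "Key: value" strings, and <TOKEN> is obtained separately, as shown further down in the thread):

# Hypothetical sketch only; <TOKEN> is the authorization token mentioned above.
headers = {"X-Watson-Authorization-Token": "<TOKEN>"}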

Thanks for the information about the headers. Here is what I get with them.

I am using websocket-client 0.54.0, which is currently the latest version. I generated a token with
curl -u <USERNAME>:<PASSWORD>  "https://stream.watsonplatform.net/authorization/api/v1/token?url=https://stream.watsonplatform.net/speech-to-text/api"
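
For reference, roughly the same token request can be made from Python with the requests library (a sketch assuming the same <USERNAME>/<PASSWORD> placeholders; the response body is the token):

import requests

# Sketch: GET the authorization endpoint with basic auth; the response body is the token.
resp = requests.get(
    "https://stream.watsonplatform.net/authorization/api/v1/token",
    params={"url": "https://stream.watsonplatform.net/speech-to-text/api"},
    auth=("<USERNAME>", "<PASSWORD>"),
)
token = resp.text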

Using the returned token in the code below, I was able to get the handshake:

import websocket

try:
    import thread
except ImportError:
    import _thread as thread
import time
import json


def on_message(ws, message):
    print(message)


def on_error(ws, error):
    print(error)


def on_close(ws):
    print("### closed ###")

def on_open(ws):
    def run(*args):
        for i in range(3):
            time.sleep(1)
            ws.send("Hello %d" % i)
        time.sleep(1)
        ws.close()
        print("thread terminating...")

    thread.start_new_thread(run, ())


if __name__ == "__main__":
    # headers["Authorization"] = "Basic " + base64.b64encode(auth.encode()).decode('utf-8')
    websocket.enableTrace(True)
    ws = websocket.WebSocketApp("wss://stream.watsonplatform.net/speech-to-text/api/v1/recognize",
                                on_message=on_message,
                                on_error=on_error,
                                on_close=on_close,
                                header={
                                    "X-Watson-Authorization-Token": "<TOKEN>"})
    ws.on_open = on_open
    ws.run_forever()

The output:

--- request header ---
GET /speech-to-text/api/v1/recognize HTTP/1.1
Upgrade: websocket
Connection: Upgrade
Host: stream.watsonplatform.net
Origin: http://stream.watsonplatform.net
Sec-WebSocket-Key: Yuack3TM04/MPePJzvH8bA==
Sec-WebSocket-Version: 13
X-Watson-Authorization-Token: <TOKEN>


-----------------------
--- response header ---
HTTP/1.1 101 Switching Protocols
Date: Tue, 04 Dec 2018 12:13:57 GMT
Content-Type: application/octet-stream
Connection: upgrade
Upgrade: websocket
Sec-Websocket-Accept: 4te/E4t9+T8pBtxabmxrvPZfPfI=
x-global-transaction-id: a83c91fd1d100ff0cb2a6f50a7690694
X-DP-Watson-Tran-ID: a83c91fd1d100ff0cb2a6f50a7690694
-----------------------
send: b'\x81\x87\x9fd\xd9\xae\xd7\x01\xb5\xc2\xf0D\xe9'
Connection is already closed.
### closed ###

Process finished with exit code 0

According to RFC 6455, the server should respond with 101 Switching Protocols:

The handshake from the server looks as follows:

    HTTP/1.1 101 Switching Protocols
    Upgrade: websocket
    Connection: Upgrade
    Sec-WebSocket-Accept: s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
    Sec-WebSocket-Protocol: chat

Additionally, when I use ws:// instead of wss://, I get an operation timed out error.

Update: live speech recognition example - https://github.com/watson-developer-cloud/python-sdk/blob/master/examples/microphone-speech-to-text.py

@zedavid A little over a month ago we switched to IAM, so username/password authentication was replaced by an IAM apikey. You should migrate your Cloud Foundry Speech to Text instance to IAM. There is a Migration page where you can learn more about this. You can also create a new Speech to Text instance, which by default will be a resource-controlled instance.

Once you have the new instance, you will need to obtain an access_token, which is analogous to the token in Cloud Foundry. The access_token is what authorizes your requests.
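
Outside the SDK, the exchange can be done against the IAM token endpoint. A minimal sketch, assuming the requests library and the IAM endpoint https://iam.cloud.ibm.com/identity/token:

import requests

# Sketch: exchange the IAM apikey for a short-lived access_token.
resp = requests.post(
    "https://iam.cloud.ibm.com/identity/token",
    headers={"Content-Type": "application/x-www-form-urlencoded"},
    data={
        "grant_type": "urn:ibm:params:oauth:grant-type:apikey",
        "apikey": "<APIKEY>",
    },
)
access_token = resp.json()["access_token"]

The access_token then authorizes the request in place of the old Cloud Foundry token (for example as an Authorization: Bearer header instead of X-Watson-Authorization-Token).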

Finally, we recently released WebSocket support for Speech to Text and Text to Speech in the Python SDK. I encourage you to use it rather than writing the token exchange and WebSocket connection management code yourself:

# Imports assuming the watson_developer_cloud Python SDK (since renamed to ibm_watson)
from os.path import join, dirname
import threading

from watson_developer_cloud import SpeechToTextV1
from watson_developer_cloud.websocket import RecognizeCallback, AudioSource

service = SpeechToTextV1(
    iam_apikey='YOUR APIKEY',
    url='https://stream.watsonplatform.net/speech-to-text/api')

# Example using websockets
class MyRecognizeCallback(RecognizeCallback):
    def __init__(self):
        RecognizeCallback.__init__(self)

    def on_transcription(self, transcript):
        print(transcript)

    def on_connected(self):
        print('Connection was successful')

    def on_error(self, error):
        print('Error received: {}'.format(error))

    def on_inactivity_timeout(self, error):
        print('Inactivity timeout: {}'.format(error))

    def on_listening(self):
        print('Service is listening')

    def on_hypothesis(self, hypothesis):
        print(hypothesis)

    def on_data(self, data):
        print(data)

# Example using threads in a non-blocking way
mycallback = MyRecognizeCallback()
audio_file = open(join(dirname(__file__), '../resources/speech.wav'), 'rb')
audio_source = AudioSource(audio_file)
recognize_thread = threading.Thread(
    target=service.recognize_using_websocket,
    args=(audio_source, "audio/l16; rate=44100", mycallback))
recognize_thread.start()
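
Depending on how the rest of the application is structured, you may also want to wait for the recognition thread to finish and release the file handle afterwards, for example:

recognize_thread.join()  # optional: block until the websocket session ends
audio_file.close()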