WebSocket not working when trying to send generated answer by keras
I'm implementing a simple chatbot with keras and WebSockets. I now have a model that makes a prediction about the user input and sends the corresponding answer.
When I do it through the command line it works fine, but when I try to send the answer through my WebSocket, the WebSocket doesn't even start anymore.
Here is my working WebSocket code:
@sock.route('/api')
def echo(sock):
    while True:
        # get user input from browser
        user_input = sock.receive()
        # print user input on console
        print(user_input)
        # read answer from console
        response = input()
        # send response to browser
        sock.send(response)
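For context, the failing variant is presumably something like the sketch below (the question doesn't show it verbatim, so the exact wiring of json_data is an assumption):

from chatty import response, predict  # per the question, this import alone already breaks startup

@sock.route('/api')
def echo(sock):
    while True:
        # get user input from browser
        question = sock.receive()
        # let the keras model pick an intent and an answer
        ints = predict(question)
        answer = response(ints, json_data)  # json_data assumed to be loaded/imported as well
        # send response to browser
        sock.send(answer)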
And this is the code I use to communicate with the keras model on the command line:
while True:
    question = input("")
    ints = predict(question)
    answer = response(ints, json_data)
    print(answer)
The methods used are:
def predict(sentence):
    bag_of_words = convert_sentence_in_bag_of_words(sentence)
    # pass bag as list and get index 0
    prediction = model.predict(np.array([bag_of_words]))[0]
    ERROR_THRESHOLD = 0.25
    accepted_results = [[tag, probability] for tag, probability in enumerate(prediction) if probability > ERROR_THRESHOLD]
    accepted_results.sort(key=lambda x: x[1], reverse=True)
    output = []
    for accepted_result in accepted_results:
        output.append({'intent': classes[accepted_result[0]], 'probability': str(accepted_result[1])})
    print(output)
    return output
def response(intents, json):
    tag = intents[0]['intent']
    intents_as_list = json['intents']
    for i in intents_as_list:
        if i['tag'] == tag:
            res = random.choice(i['responses'])
            break
    return res
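For reference, response expects json_data to follow the usual intents layout for this kind of bot. The actual file isn't shown in the question, so this structure is a hypothetical example:

json_data = {
    'intents': [
        {
            'tag': 'greeting',
            'patterns': ['Hi', 'Hello', 'Hey'],
            'responses': ['Hello!', 'Hi there!']
        },
        {
            'tag': 'goodbye',
            'patterns': ['Bye', 'See you'],
            'responses': ['Goodbye!', 'Take care!']
        }
    ]
}

# predict would then return something like
# [{'intent': 'greeting', 'probability': '0.97'}]
# and response(ints, json_data) picks a random string from the matching 'responses' list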
So when I start the WebSocket with the working code, I get this output:
* Serving Flask app 'server' (lazy loading)
* Environment: production
  WARNING: This is a development server. Do not use it in a production deployment.
  Use a production WSGI server instead.
* Debug mode: on
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
But as soon as I have anything from my model in the server.py file, I only get this output:
2022-02-13 11:31:38.887640: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-02-13 11:31:38.887734: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Metal device set to: Apple M1
systemMemory: 16.00 GB
maxCacheSize: 5.33 GB
It's already enough to import them at the top like this: from chatty import response, predict, even when they are never used.
There is no problem with your WebSocket route. Can you share how you are triggering this route? WebSocket is a different protocol, and I suspect you are using an HTTP client to test the WebSocket. For example, in Postman:
[Screenshot: Postman's "New" request screen]
An HTTP request is different from a WebSocket request, so you should use a proper WebSocket client to test the route.
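A minimal sketch of such a client in Python, assuming the websockets package (pip install websockets) and the server from the question running locally on port 5000:

import asyncio
import websockets

async def main():
    # connect to the flask-sock route from the question
    async with websockets.connect('ws://127.0.0.1:5000/api') as ws:
        await ws.send('Hello bot')  # arrives at the route via sock.receive()
        reply = await ws.recv()     # whatever the route passes to sock.send(...)
        print(reply)

asyncio.run(main())

Alternatively, the browser's built-in WebSocket API or Postman's WebSocket request type (not a plain HTTP request) works just as well.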
I'm frustrated with myself: I just wasted two days on the dumbest problem (and fixed it).
I still had
while True:
    question = input("")
    ints = predict(question)
    answer = response(ints, json_data)
    print(answer)
in my model file. Python runs module-level code at import time, so importing chatty blocked forever on that input() loop and the server never started. The fix was to delete the loop, and now everything works fine.
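Instead of deleting the loop, a common alternative is to guard it so it only runs when chatty.py is executed directly, not when server.py imports it; a sketch:

# chatty.py
# ... model, predict, response and json_data defined above ...

if __name__ == '__main__':
    # interactive CLI loop: runs only via `python chatty.py`,
    # not on `from chatty import response, predict`
    while True:
        question = input("")
        ints = predict(question)
        answer = response(ints, json_data)
        print(answer)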