new*_*ie 6 python machine-learning websocket keras tensorflow
I am building a simple chatbot with keras and WebSockets. I already have a model that makes a prediction on the user input and sends back the corresponding answer.
It works fine when I drive it from the command line, but as soon as I try to send the answer over the WebSocket, the WebSocket server does not even start anymore.
This is my working WebSocket code:
@sock.route('/api')
def echo(sock):
    while True:
        # get user input from browser
        user_input = sock.receive()
        # print user input on console
        print(user_input)
        # read answer from console
        response = input()
        # send response to browser
        sock.send(response)
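What I am actually trying to do is call the model from inside the handler instead of typing the answer into the console. Roughly like this (just a sketch; it assumes predict, response and json_data can all be imported from my model file chatty):

from chatty import predict, response, json_data  # assumption: all three live in chatty.py

@sock.route('/api')
def echo(sock):
    while True:
        # get user input from browser
        user_input = sock.receive()
        # run the keras model instead of reading the answer from the console
        ints = predict(user_input)
        answer = response(ints, json_data)
        # send the model's answer back to the browser
        sock.send(answer)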
This is the code I use to talk to the keras model on the command line:
while True:
    question = input("")
    ints = predict(question)
    answer = response(ints, json_data)
    print(answer)
The methods used are:
def predict(sentence):
    bag_of_words = convert_sentence_in_bag_of_words(sentence)
    # pass bag as list and get index 0
    prediction = model.predict(np.array([bag_of_words]))[0]
    ERROR_THRESHOLD = 0.25
    accepted_results = [[tag, probability] for tag, probability in enumerate(prediction) if probability > ERROR_THRESHOLD]
    accepted_results.sort(key=lambda x: x[1], reverse=True)

    output = []
    for accepted_result in accepted_results:
        output.append({'intent': classes[accepted_result[0]], 'probability': str(accepted_result[1])})
    print(output)
    return output


def response(intents, json):
    tag = intents[0]['intent']
    intents_as_list = json['intents']
    for i in intents_as_list:
        if i['tag'] == tag:
            res = random.choice(i['responses'])
            break
    return res
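For context, response only reads json_data['intents'] and the 'tag' and 'responses' keys of each entry. The snippet below is an assumed, made-up shape based on that indexing, not my actual intents file:

# assumed shape of json_data (values invented for illustration)
json_data = {
    'intents': [
        {'tag': 'greeting', 'responses': ['Hello!', 'Hi there!']},
        {'tag': 'goodbye',  'responses': ['Bye!', 'See you later!']}
    ]
}

# predict() returns something like [{'intent': 'greeting', 'probability': '0.97'}],
# so response() picks a random answer from the matching tag's responses.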
So when I start the WebSocket with the working code, I get this output:
* Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
* Restarting with stat
* Serving Flask app 'server' (lazy loading)
* Environment: production
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Debug mode: on
But as soon as anything from my model ends up in server.py, I only get the following output:
2022-02-13 11:31:38.887640: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:305] Could not identify NUMA node of platform GPU ID 0, defaulting to 0. Your kernel may not have been built with NUMA support.
2022-02-13 11:31:38.887734: I tensorflow/core/common_runtime/pluggable_device/pluggable_device_factory.cc:271] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 0 MB memory) -> physical PluggableDevice (device: 0, name: METAL, pci bus id: <undefined>)
Metal device set to: Apple M1
systemMemory: 16.00 GB
maxCacheSize: 5.33 GB
Even an import like this at the top is enough to trigger it: from chatty import response, predict - even though they are not used anywhere.
new*_*ie 4
I am annoyed that I just wasted two days on the dumbest possible problem (and fix).
I still had
while True:
    question = input("")
    ints = predict(question)
    answer = response(ints, json_data)
    print(answer)
in my model file, so the loop ran at import time and the server never got a chance to start. The fix was to remove it, and now everything works fine.
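If you want to keep a command-line test mode in the model file, you do not have to delete the loop entirely: guarding it so it only runs when the file is executed directly (and not when server.py imports predict and response) works as well. A minimal sketch of that pattern:

# chatty.py (model setup, predict() and response() defined above)

if __name__ == "__main__":
    # runs only when chatty.py is started directly,
    # not when the Flask server imports predict/response
    while True:
        question = input("")
        ints = predict(question)
        answer = response(ints, json_data)
        print(answer)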