Tags: python, multilabel-classification, fastapi, uvicorn, simpletransformers
I am trying to run a service that classifies text with a Simple Transformers RoBERTa model. The inference script/function works as expected when tested on its own, but when I include it in a FastAPI endpoint, it shuts the server down.
Versions:

uvicorn==0.11.8
fastapi==0.61.1
simpletransformers==0.51.6

cmd: uvicorn --host 0.0.0.0 --port 5000 src.main:app

Endpoint:

@app.get("/article_classify")
def classification(text: str):
    """Function to classify an article using a deep learning model.

    Returns:
        [type]: [description]
    """
    _, _, result = inference(text)
    return result

Error:
INFO:     Started server process [8262]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
INFO:     Uvicorn running on http://0.0.0.0:5000 (Press CTRL+C to quit)
INFO:     127.0.0.1:36454 - "GET / HTTP/1.1" 200 OK
INFO:     127.0.0.1:36454 - "GET /favicon.ico HTTP/1.1" 404 Not Found
INFO:     127.0.0.1:36454 - "GET /docs HTTP/1.1" 200 OK
INFO:     127.0.0.1:36454 - "GET /openapi.json HTTP/1.1" 200 OK
before
100%|██████████| 1/1 [00:00<00:00, 17.85it/s]
INFO:     Shutting down
INFO:     Finished server process [8262]

Inference script:
model_name = "checkpoint-3380-epoch-20"
model = MultiLabelClassificationModel("roberta", "src/outputs/" + model_name)

def inference(input_text, model_name="checkpoint-3380-epoch-20"):
    """Function to run inference on one sample text."""
    # model = MultiLabelClassificationModel("roberta", "src/outputs/" + model_name)
    all_tags = []
    if isinstance(input_text, str):
        print("before")
        result, output = model.predict([input_text])
        print(result)
        tags = []
        for idx, each in enumerate(result[0]):
            if each == 1:
                tags.append(classes[idx])
        all_tags.append(tags)
    elif isinstance(input_text, list):
        result, output = model.predict(input_text)
        for res in result:
            tags = []
            for idx, each in enumerate(res):
                if each == 1:
                    tags.append(classes[idx])
            all_tags.append(tags)

    return result, output, all_tags

Update: I tried the same service with Flask and it works, but when uvicorn is run on top of the Flask app it gets stuck in a restart loop.
Although the accepted solution works, I'd like to suggest a less hacky alternative: use uvicorn workers instead.
You may want to try adding --workers 4 to your CMD so that it reads:
uvicorn --host 0.0.0.0 --port 5000 --workers 4 src.main:app
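If you prefer launching the server from Python rather than the shell, the equivalent call is shown below. This is a minimal sketch, assuming the app is importable as src.main:app (as in the question); note that uvicorn only honours workers > 1 when the app is passed as an import string rather than as an object.

# run_server.py -- hypothetical launcher script, not part of the question
import uvicorn

if __name__ == "__main__":
    # pass the app as an import string so the --workers equivalent takes effect
    uvicorn.run("src.main:app", host="0.0.0.0", port=5000, workers=4)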
I solved this by explicitly running the inference in a separate process started with multiprocessing's 'spawn' start method.
from multiprocessing import set_start_method
from multiprocessing import Process, Manager

# force the 'spawn' start method; it may already have been set, hence the try/except
try:
    set_start_method('spawn')
except RuntimeError:
    pass


@app.get("/article_classify")
def classification(text: str):
    """function to classify article using a deep learning model.

    Returns:
        [type]: [description]
    """
    manager = Manager()
    return_result = manager.dict()
    # run the inference in a separate process, as running it inside the
    # server process was shutting uvicorn down
    p = Process(target=inference, args=(text, return_result,))
    p.start()
    p.join()
    # print(return_result)
    result = return_result['all_tags']
    return result
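For this to work, the inference function has to accept the managed dict and write its output into it, since a child process's return value is not visible to the parent. A minimal sketch of that adaptation, assuming the same model checkpoint and classes list as in the question (the return_result parameter and the 'all_tags' key are the only new pieces):

def inference(input_text, return_result, model_name="checkpoint-3380-epoch-20"):
    """Run inference in the child process and store the tags in the shared dict."""
    # assumption: the model is loaded here so the spawned process has its own copy
    model = MultiLabelClassificationModel("roberta", "src/outputs/" + model_name)
    result, output = model.predict([input_text])
    # map the 1/0 predictions back to class names, as in the original script
    tags = [classes[idx] for idx, each in enumerate(result[0]) if each == 1]
    return_result['all_tags'] = tags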