Architecture: Flask vs. FastAPI

Jus*_*eoh 24 python wsgi flask asgi fastapi

I have been tinkering with Flask and FastAPI to see how each acts as a server.
One of the main things I wanted to know is how Flask and FastAPI handle multiple requests from multiple clients,
especially when the code is inefficient (e.g. a long database query).

So I wrote some simple code to test this.
The code is trivial: when a client hits the route, the application sleeps for 10 seconds before returning a result.
It looks like this:

FastAPI

import uvicorn
from fastapi import FastAPI
from time import sleep
app = FastAPI()

@app.get('/')
async def root():
    print('Sleeping for 10')
    sleep(10)
    print('Awake')
    return {'message': 'hello'}

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)

Flask

from flask import Flask
from flask_restful import Resource, Api
from time import sleep

app = Flask(__name__)
api = Api(app)

class Root(Resource):
    def get(self):
        print('Sleeping for 10')
        sleep(10)
        print('Awake')
        return {'message': 'hello'}

api.add_resource(Root, '/')

if __name__ == "__main__":
    app.run()

After starting each application, I tried accessing it from 2 different Chrome windows at the same time. Here are the results:

FastAPI

[screenshot: FastAPI console output]

Flask

[screenshot: Flask console output]

As you can see, with FastAPI the code waits the full 10 seconds before processing the next request, whereas with Flask the next request is processed while the 10-second sleep is still in progress.

Despite some googling, I could not find a straight answer on this topic.
If anyone can shed some light on it, please leave a comment.

All your input is appreciated. Thank you all for your time.

EDIT: An update on this. I explored a bit more and found the concept of a process manager. For example, we can run uvicorn under a process manager (gunicorn). By adding more workers, I was able to get behavior similar to Flask's. Still testing the limits of this, though. https://www.uvicorn.org/deployment/
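For reference, the gunicorn-managed setup described in the deployment docs is started with an invocation along these lines (the module path `main:app` is a placeholder for your own module and FastAPI instance):

```shell
# Run 4 uvicorn worker processes under gunicorn's process manager.
# "main:app" is a placeholder: <module>:<FastAPI instance>.
gunicorn -w 4 -k uvicorn.workers.UvicornWorker main:app
```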

Thanks to everyone who left a comment! Appreciate it.

Yag*_*nci 34

This looked interesting, so I ran some tests with ApacheBench.

Flask

from flask import Flask
from flask_restful import Resource, Api


app = Flask(__name__)
api = Api(app)


class Root(Resource):
    def get(self):
        return {"message": "hello"}


api.add_resource(Root, "/")

FastAPI

from fastapi import FastAPI


app = FastAPI(debug=False)


@app.get("/")
async def root():
    return {"message": "hello"}

I ran 2 tests for FastAPI, and there was a huge difference between them:

  1. gunicorn -w 4 -k uvicorn.workers.UvicornWorker fast_api:app
  2. uvicorn fast_api:app --reload

So here are the benchmark results for 5000 requests at a concurrency of 500:
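For reproducibility: an ApacheBench run with those parameters looks like this (host and port assumed from the code above):

```shell
# 5000 total requests, 500 concurrent, against the local server
ab -n 5000 -c 500 http://127.0.0.1:8000/
```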

FastAPI with Uvicorn workers

Concurrency Level:      500
Time taken for tests:   0.577 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      720000 bytes
HTML transferred:       95000 bytes
Requests per second:    8665.48 [#/sec] (mean)
Time per request:       57.700 [ms] (mean)
Time per request:       0.115 [ms] (mean, across all concurrent requests)
Transfer rate:          1218.58 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    6   4.5      6      30
Processing:     6   49  21.7     45     126
Waiting:        1   42  19.0     39     124
Total:         12   56  21.8     53     127

Percentage of the requests served within a certain time (ms)
  50%     53
  66%     64
  75%     69
  80%     73
  90%     81
  95%     98
  98%    112
  99%    116
 100%    127 (longest request)

FastAPI - pure Uvicorn

Concurrency Level:      500
Time taken for tests:   1.562 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      720000 bytes
HTML transferred:       95000 bytes
Requests per second:    3200.62 [#/sec] (mean)
Time per request:       156.220 [ms] (mean)
Time per request:       0.312 [ms] (mean, across all concurrent requests)
Transfer rate:          450.09 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    8   4.8      7      24
Processing:    26  144  13.1    143     195
Waiting:        2  132  13.1    130     181
Total:         26  152  12.6    150     203

Percentage of the requests served within a certain time (ms)
  50%    150
  66%    155
  75%    158
  80%    160
  90%    166
  95%    171
  98%    195
  99%    199
 100%    203 (longest request)

Flask

Concurrency Level:      500
Time taken for tests:   27.827 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      830000 bytes
HTML transferred:       105000 bytes
Requests per second:    179.68 [#/sec] (mean)
Time per request:       2782.653 [ms] (mean)
Time per request:       5.565 [ms] (mean, across all concurrent requests)
Transfer rate:          29.13 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   87 293.2      0    3047
Processing:    14 1140 4131.5    136   26794
Waiting:        1 1140 4131.5    135   26794
Total:         14 1227 4359.9    136   27819

Percentage of the requests served within a certain time (ms)
  50%    136
  66%    148
  75%    179
  80%    198
  90%    295
  95%   7839
  98%  14518
  99%  27765
 100%  27819 (longest request)

Overall results

Flask test time: 27.827 seconds

FastAPI - pure Uvicorn test time: 1.562 seconds

FastAPI - Uvicorn workers test time: 0.577 seconds


With Uvicorn workers, FastAPI is almost 48 times faster than Flask, which is understandable: ASGI vs. WSGI. So I also ran the tests with a concurrency of 1:

FastAPI - Uvicorn workers test time: 1.615 seconds

FastAPI - pure Uvicorn test time: 2.681 seconds

Flask test time: 5.541 seconds

I ran some more tests to benchmark Flask with a production server.

5000 requests, 1000 concurrency

Flask with waitress
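The waitress run would have been started with something along these lines (the module path `flask_api:app` is a guess at the file name):

```shell
# Serve the Flask WSGI app with waitress on port 8000
waitress-serve --listen=127.0.0.1:8000 flask_api:app
```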

Server Software:        waitress
Server Hostname:        127.0.0.1
Server Port:            8000

Document Path:          /
Document Length:        21 bytes

Concurrency Level:      1000
Time taken for tests:   3.403 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      830000 bytes
HTML transferred:       105000 bytes
Requests per second:    1469.47 [#/sec] (mean)
Time per request:       680.516 [ms] (mean)
Time per request:       0.681 [ms] (mean, across all concurrent requests)
Transfer rate:          238.22 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    4   8.6      0      30
Processing:    31  607 156.3    659     754
Waiting:        1  607 156.3    658     753
Total:         31  611 148.4    660     754

Percentage of the requests served within a certain time (ms)
  50%    660
  66%    678
  75%    685
  80%    691
  90%    702
  95%    728
  98%    743
  99%    750
 100%    754 (longest request)

Gunicorn with Uvicorn workers

Server Software:        uvicorn
Server Hostname:        127.0.0.1
Server Port:            8000

Document Path:          /
Document Length:        19 bytes

Concurrency Level:      1000
Time taken for tests:   0.634 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      720000 bytes
HTML transferred:       95000 bytes
Requests per second:    7891.28 [#/sec] (mean)
Time per request:       126.722 [ms] (mean)
Time per request:       0.127 [ms] (mean, across all concurrent requests)
Transfer rate:          1109.71 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   28  13.8     30      62
Processing:    18   89  35.6     86     203
Waiting:        1   75  33.3     70     171
Total:         20  118  34.4    116     243

Percentage of the requests served within a certain time (ms)
  50%    116
  66%    126
  75%    133
  80%    137
  90%    161
  95%    189
  98%    217
  99%    230
 100%    243 (longest request)

Pure Uvicorn, but this time with 4 workers: uvicorn fastapi:app --workers 4

Server Software:        uvicorn
Server Hostname:        127.0.0.1
Server Port:            8000

Document Path:          /
Document Length:        19 bytes

Concurrency Level:      1000
Time taken for tests:   1.147 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      720000 bytes
HTML transferred:       95000 bytes
Requests per second:    4359.68 [#/sec] (mean)
Time per request:       229.375 [ms] (mean)
Time per request:       0.229 [ms] (mean, across all concurrent requests)
Transfer rate:          613.08 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   20  16.3     17      70
Processing:    17  190  96.8    171     501
Waiting:        3  173  93.0    151     448
Total:         51  210  96.4    184     533

Percentage of the requests served within a certain time (ms)
  50%    184
  66%    209
  75%    241
  80%    260
  90%    324
  95%    476
  98%    504
  99%    514
 100%    533 (longest request)

  • Hi Yagizcan! Thank you so much for the thorough analysis! The benchmark results are very informative about the practical differences between them. Appreciate it! (4 upvotes)
  • I was thinking the same. You are comparing a production-ready server like uvicorn with a development server like Werkzeug. You should compare against Flask running with waitress, which is a recommended way to deploy a production-ready Flask application: https://flask.palletsprojects.com/en/1.1.x/tutorial/deploy/#run-with-a-production-server (3 upvotes)
  • @marianobianchi Fair point. I actually benchmarked the code from the OP, but I will run the tests again with waitress and update the answer with the new results, thanks! (3 upvotes)
  • @YagizDegirmenci: Why not compare apples to apples and run Flask with gunicorn (and the same number of workers)? That seems fairer to me than using waitress (see: https://stackshare.io/stackups/gunicorn-vs-waitress). (2 upvotes)

zha*_*nov 12

Why the code is slow

A blocking operation stops the event loop that runs the tasks. When you call the sleep() function, all tasks (requests) wait until it completes, which eliminates all the benefits of asynchronous code execution.

To understand why this comparison is misleading, we should understand how asynchronous code works in Python and know a little about the GIL. Concurrency and async code are explained very well in the FastAPI documentation.

@Asotos has already described why your code is slow. Yes, you should use coroutines for I/O operations, because they do not block the execution of the event loop (time.sleep() is a blocking operation, while the coroutine asyncio.sleep() is non-blocking). Using async functions is sound advice so that the event loop is not blocked, but at the moment not every library has an async version.

Optimizing without async functions and asyncio.sleep

If you cannot use an async version of a library, you can simply define the route function as a plain def function instead of async def.

If a route function is declared as synchronous (def), FastAPI will cleverly run it in an external threadpool, so the main thread running the event loop will not be blocked, and requests are served concurrently without using await asyncio.sleep(). The FastAPI documentation explains this well in its section on async.

Solution

from time import sleep

import uvicorn
from fastapi import FastAPI


app = FastAPI()

@app.get('/')
def root():
    print('Sleeping for 10')
    sleep(10)
    print('Awake')
    return {'message': 'hello'}

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)

By the way, because of the GIL, you will not gain much if the operation running in the threadpool is CPU-bound (e.g. heavy computation). CPU-bound tasks must be run in a separate process.
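As a sketch of that last point: a CPU-bound call can be handed to a process pool from async code via loop.run_in_executor, so neither the event loop nor the GIL serializes the work. The heavy() function here is a made-up stand-in for real computation:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor


def heavy(n):
    # CPU-bound stand-in: sum of squares below n
    return sum(i * i for i in range(n))


async def main():
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # Offload to a worker process; the event loop stays free
        # to serve other requests while this runs.
        result = await loop.run_in_executor(pool, heavy, 100_000)
    return result


if __name__ == "__main__":
    print(asyncio.run(main()))
```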


小智 9

I think you are blocking the event queue in FastAPI, which is an asynchronous framework, whereas in Flask each request probably runs in a new thread. Move all CPU-bound tasks to separate processes, or, in the FastAPI example, just sleep on the event loop (do not use time.sleep here): run IO-bound tasks asynchronously in FastAPI.


Aso*_*tos 9

You are using the time.sleep() function in an async endpoint. time.sleep() is blocking and should never be used in asynchronous code. What you should be using is probably the asyncio.sleep() function:

import asyncio
import uvicorn
from fastapi import FastAPI
app = FastAPI()

@app.get('/')
async def root():
    print('Sleeping for 10')
    await asyncio.sleep(10)
    print('Awake')
    return {'message': 'hello'}

if __name__ == "__main__":
    uvicorn.run(app, host="127.0.0.1", port=8000)

That way, each request will take ~10 sec to complete, but you will be able to serve multiple requests concurrently.
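To make the "concurrently" part concrete, here is a small self-contained demo (using short 0.2-second sleeps instead of 10 so it finishes quickly): five tasks that sleep cooperatively finish in roughly the time of one sleep, not five.

```python
import asyncio
import time


async def task(i):
    await asyncio.sleep(0.2)  # non-blocking: yields to the event loop
    return i


def run_demo(n):
    async def main():
        start = time.perf_counter()
        # gather schedules all n tasks on the loop at once
        results = await asyncio.gather(*(task(i) for i in range(n)))
        return results, time.perf_counter() - start
    return asyncio.run(main())


if __name__ == "__main__":
    results, elapsed = run_demo(5)
    print(results, f"{elapsed:.2f}s")  # elapsed is ~0.2s, not ~1.0s
```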

In general, async frameworks offer replacements for all blocking functions inside the standard library (sleep functions, IO functions, etc.). You are meant to use those replacements when writing async code and (optionally) await them.

Some non-blocking frameworks and libraries such as gevent, do not offer replacements. They instead monkey-patch functions in the standard library to make them non-blocking. This is not the case, as far as I know, for the newer async frameworks and libraries though, because they are meant to allow the developer to use the async-await syntax.