Asynchronous code slower than synchronous

paw*_*lty 1 python python-3.x python-requests python-asyncio aiohttp

My program does the following:

  • Gets a folder of txt files
  • For each file:
    • Reads the file
    • Makes a POST request with the file contents to an API on localhost
    • Parses the XML response (not in the example below)

I was worried about the performance of the synchronous version of the program, so I tried to make it asynchronous with aiohttp (this is my first attempt at async programming in Python, apart from Scrapy). It turned out that the async code took twice as long, and I don't understand why.

Synchronous code (152 seconds)

import json
import os
from glob import glob

import requests

url = "http://localhost:6090/api/analyzexml"
package = # name of the package I send in each request
with open("template.txt", "r", encoding="utf-8") as f:
    template = f.read()

articles_path = # location of my text files

def fetch(session, url, article_text):
    data = {"package": package, "data": template.format(article_text)}
    response = session.post(url, data=json.dumps(data))
    print(response.text)

files = glob(os.path.join(articles_path, "*.txt"))

with requests.Session() as s:
    for file in files:
        with open(file, "r", encoding="utf-8") as f:
            article_text = f.read()
        fetch(s, url, article_text)

Profiling results:

+--------+---------+----------+---------+----------+-------------------------------------------------------+
| ncalls | tottime | percall  | cumtime | percall  |               filename:lineno(function)               |
+--------+---------+----------+---------+----------+-------------------------------------------------------+
|    849 |   145.6 |   0.1715 |   145.6 |   0.1715 | ~:0(<method 'recv_into' of '_socket.socket' objects>) |
|      2 |   1.001 |   0.5007 |   1.001 |   0.5007 | ~:0(<method 'connect' of '_socket.socket' objects>)   |
|    365 |   0.772 | 0.002115 |   1.001 | 0.002742 | ~:0(<built-in method builtins.print>)                 |
+--------+---------+----------+---------+----------+-------------------------------------------------------+

(Wannabe) asynchronous code (327 seconds)

import asyncio
import json
import os
from glob import glob

from aiohttp import ClientSession

async def fetch(session, url, article_text):
    data = {"package": package, "data": template.format(article_text)}
    async with session.post(url, data=json.dumps(data)) as response:
        return await response.text()

async def process_files(articles_path):
    tasks = []

    async with ClientSession() as session:
        files = glob(os.path.join(articles_path, "*.txt"))
        for file in files:
            with open(file, "r", encoding="utf-8") as f:
                article_text = f.read()
            task = asyncio.ensure_future(fetch(session=session,
                                        url=url,
                                        article_text=article_text
                                        ))
            tasks.append(task)
            responses = await asyncio.gather(*tasks)
            print(responses)


loop = asyncio.get_event_loop()
future = asyncio.ensure_future(process_files(articles_path))
loop.run_until_complete(future)

Profiling results:

+--------+---------+---------+---------+---------+-----------------------------------------------+
| ncalls | tottime | percall | cumtime | percall |           filename:lineno(function)           |
+--------+---------+---------+---------+---------+-----------------------------------------------+
|   2278 |     156 | 0.06849 |     156 | 0.06849 | ~:0(<built-in method select.select>)          |
|    365 |   128.3 |  0.3516 |   168.9 |  0.4626 | ~:0(<built-in method builtins.print>)         |
|    730 |   40.54 | 0.05553 |   40.54 | 0.05553 | ~:0(<built-in method _codecs.charmap_encode>) |
+--------+---------+---------+---------+---------+-----------------------------------------------+

I'm clearly missing something here. Could someone also help me understand why printing takes so much time in the async version (see the profiling results)?

900*_*000 5

Because it isn't asynchronous :)

Look at your code: you run responses = await asyncio.gather(*tasks) for every file, so you are essentially fetching synchronously while paying all the overhead of the coroutine machinery on every iteration.

I suspect it's just an indentation mistake: if you unindent responses = await asyncio.gather(*tasks) so that it sits outside the for file in files loop, your tasks will actually start in parallel.
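The effect of that one indentation level can be demonstrated without any HTTP server at all. In this sketch the POST request is replaced by a 0.1-second asyncio.sleep stand-in (an assumption purely for illustration); the only difference between the two coroutines is where the gather happens:

```python
import asyncio
import time

async def fetch(i):
    # Stand-in for the real HTTP POST: just sleep 0.1 s and
    # return the task's index.
    await asyncio.sleep(0.1)
    return i

async def gather_inside_loop(n):
    # The question's version: gather is awaited INSIDE the loop,
    # so every pending task finishes before the next one is created.
    tasks = []
    for i in range(n):
        tasks.append(asyncio.ensure_future(fetch(i)))
        results = await asyncio.gather(*tasks)
    return results

async def gather_after_loop(n):
    # The fix: create all tasks first, gather ONCE after the loop,
    # letting the sleeps (i.e. the requests) overlap.
    tasks = [asyncio.ensure_future(fetch(i)) for i in range(n)]
    return await asyncio.gather(*tasks)

def timed(coro):
    # Run a coroutine to completion and report its wall-clock time.
    start = time.perf_counter()
    result = asyncio.run(coro)
    return result, time.perf_counter() - start
```

With five tasks, gather_inside_loop takes roughly 5 × 0.1 s because the sleeps run back to back, while gather_after_loop takes roughly 0.1 s in total because they overlap. Both return the same results, which is exactly the serialized-versus-parallel behavior the answer describes.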