I have to send a lot of HTTP requests, and once all of them have returned, the program can continue. Sounds like a perfect match for asyncio. Somewhat naively, I wrapped my calls to requests in an async function and handed them to asyncio. This doesn't work.
After searching online I found two solutions:
- use a library designed for asyncio
- wrap the blocking code in run_in_executor
To understand this better, I wrote a small benchmark. The server side is a Flask program that waits 0.1 seconds before answering a request.
from flask import Flask
import time

app = Flask(__name__)

@app.route('/')
def hello_world():
    time.sleep(0.1)  # heavy calculations here :)
    return 'Hello World!'

if __name__ == '__main__':
    app.run()
The client is my benchmark:
import requests
from time import perf_counter, sleep

# this is the baseline, sequential calls to requests.get
start = perf_counter()
for i in range(10):
    r = requests.get("http://127.0.0.1:5000/")
stop = perf_counter()
print(f"synchronous took {stop-start} seconds")  # 1.062 secs …
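Of the two approaches, run_in_executor is the one that lets you keep using requests: each blocking call is pushed onto a worker thread, so the event loop can overlap all of them. A minimal sketch of the pattern, with a time.sleep standing in for the request to the local Flask server (so the example needs no running server):

```python
import asyncio
import time
from time import perf_counter

def blocking_call():
    # stand-in for requests.get("http://127.0.0.1:5000/")
    time.sleep(0.1)
    return "Hello World!"

async def main():
    loop = asyncio.get_running_loop()
    # each blocking call runs in a thread from the default executor,
    # so the event loop overlaps all ten instead of serializing them
    futures = [loop.run_in_executor(None, blocking_call) for _ in range(10)]
    return await asyncio.gather(*futures)

start = perf_counter()
results = asyncio.run(main())
elapsed = perf_counter() - start

print(len(results), round(elapsed, 2))  # 10 results, well under 1 s
```

With the sequential baseline above taking roughly 1 second, the threaded version finishes in a few tenths of a second, limited only by the size of the default thread pool.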
I need to run 20 tasks asynchronously (each task runs the same function, but with different arguments). Each task uses Python's yfinance API module. This is my current approach:
- a list args; each element is the argument to pass to the corresponding task
- a function get_data, which I will run 20 times, each time with a different argument
- a function main, which uses asyncio.gather to run the 20 tasks asynchronously
Here is the (pseudo)code:
import asyncio

stocks = []
args = ['arg1', 'arg2', ... , 'arg20']

async def get_data(arg):
    stock = Stock(arg)
    # do some yfinance calls
    return stock

async def main():
    global stocks
    tasks = [asyncio.ensure_future(get_data(arg)) for arg in args]
    stocks = await asyncio.gather(*tasks)

asyncio.run(main())
print(stocks)  # should be a list of 20 return values from the 20 …
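One caveat with this pattern: yfinance performs blocking HTTP under the hood, so an async def that never awaits anything still runs its 20 calls one after another even under gather. Wrapping the blocking work in asyncio.to_thread (Python 3.9+) moves it to a thread pool so the tasks genuinely overlap. A sketch, where fetch_stock is a placeholder for the yfinance calls (as Stock is in the question):

```python
import asyncio
import time

def fetch_stock(arg):
    # placeholder for the blocking yfinance calls
    time.sleep(0.1)
    return f"stock:{arg}"

async def get_data(arg):
    # run the blocking fetch in a worker thread, not on the event loop
    return await asyncio.to_thread(fetch_stock, arg)

async def main():
    args = [f"arg{i}" for i in range(1, 21)]
    tasks = [asyncio.ensure_future(get_data(a)) for a in args]
    # gather preserves the order of args in its result list
    return await asyncio.gather(*tasks)

stocks = asyncio.run(main())
print(len(stocks))  # 20
```

The results come back in the same order as args, so the list can be zipped back onto the inputs afterwards.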
On FastAPI, I have an endpoint that calls the coroutine functions get_1 and get_2 below.
get_1 uses await redis.get(key)
get_2 uses await asyncio.ensure_future(redis.get(key))
Is there any difference between the two functions in terms of functionality and performance?
#redis.py
import asyncio
import aioredis

async def get_1(key):
    redis = aioredis.from_url("redis://localhost")
    value = await redis.get(key)
    return value

async def get_2(key):
    redis = aioredis.from_url("redis://localhost")
    value = await asyncio.ensure_future(redis.get(key))
    return value
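Functionally the two are equivalent: awaiting the coroutine directly and awaiting a Task wrapped around it return the same value. The difference is that ensure_future first creates a Task, which schedules the coroutine on the event loop with a little extra bookkeeping; that is useful when you want to start work without awaiting it immediately, but pointless for an immediate await. A network-free sketch of the equivalence, with fake_get standing in for redis.get:

```python
import asyncio

async def fake_get(key):
    # stand-in for redis.get(key)
    await asyncio.sleep(0)
    return f"value-for-{key}"

async def get_1(key):
    # await the coroutine directly
    return await fake_get(key)

async def get_2(key):
    # same result, but the coroutine is first wrapped in a Task
    return await asyncio.ensure_future(fake_get(key))

async def main():
    return await get_1("k"), await get_2("k")

v1, v2 = asyncio.run(main())
print(v1 == v2)  # True
```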
I am trying to learn to optimize scripts with asyncio in Python. My example returns a coroutine was never awaited warning, can you help me understand it and find a way to fix it?
import time
import datetime
import random
import asyncio
import aiohttp
import requests

def requete_bloquante(num):
    print(f'Get {num}')
    uid = requests.get("https://httpbin.org/uuid").json()['uuid']
    print(f"Res {num}: {uid}")

def faire_toutes_les_requetes():
    for x in range(10):
        requete_bloquante(x)

print("Bloquant : ")
start = datetime.datetime.now()
faire_toutes_les_requetes()
exec_time = (datetime.datetime.now() - start).seconds
print(f"Pour faire 10 requêtes, ça prend {exec_time}s\n")

async def requete_sans_bloquer(num, session):
    print(f'Get {num}')
    async with session.get("https://httpbin.org/uuid") as response:
        uid = (await response.json())['uuid']
        print(f"Res {num}: {uid}")

async def faire_toutes_les_requetes_sans_bloquer():
    loop = asyncio.get_event_loop()
    with aiohttp.ClientSession() …
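The warning means a coroutine object was created but never handed to the event loop, i.e. an async def was called like a plain function. The fix is to await each call, or schedule them all at once with asyncio.gather (and note that ClientSession should be entered with async with, not with). A network-free sketch of the pattern, with fetch standing in for the aiohttp request:

```python
import asyncio

async def fetch(num):
    # stand-in for `async with session.get(...)` in the code above
    await asyncio.sleep(0.01)
    return f"uuid-{num}"

async def run_all():
    # fetch(x) on its own only *creates* a coroutine object; if it is
    # never awaited nor gathered, Python emits "coroutine was never awaited"
    coros = [fetch(x) for x in range(10)]
    return await asyncio.gather(*coros)

results = asyncio.run(run_all())
print(len(results))  # 10
```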
I am trying to open multiple web sessions and save data into a CSV. I wrote my code using a for loop and the requests.get option, but it takes too long to access 90 web locations. Can anyone tell me how the whole process can run in parallel for loc_var:
The code runs fine; the only problem is that it runs over loc_var one by one, and that takes a long time.
I want to access all the for-loop loc_var URLs in parallel and do the write-to-CSV operation.
Below is the code:
import pandas as pd
import numpy as np
import os
import requests
import datetime
import zipfile

t = datetime.date.today() - datetime.timedelta(2)
server = [("A","web1",":5000","username=usr&password=p7Tdfr")]

'''List of all web_ips'''
web_1 = ["Web1","Web2","Web3","Web4","Web5","Web6","Web7","Web8","Web9","Web10","Web11","Web12","Web13","Web14","Web15"]
'''List of All location'''
loc_var = ["post1","post2","post3","post4","post5","post6","post7","post8","post9","post10","post11","post12","post13","post14","post15","post16","post17","post18"]

for s,web,port,usr in server:
    login_url = 'http://'+web+port+'/api/v1/system/login/?'+usr
    print(login_url)
    s = requests.session()
    login_response = s.post(login_url)
    print("login Response", login_response)
    # Start accessing the Web for loc_var
    for mkt in loc_var:
        #output is …
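Since the code is built on requests, the simplest way to hit the loc_var URLs in parallel is a thread pool: map a worker over the locations and let concurrent.futures handle the fan-out. A sketch with a dummy fetch_loc in place of the real per-location request and CSV write (the name and the sleep are placeholders):

```python
import time
from concurrent.futures import ThreadPoolExecutor

loc_var = [f"post{i}" for i in range(1, 19)]

def fetch_loc(mkt):
    # placeholder for s.get(...) against one location plus the CSV write
    time.sleep(0.1)
    return f"data-for-{mkt}"

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=18) as pool:
    # pool.map fans the calls out over threads and returns results
    # in the same order as loc_var
    results = list(pool.map(fetch_loc, loc_var))
elapsed = time.perf_counter() - start

print(len(results), round(elapsed, 2))  # 18 results, roughly 0.1 s
```

One design note: if every thread writes to the same CSV file, either collect the results first (as above) and write once at the end, or protect the writes with a lock.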
The description of the asyncio module reads:
This module provides infrastructure for writing single-threaded concurrent code using coroutines, multiplexing I/O access over sockets and other resources, running network clients and servers, and other related primitives.
I have been reading about the new and fabulous asyncio Python module/package/whatever. I know about the Python GIL, and asyncio fits the GIL well because (the intent is that) you manage everything with an event loop on a single thread. But what exactly is concurrent here? Well, the I/O seems to be. I believe I/O operations are handled by the operating system. So internally, does asyncio implement a concurrent C extension that uses the functions the OS provides?
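Mostly there is no dedicated C extension: CPython's default asyncio event loop is built on the stdlib selectors module, a thin wrapper over the I/O multiplexing calls the OS already provides (select, epoll on Linux, kqueue on BSD/macOS). The loop registers its sockets and asks the OS which are ready, all on one thread. A minimal sketch of that underlying mechanism using a socket pair:

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll/kqueue/select, chosen per OS
a, b = socket.socketpair()

# ask the OS to watch `b` for readability -- this is essentially what
# the asyncio event loop does for every socket it manages
sel.register(b, selectors.EVENT_READ)

a.send(b"ping")                    # make `b` readable
events = sel.select(timeout=1)     # the OS reports the ready file objects
for key, mask in events:
    data = key.fileobj.recv(4)
    print(data)                    # b'ping'

sel.unregister(b)
a.close(); b.close(); sel.close()
```

The event loop adds callback scheduling and coroutine bookkeeping on top of this readiness notification, but the actual concurrency comes from the OS overlapping the I/O.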
python ×5
asynchronous ×3
aiohttp ×2
aioredis ×1
concurrency ×1
coroutine ×1
fastapi ×1
pandas ×1
python-3.x ×1
yfinance ×1