I'm downloading images with aiohttp and want to know if there is a way to limit the number of open requests that haven't finished yet. This is the code I currently have:
async def get_images(url, session):
    chunk_size = 100
    # Print statement to show when a request is being made.
    print(f'Making request to {url}')
    async with session.get(url=url) as r:
        with open('path/name.png', 'wb') as file:
            while True:
                chunk = await r.content.read(chunk_size)
                if not chunk:
                    break
                file.write(chunk)
# List of urls to get images from
urls = [...]
conn = aiohttp.TCPConnector(limit=3)
loop = asyncio.get_event_loop()
session = aiohttp.ClientSession(connector=conn, loop=loop)
loop.run_until_complete(asyncio.gather(*(get_images(url, session=session) for url in urls)))
The problem is that I put in the print statement to show when each request is made, and it fires nearly 21 requests at once instead of the 3 I want to limit it to (i.e., once one image finishes downloading, it should move on to the next url in the list). I'm just wondering what I'm doing wrong.
Your limit setting works correctly. You made a mistake while debugging.

As Mikhail Gerasimov pointed out in the comments, you put the print() call in the wrong place: it has to go inside the session.get() context.
To make sure the limit is respected, I tested your code against a simple logging server, and the test shows that the server receives exactly the number of connections you set in TCPConnector. Here is the test:
import asyncio
import aiohttp

loop = asyncio.get_event_loop()

class SilentServer(asyncio.Protocol):
    def connection_made(self, transport):
        # We will know when the connection is actually made:
        print('SERVER |', transport.get_extra_info('peername'))

async def get_images(url, session):
    chunk_size = 100
    # This log doesn't guarantee that we will connect,
    # session.get() will freeze if you reach TCPConnector limit
    print(f'CLIENT | Making request to {url}')
    async with session.get(url=url) as r:
        while True:
            chunk = await r.content.read(chunk_size)
            if not chunk:
                break

urls = [f'http://127.0.0.1:1337/{x}' for x in range(20)]

conn = aiohttp.TCPConnector(limit=3)
session = aiohttp.ClientSession(connector=conn, loop=loop)

async def test():
    await loop.create_server(SilentServer, '127.0.0.1', 1337)
    await asyncio.gather(*(get_images(url, session=session) for url in urls))

loop.run_until_complete(test())