
asyncio web scraping 101: fetching multiple urls with aiohttp

In a previous question, one of the authors of aiohttp kindly suggested a way to fetch multiple URLs with aiohttp, using the new async with syntax from Python 3.5:

import aiohttp
import asyncio

async def fetch(session, url):
    # aiohttp.Timeout is the pre-3.0 timeout context manager
    with aiohttp.Timeout(10):
        async with session.get(url) as response:
            return await response.text()

async def fetch_all(session, urls, loop):
    # asyncio.wait returns a (done, pending) pair of task sets,
    # not the fetched bodies themselves
    results = await asyncio.wait([loop.create_task(fetch(session, url))
                                  for url in urls])
    return results

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    # breaks because of the first url
    urls = ['http://SDFKHSKHGKLHSKLJHGSDFKSJH.com',
            'http://google.com',
            'http://twitter.com']
    with aiohttp.ClientSession(loop=loop) as session:
        the_results = loop.run_until_complete(
            fetch_all(session, urls, loop))
        # do something with the_results

However, when one of the session.get(url) …
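The question is cut off here; judging by the "breaks because of the first url" comment, it concerns one failing request raising an exception and taking the whole batch down. A minimal sketch of one common fix, assuming aiohttp 3.x and Python 3.7+: asyncio.gather with return_exceptions=True returns each task's exception as a result instead of propagating it. The fetch_all signature and the 10-second total timeout are illustrative, not from the original question.

import asyncio
import aiohttp

async def fetch(session, url):
    # ClientTimeout is the aiohttp 3.x replacement for aiohttp.Timeout
    async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as response:
        return await response.text()

async def fetch_all(urls):
    async with aiohttp.ClientSession() as session:
        # return_exceptions=True delivers a failed task's exception as its
        # result, so one bad URL cannot abort the other fetches
        return await asyncio.gather(*(fetch(session, url) for url in urls),
                                    return_exceptions=True)

if __name__ == '__main__':
    urls = ['http://SDFKHSKHGKLHSKLJHGSDFKSJH.com',
            'http://google.com',
            'http://twitter.com']
    results = asyncio.run(fetch_all(urls))
    for url, result in zip(urls, results):
        # each result is either the page body or the exception that was raised
        print(url, '->', repr(result)[:80])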

python web-scraping python-3.x python-asyncio aiohttp

18 votes · 2 answers · 6347 views