Where to put BeautifulSoup code in an asyncio web-scraping application

Tags: python, asynchronous, beautifulsoup, python-asyncio, aiohttp

I need to scrape and extract the raw text of the body paragraphs of many (5-10k per day) news articles. I have written some threaded code, but given the highly I/O-bound nature of this project, I am dabbling in asyncio. The snippet below is no faster than a single-threaded version, and far worse than my threaded version. Can anyone tell me what I am doing wrong? Thank you!

import aiohttp
from bs4 import BeautifulSoup
from unicodedata import normalize

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

async def scrape_urls(urls):
    results = []
    tasks = []
    async with aiohttp.ClientSession() as session:
        for url in urls:
            html = await fetch(session, url)
            soup = BeautifulSoup(html, 'html.parser')
            body = soup.find('div', attrs={'class': 'entry-content'})
            paras = [normalize('NFKD', para.get_text())
                     for para in body.find_all('p')]
            results.append(paras)
    return results


await means "wait until the result is ready", so when you await the fetch in each loop iteration, you request (and get) sequential execution. To parallelize the fetching, you need to spawn each fetch as a background task using something like asyncio.create_task(fetch(...)) and then await them, similar to how you'd do it with threads; a sketch of that variant appears after the next example. Or even more simply, you can let the asyncio.gather convenience function do it for you. For example (untested):

import asyncio
import aiohttp
from bs4 import BeautifulSoup
from unicodedata import normalize

async def fetch(session, url):
    async with session.get(url) as response:
        return await response.text()

def parse(html):
    soup = BeautifulSoup(html, 'html.parser')
    body = soup.find('div', attrs={'class': 'entry-content'})
    return [normalize('NFKD', para.get_text())
            for para in body.find_all('p')]

async def fetch_and_parse(session, url):
    html = await fetch(session, url)
    paras = parse(html)
    return paras

async def scrape_urls(urls):
    async with aiohttp.ClientSession() as session:
        # gather schedules all the coroutines concurrently and
        # collects their results in the original order
        return await asyncio.gather(
            *(fetch_and_parse(session, url) for url in urls)
        )
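For comparison, here is a sketch of the create_task variant mentioned above, which spawns all the downloads up-front and only then awaits them (equally untested):

async def scrape_urls(urls):
    async with aiohttp.ClientSession() as session:
        # spawn every download first so they all run concurrently
        tasks = [asyncio.create_task(fetch_and_parse(session, url))
                 for url in urls]
        # awaiting them in order collects the results in order; the
        # awaits overlap with the tasks still running in the background
        return [await task for task in tasks]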

If you find that this still runs slower than the multi-threaded version, it is possible that the parsing of HTML is slowing down the IO-related work. (Asyncio runs everything in a single thread by default.) To prevent CPU-bound code from interfering with asyncio, you can move the parsing to a separate thread using run_in_executor:

async def fetch_and_parse(session, url):
    html = await fetch(session, url)
    loop = asyncio.get_event_loop()
    # run parse(html) in a separate thread, and
    # resume this coroutine when it completes
    paras = await loop.run_in_executor(None, parse, html)
    return paras

Note that run_in_executor must be awaited because it returns an awaitable that is "woken up" when the background thread completes the given assignment. As this version uses asyncio for IO and threads for parsing, it should run about as fast as your threaded version, but scale to a much larger number of parallel downloads.
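If you'd rather cap the number of parser threads than rely on the default executor, the first argument of run_in_executor accepts an explicit executor instead of None. A minimal sketch, where the max_workers value is an arbitrary choice:

import concurrent.futures

_executor = concurrent.futures.ThreadPoolExecutor(max_workers=4)

async def fetch_and_parse(session, url):
    html = await fetch(session, url)
    loop = asyncio.get_event_loop()
    # at most 4 parses run at a time; further jobs wait in the queue
    paras = await loop.run_in_executor(_executor, parse, html)
    return paras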

Finally, if you want the parsing to actually run in parallel across multiple cores, you can use multiprocessing instead:

import concurrent.futures

_pool = concurrent.futures.ProcessPoolExecutor()

async def fetch_and_parse(session, url):
    html = await fetch(session, url)
    loop = asyncio.get_event_loop()
    # run parse(html) in a separate process, and
    # resume this coroutine when it completes
    paras = await loop.run_in_executor(_pool, parse, html)
    return paras
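Whichever variant you pick, the entry point looks the same. A hypothetical driver, where the urls list stands in for however you collect your article links:

def main():
    urls = ["https://example.com/article-1",   # placeholder URLs
            "https://example.com/article-2"]
    results = asyncio.run(scrape_urls(urls))
    print(len(results), "articles parsed")

# the __main__ guard matters for ProcessPoolExecutor: worker processes
# may re-import this module, and the guard keeps them from re-running
# the scrape recursively
if __name__ == "__main__":
    main()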