A very simple multi-threaded parallel URL fetcher (without a queue)

Dan*_*e B 49 python multithreading callback urlfetch python-multithreading

I spent a whole day looking for the simplest possible multithreaded URL fetcher in Python, but most of the scripts I found use a queue, multiprocessing, or complex libraries.

In the end I wrote one myself, which I am posting as an answer. Please feel free to suggest any improvements.

I imagine other people may have been looking for something similar.

aba*_*ert 44

Simplifying your original version as far as practicable:

import threading
import urllib2
import time

start = time.time()
urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]

def fetch_url(url):
    urlHandler = urllib2.urlopen(url)
    html = urlHandler.read()
    print "'%s' fetched in %ss" % (url, (time.time() - start))

# Create one thread per URL, start them all, then wait for them all to finish.
threads = [threading.Thread(target=fetch_url, args=(url,)) for url in urls]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()

print "Elapsed Time: %s" % (time.time() - start)

The only new tricks here are:

  • Keep track of the threads you create.
  • Don't bother with a thread counter if you just want to know when they are all done; join already tells you that.
  • If you don't need any state or an external API, you don't need a Thread subclass, just a target function (for contrast, see the sketch after this list).

  • I made a point of claiming this was simplified "as far as practicable", because that's the best way to make sure somebody clever comes along and finds a way to simplify it even further, just to make me look silly. :) (3 upvotes)
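For contrast, here is a minimal sketch (not part of the original answer) of what the Thread-subclass variant looks like when you do want to keep per-thread state, such as the fetched HTML; the FetchThread name and the shortened URL list are made up for illustration:

import threading
import urllib2

urls = ["http://www.google.com", "http://www.apple.com"]  # same idea as the list above

# Hypothetical Thread subclass, only needed when you want to keep state
# (here, the fetched HTML) instead of just printing inside the worker.
class FetchThread(threading.Thread):
    def __init__(self, url):
        threading.Thread.__init__(self)
        self.url = url
        self.html = None

    def run(self):
        self.html = urllib2.urlopen(self.url).read()

threads = [FetchThread(url) for url in urls]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
# After join(), each thread still holds its result in thread.html.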

jfs*_*jfs 29

multiprocessing provides a thread pool that doesn't start other processes:

#!/usr/bin/env python
from multiprocessing.pool import ThreadPool
from time import time as timer
from urllib2 import urlopen

urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]

def fetch_url(url):
    # Return a (url, html, error) triple so errors can be reported in the main thread.
    try:
        response = urlopen(url)
        return url, response.read(), None
    except Exception as e:
        return url, None, e

start = timer()
# A pool of 20 worker threads; imap_unordered yields results as soon as they complete.
results = ThreadPool(20).imap_unordered(fetch_url, urls)
for url, html, error in results:
    if error is None:
        print("%r fetched in %ss" % (url, timer() - start))
    else:
        print("error fetching %r: %s" % (url, error))
print("Elapsed Time: %s" % (timer() - start,))

Advantages compared to the Thread-based solution:

  • ThreadPool allows you to limit the maximum number of concurrent connections (20 in the code example)
  • the output is not garbled, because all output goes through the main thread
  • errors are logged
  • the code works on both Python 2 and Python 3 without changes (assuming from urllib.request import urlopen on Python 3); a minimal compatibility sketch follows after the comment below.

  • This is by far the best, fastest and simplest way to go. I have been trying twisted, scrapy and others in both Python 2 and Python 3, and this is simpler and better. (2 upvotes)
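As a supplement to the 2/3-compatibility point above, here is a minimal sketch (not part of the original answer) of the usual try/except import shim that lets the ThreadPool example run unchanged on both versions:

from multiprocessing.pool import ThreadPool  # same thread pool on Python 2 and 3

try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen         # Python 2

# fetch_url and the ThreadPool(20).imap_unordered(...) loop from the answer
# above can then stay exactly as written.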

aba*_*ert 14

The main example in the concurrent.futures docs does everything you want, a lot more simply. Plus, it can handle a huge number of URLs by only doing 5 at a time, and it handles errors much more nicely.

Of course this module is only built into Python 3.2 or later… but if you're using 2.5-3.1, you can just install the futures backport off PyPI. All you need to change in the example code is to search-and-replace concurrent.futures with futures and, for 2.x, urllib.request with urllib2.

Here is the sample backported to 2.x, modified to use your URL list and to add timings:

import concurrent.futures
import urllib2
import time

start = time.time()
urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]

# Retrieve a single page and report the url and contents
def load_url(url, timeout):
    conn = urllib2.urlopen(url, timeout=timeout)
    return conn.read()  # urllib2 responses have read(), not readall()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in urls}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print '%r generated an exception: %s' % (url, exc)
        else:
            print '"%s" fetched in %ss' % (url,(time.time() - start))
print "Elapsed Time: %ss" % (time.time() - start)

But you can make this even simpler. Really, all you need is:

def load_url(url):
    conn = urllib2.urlopen(url, timeout=60)
    data = conn.read()
    print '"%s" fetched in %ss' % (url, (time.time() - start))
    return data

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    pages = executor.map(load_url, urls)

print "Elapsed Time: %ss" % (time.time() - start)
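For completeness, here is a hedged sketch (not part of the original answer) of the same simplified version on Python 3.2+, where concurrent.futures and urllib.request are in the standard library; the shortened URL list is just for illustration:

import concurrent.futures
import time
from urllib.request import urlopen

start = time.time()
urls = ["http://www.google.com", "http://www.apple.com"]  # same idea as the list above

def load_url(url):
    data = urlopen(url, timeout=60).read()
    print('"%s" fetched in %ss' % (url, time.time() - start))
    return data

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # map returns results in input order and re-raises the first worker
    # exception when the results are consumed.
    pages = list(executor.map(load_url, urls))

print("Elapsed Time: %ss" % (time.time() - start))

One design note: because executor.map re-raises the first exception as you iterate the results, the longer as_completed version above is the one to use when you want per-URL error reporting.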