Dan*_*e B 49 python multithreading callback urlfetch python-multithreading
I spent a whole day looking for the simplest possible multithreaded URL fetcher in Python, but most of the scripts I found use queues, multiprocessing, or complex libraries.
Finally I wrote one myself, which I'm reporting as an answer. Please feel free to suggest any improvements.
I figure other people may have been looking for something similar.
aba*_*ert 44
Simplifying your original version as far as possible:
import threading
import urllib2
import time
start = time.time()
urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]
def fetch_url(url):
    urlHandler = urllib2.urlopen(url)
    html = urlHandler.read()
    print "'%s' fetched in %ss" % (url, (time.time() - start))
threads = [threading.Thread(target=fetch_url, args=(url,)) for url in urls]
for thread in threads:
    thread.start()
for thread in threads:
    thread.join()
print "Elapsed Time: %s" % (time.time() - start)
The only new tricks here are:

- You don't need a Thread subclass, just a target function (see the contrast sketch below).
- join already tells you when all the threads have finished.
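For contrast, a minimal sketch (with a hypothetical FetchThread class) of what the subclass approach would look like, i.e. what passing target= spares you:

import threading

class FetchThread(threading.Thread):
    # Hypothetical subclass equivalent of Thread(target=fetch_url, args=(url,))
    def __init__(self, url):
        threading.Thread.__init__(self)  # Python 2 style base-class init
        self.url = url

    def run(self):
        # run() is what start() invokes; reuses the fetch_url worker above
        fetch_url(self.url)

# threads = [FetchThread(url) for url in urls] would then replace the target= line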
jfs*_*jfs 29

multiprocessing has a thread pool that doesn't start other processes:
#!/usr/bin/env python
from multiprocessing.pool import ThreadPool
from time import time as timer
from urllib2 import urlopen
urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]
def fetch_url(url):
    try:
        response = urlopen(url)
        return url, response.read(), None
    except Exception as e:
        return url, None, e
start = timer()
results = ThreadPool(20).imap_unordered(fetch_url, urls)
for url, html, error in results:
    if error is None:
        print("%r fetched in %ss" % (url, timer() - start))
    else:
        print("error fetching %r: %s" % (url, error))
print("Elapsed Time: %s" % (timer() - start,))
Advantages compared to the Thread-based solution:

- ThreadPool lets you limit the maximum number of concurrent connections (20 in the code example).
- The same code works on Python 3 if you change the import to from urllib.request import urlopen (a sketch follows).
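For reference, a minimal Python 3 sketch of the same pool, assuming (per the note above) that swapping the urlopen import is the only change needed:

#!/usr/bin/env python3
from multiprocessing.pool import ThreadPool
from urllib.request import urlopen  # Python 3 replacement for urllib2.urlopen

urls = ["http://www.google.com", "http://www.apple.com"]  # shortened list for the sketch

def fetch_url(url):
    # Same worker as above: return the error instead of raising it
    try:
        response = urlopen(url)
        return url, response.read(), None
    except Exception as e:
        return url, None, e

pool = ThreadPool(20)  # caps concurrent connections at 20, as in the example
for url, html, error in pool.imap_unordered(fetch_url, urls):
    if error is None:
        print("%r fetched" % url)
    else:
        print("error fetching %r: %s" % (url, error))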
aba*_*ert 14

The main example in the concurrent.futures docs does everything you're looking for, more simply. Plus, it can handle huge numbers of URLs by only running 5 at a time, and it handles errors much more nicely.
Of course this module is only built in with Python 3.2 or later... but if you're using 2.5-3.1, you can install the futures backport from PyPI. All you need to change in the example code is to search and replace concurrent.futures with futures and, for 2.x, urllib.request with urllib2.
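A sketch of the same two substitutions written as compatibility imports instead, assuming the backport is importable as futures as described above:

try:
    import concurrent.futures as futures  # built in on Python 3.2+
except ImportError:
    import futures  # 2.5-3.1: the PyPI backport

try:
    from urllib.request import urlopen  # Python 3
except ImportError:
    from urllib2 import urlopen  # Python 2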
Here's the example backported to 2.x, modified to use your URL list and to add the timing:
import concurrent.futures
import urllib2
import time
start = time.time()
urls = ["http://www.google.com", "http://www.apple.com", "http://www.microsoft.com", "http://www.amazon.com", "http://www.facebook.com"]
# Retrieve a single page and report the url and contents
def load_url(url, timeout):
    conn = urllib2.urlopen(url, timeout=timeout)
    return conn.read()  # urllib2 responses have read(), not readall()
# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in urls}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print '%r generated an exception: %s' % (url, exc)
        else:
            print '"%s" fetched in %ss' % (url, (time.time() - start))
print "Elapsed Time: %ss" % (time.time() - start)
But you can make this even simpler. Really, all you need is:
def load_url(url):
    # timeout=60 assumed here; the original fragment left timeout undefined
    conn = urllib2.urlopen(url, timeout=60)
    data = conn.read()
    print '"%s" fetched in %ss' % (url, (time.time() - start))
    return data

with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    pages = executor.map(load_url, urls)
print "Elapsed Time: %ss" % (time.time() - start)