Urllib2 and BeautifulSoup: a nice couple, but too slow - urllib3 & threads?

Car*_*to_ 6 python performance multithreading urllib2 beautifulsoup

I was looking for a way to optimize my code when I heard some good things about threads and urllib3. Apparently, people disagree about which solution is the best.

The problem with my script below is the execution time: it is so slow!

Step 1: I fetch this page http://www.cambridgeesol.org/institutions/results.php?region=Afghanistan&type=&BULATS=on

Step 2: I parse the page with BeautifulSoup

Step 3: I put the data into an Excel document

Step 4: I do it again, and again, for all the countries in my list (a big list); I just change "Afghanistan" in the URL to another country.

Here is my code:

import urllib
import xlwt # assuming xlwt is the Excel library behind wb/ws
from BeautifulSoup import BeautifulSoup as soup

name_excel = "BULATS_IA" # output filename (assumed, not shown in the original snippet)
wb = xlwt.Workbook()
ws = wb.add_sheet("BULATS_IA") # We add a new tab in the excel doc
x = 0 # We need x and y for pulling the data into the excel doc
y = 0
Countries_List = ['Afghanistan','Albania','Andorra','Argentina','Armenia','Australia','Austria','Azerbaijan','Bahrain','Bangladesh','Belgium','Belize','Bolivia','Bosnia and Herzegovina','Brazil','Brunei Darussalam','Bulgaria','Cameroon','Canada','Central African Republic','Chile','China','Colombia','Costa Rica','Croatia','Cuba','Cyprus','Czech Republic','Denmark','Dominican Republic','Ecuador','Egypt','Eritrea','Estonia','Ethiopia','Faroe Islands','Fiji','Finland','France','French Polynesia','Georgia','Germany','Gibraltar','Greece','Grenada','Hong Kong','Hungary','Iceland','India','Indonesia','Iran','Iraq','Ireland','Israel','Italy','Jamaica','Japan','Jordan','Kazakhstan','Kenya','Kuwait','Latvia','Lebanon','Libya','Liechtenstein','Lithuania','Luxembourg','Macau','Macedonia','Malaysia','Maldives','Malta','Mexico','Monaco','Montenegro','Morocco','Mozambique','Myanmar (Burma)','Nepal','Netherlands','New Caledonia','New Zealand','Nigeria','Norway','Oman','Pakistan','Palestine','Papua New Guinea','Paraguay','Peru','Philippines','Poland','Portugal','Qatar','Romania','Russia','Saudi Arabia','Serbia','Singapore','Slovakia','Slovenia','South Africa','South Korea','Spain','Sri Lanka','Sweden','Switzerland','Syria','Taiwan','Thailand','Trinadad and Tobago','Tunisia','Turkey','Ukraine','United Arab Emirates','United Kingdom','United States','Uruguay','Uzbekistan','Venezuela','Vietnam']
Longueur = len(Countries_List)

for Countries in Countries_List:
    y = 0
    # Open the page with the name of the corresponding country in the URL
    # (urllib.quote escapes spaces in names like "Bosnia and Herzegovina")
    url = "http://www.cambridgeesol.org/institutions/results.php?region=%s&type=&BULATS=on" % urllib.quote(Countries)
    htmlSource = urllib.urlopen(url).read()
    s = soup(htmlSource)
    tableGood = s.findAll('table')
    try:
        rows = tableGood[3].findAll('tr')
        for tr in rows:
            cols = tr.findAll('td')
            y = 0
            x = x + 1
            for td in cols:
                hum = td.text
                ws.write(x, y, hum)
                y = y + 1
        wb.save("%s.xls" % name_excel) # save once per country, not once per cell
    except IndexError:
        pass

So I know everything is not perfect, but I am looking forward to learning new things in Python! The script is very slow, because urllib2 is not that fast, and neither is BeautifulSoup. For the soup part I guess I can't really make it better, but for urllib2 I don't know.
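One thing that might actually help on the soup side is BeautifulSoup's SoupStrainer, which makes the parser build only the tags you need instead of the whole document tree. A minimal sketch against the same page, assuming BeautifulSoup 3 and its parseOnlyThese keyword:

import urllib
from BeautifulSoup import BeautifulSoup, SoupStrainer

# Build only the <table> tags instead of the whole document tree
tables_only = SoupStrainer('table')
htmlSource = urllib.urlopen("http://www.cambridgeesol.org/institutions/results.php?region=Afghanistan&type=&BULATS=on").read()
s = BeautifulSoup(htmlSource, parseOnlyThese=tables_only)
tableGood = s.findAll('table') # same lookup as before, on a much smaller tree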

Edit 1: Multiprocessing useless with urllib2? It seems interesting in my case. What do you guys think about this potential solution?

# Make sure that the queue is thread-safe!!
# (self.queue is assumed to be a Queue.Queue, which already is)

import urllib2

def producer(self):
    # Only need one producer, although you could have multiple
    with open('urllist.txt', 'r') as fh:
        for line in fh:
            self.queue.put(line.strip())

def consumer(self):
    # Fire up N of these babies for some speed
    while True:
        url = self.queue.get()
        dh = urllib2.urlopen(url)
        with open('/dev/null', 'w') as fh: # gotta put it somewhere
            fh.write(dh.read())
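For completeness, here is a minimal sketch of how those two functions could be wired together with just the standard library (Python 2's threading and Queue modules; the Fetcher class name and the NUM_CONSUMERS value are my own placeholders):

import threading
import urllib2
import Queue

NUM_CONSUMERS = 5 # assumed worker count, tune to taste

class Fetcher(object):
    def __init__(self):
        self.queue = Queue.Queue() # thread-safe out of the box

    def producer(self):
        with open('urllist.txt', 'r') as fh:
            for line in fh:
                self.queue.put(line.strip())

    def consumer(self):
        while True:
            url = self.queue.get()
            data = urllib2.urlopen(url).read() # fetch and discard, as above
            self.queue.task_done() # tell join() this item is finished

f = Fetcher()
for _ in range(NUM_CONSUMERS):
    t = threading.Thread(target=f.consumer)
    t.daemon = True # daemon threads so the endless loops don't block exit
    t.start()
f.producer()
f.queue.join() # block until every queued URL has been fetched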

Edit 2: URLLIB3. Can anyone tell me more about it?

Re-use the same socket connection for multiple requests (HTTPConnectionPool and HTTPSConnectionPool) (with optional client-side certificate verification). https://github.com/shazow/urllib3

Since I am requesting the same website 122 times for different pages, I guess reusing the same socket connection could be interesting, am I wrong? Couldn't it be faster? ...

import urllib3
from BeautifulSoup import BeautifulSoup as soup

http = urllib3.PoolManager()
r = http.request('GET', 'http://www.bulats.org') # first request opens the connection
for Pages in Pages_List: # Pages_List defined elsewhere
    # subsequent requests to the same host reuse the pooled socket
    r = http.request('GET', 'http://www.bulats.org/agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=%s' % (Pages))
    s = soup(r.data)
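And since everything here goes to a single host, urllib3 can also hand out a pool pinned to that host directly. A small sketch, assuming urllib3's connection_from_url helper; the maxsize value is my guess:

import urllib3

# One pool dedicated to bulats.org; maxsize caps how many sockets are kept alive
pool = urllib3.connection_from_url('http://www.bulats.org', maxsize=1)
r = pool.request('GET', '/agents/find-an-agent?field_continent_tid=All&field_country_tid=All&page=0')
print r.status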

sha*_*zow 9

Consider using something like workerpool. Referring to the Mass Downloader example, combined with urllib3 it would look something like:

import workerpool
import urllib3

URL_LIST = [] # Fill this from somewhere

NUM_SOCKETS = 3
NUM_WORKERS = 5

# We want a few more workers than sockets so that they have extra
# time to parse things and such.

http = urllib3.PoolManager(maxsize=NUM_SOCKETS)
workers = workerpool.WorkerPool(size=NUM_WORKERS)

class MyJob(workerpool.Job):
    def __init__(self, url):
        self.url = url

    def run(self):
        r = http.request('GET', self.url)
        # ... do parsing stuff here


for url in URL_LIST:
    workers.put(MyJob(url))

# Send shutdown jobs to all threads, and wait until all the jobs have been completed
# (If you don't do this, the script might hang due to a rogue undead thread.)
workers.shutdown()
workers.wait()

You may notice from the Mass Downloader examples that there are multiple ways of doing this. I chose this particular example just because it's the least magical, but any of the other strategies are valid as well.
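Applied to the question, run() is where the BeautifulSoup work would go. A sketch that pushes parsed rows onto a thread-safe queue so the main thread can do the Excel writing (xlwt is not thread-safe); the results_queue and ParseJob names are my own, and http and soup are reused from the snippets above:

import Queue

results_queue = Queue.Queue() # hypothetical: hands parsed rows back to the main thread

class ParseJob(workerpool.Job):
    def __init__(self, url):
        self.url = url

    def run(self):
        r = http.request('GET', self.url)
        s = soup(r.data)
        try:
            for tr in s.findAll('table')[3].findAll('tr'):
                # only queue the data here; let the main thread write the Excel file
                results_queue.put([td.text for td in tr.findAll('td')])
        except IndexError:
            pass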

Disclaimer: I am the author of urllib3 and workerpool.