par*_*rik — tags: python, selenium, middleware, scrapy, web-scraping
I am using Python with Selenium and Scrapy to crawl a website,
but my script is very slow:
Crawled 1 pages (at 1 pages/min)
I use CSS selectors instead of XPath to optimize the time, and I changed the middlewares:
'tutorial.middlewares.MyCustomDownloaderMiddleware': 543,
Is Selenium just too slow, or should I change something in the settings?
My code:
import time

from scrapy import Request
from pyvirtualdisplay import Display
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

def start_requests(self):
    yield Request(self.start_urls, callback=self.parse)

def parse(self, response):
    display = Display(visible=0, size=(800, 600))
    display.start()
    driver = webdriver.Firefox()
    driver.get("http://www.example.com")
    inputElement = driver.find_element_by_name("OneLineCustomerAddress")
    inputElement.send_keys("75018")
    inputElement.submit()
    catNums = driver.find_elements_by_css_selector("html body div#page div#main.content div#sContener div#menuV div#mvNav nav div.mvNav.bcU div.mvNavLk form.jsExpSCCategories ul.mvSrcLk li")
    # INIT
    driver.find_element_by_css_selector(".mvSrcLk>li:nth-child(1)>label.mvNavSel.mvNavLvl1").click()
    for catNumber in xrange(1, len(catNums) + 1):
        print "\n IN catnumber \n"
        driver.find_element_by_css_selector("ul#catMenu.mvSrcLk> li:nth-child(%s)> label.mvNavLvl1" % catNumber).click()
        time.sleep(5)
        self.parse_articles(driver)
        pages = driver.find_elements_by_xpath('//*[@class="pg"]/ul/li[last()]/a')
        if pages:
            page = driver.find_element_by_xpath('//*[@class="pg"]/ul/li[last()]/a')
            checkText = page.text.strip()
            if len(checkText) > 0:
                pageNums = int(page.text)
                pageNums = pageNums - 1
                for pageNumbers in range(pageNums):
                    WebDriverWait(driver, 10).until(EC.presence_of_element_located((By.ID, "waitingOverlay")))
                    driver.find_element_by_css_selector('.jsNxtPage.pgNext').click()
                    self.parse_articles(driver)
                    time.sleep(5)

def parse_articles(self, driver):
    test = driver.find_elements_by_css_selector('html body div#page div#main.content div#sContener div#sContent div#lpContent.jsTab ul#lpBloc li div.prdtBloc p.prdtBDesc strong.prdtBCat')

def between(self, value, a, b):
    pos_a = value.find(a)
    if pos_a == -1:
        return ""
    pos_b = value.rfind(b)
    if pos_b == -1:
        return ""
    adjusted_pos_a = pos_a + len(a)
    if adjusted_pos_a >= pos_b:
        return ""
    return value[adjusted_pos_a:pos_b]
So there are a few flaws in your code here.
This can be solved quite elegantly with Scrapy's downloader middlewares! You want to create a custom downloader middleware that downloads requests with Selenium rather than with the Scrapy downloader.
For example, I use this:
# middlewares.py
from scrapy.http import HtmlResponse
from selenium import webdriver

class SeleniumDownloader(object):
    def create_driver(self):
        """only start the driver if middleware is ever called"""
        if not getattr(self, 'driver', None):
            self.driver = webdriver.Chrome()

    def process_request(self, request, spider):
        # this is called for every request, but we don't want to render
        # every request in selenium, so use a meta key for those we do want.
        if not request.meta.get('selenium', False):
            return None  # returning None lets the default downloader handle it
        self.create_driver()
        self.driver.get(request.url)
        return HtmlResponse(request.url, body=self.driver.page_source, encoding='utf-8')
Activate your middleware:
# settings.py
DOWNLOADER_MIDDLEWARES = {
    'myproject.middleware.SeleniumDownloader': 13,
}
Then in your spider you can specify which URLs should be downloaded through the Selenium driver by adding the meta argument.
# you can start with selenium
def start_requests(self):
    for url in self.start_urls:
        yield scrapy.Request(url, meta={'selenium': True})

def parse(self, response):
    # this response is rendered by selenium!
    # you can also skip selenium for another response if you wish
    url = response.xpath("//a/@href").extract_first()
    yield scrapy.Request(url)
The advantage of this approach is that your driver starts only once and is used only to download the page source; the rest is handled by Scrapy's proper asynchronous tooling.
The downside is that you can't click buttons and the like, because you are not exposed to the driver. Most of the time you can reverse-engineer the buttons through the network inspector, so you should never need to do any clicking with the driver itself.