Scraping with Scrapy and Selenium

puf*_*fin 6 python selenium scrapy

I have a Scrapy spider crawling a site that reloads its content via JavaScript. To move on to the next page to scrape, I've been using Selenium to click the month links at the top of the site.

The problem is that even though my code moves through each link as expected, the spider only scrapes the first month's (Sept) data and returns that same data duplicated for every month.

How can I get around this?

import time

from selenium import webdriver
from scrapy.contrib.spiders.init import InitSpider
from scrapy.selector import HtmlXPathSelector

# Item class defined elsewhere in the project.
from myproject.items import GigsInScotlandMainItem

class GigsInScotlandMain(InitSpider):
    name = 'gigsinscotlandmain'
    allowed_domains = ["gigsinscotland.com"]
    start_urls = ["http://www.gigsinscotland.com"]

    def __init__(self):
        InitSpider.__init__(self)
        self.br = webdriver.Firefox()

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        self.br.get(response.url)
        time.sleep(2.5)
        # Get the string for each month on the page.
        months = hxs.select("//ul[@id='gigsMonths']/li/a/text()").extract()

        for month in months:
            link = self.br.find_element_by_link_text(month)
            link.click()
            time.sleep(5)

            # Get all the divs containing info to be scraped.
            listitems = hxs.select("//div[@class='listItem']")
            for listitem in listitems:
                item = GigsInScotlandMainItem()
                item['artist'] = listitem.select("div[contains(@class, 'artistBlock')]/div[@class='artistdiv']/span[@class='artistname']/a/text()").extract()
                #
                # Get other data ...
                #
                yield item

ale*_*cxe 6

The problem is that you are reusing the HtmlXPathSelector that was defined for the initial response. Redefine it from the Selenium browser's page_source:

...
for month in months:
    link = self.br.find_element_by_link_text(month)
    link.click()
    time.sleep(5)

    hxs = HtmlXPathSelector(self.br.page_source)

    # Get all the divs containing info to be scraped.
    listitems = hxs.select("//div[@class='listItem']")
...
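The key point is that the selector is a snapshot of the HTML as it was when the selector was built; Selenium's clicks change the live DOM, but not that snapshot, so it must be rebuilt from self.br.page_source after every click. A minimal stdlib sketch of the same idea (using html.parser instead of Scrapy's selector; the page snippets are made up for illustration):

```python
from html.parser import HTMLParser

class ListItemCounter(HTMLParser):
    """Counts <div class="listItem"> elements in one page snapshot."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        if tag == "div" and ("class", "listItem") in attrs:
            self.count += 1

def count_list_items(page_source):
    # Each call parses a fresh snapshot, like rebuilding the selector
    # from self.br.page_source after each click.
    parser = ListItemCounter()
    parser.feed(page_source)
    return parser.count

# Simulated page source before and after clicking a month link.
september = '<div class="listItem">gig A</div>'
october = ('<div class="listItem">gig B</div>'
           '<div class="listItem">gig C</div>')

# Parsing the stale September snapshot in every loop iteration would
# return 1 item each time; re-parsing the current source picks up the
# new month's listings.
assert count_list_items(september) == 1
assert count_list_items(october) == 2
```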