Dynamic start_urls in scrapy

lea*_*yUI 12 web-crawler scrapy

I am using scrapy to crawl multiple pages on a website. The variable start_urls is used to define the pages to be crawled. I would initially start with the first page, so I define start_urls = [1st page] in the file example_spider.py.

After obtaining more information from the first page, I determine the next pages to crawl and assign start_urls accordingly. So I have to overwrite example_spider.py with the change start_urls = [1st page, 2nd page, ..., Kth page], and then run scrapy crawl again.

Is this the best approach, or is there a better way to dynamically assign start_urls using the scrapy API without having to overwrite example_spider.py? Thanks.

war*_*iuc 22

The start_urls class attribute contains the start URLs - nothing more. Once you have extracted the URLs of other pages you want to scrape - yield the corresponding requests from the parse callback, each with [another] callback:

import urlparse

from scrapy import log
from scrapy.http import Request
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider


class Spider(BaseSpider):
    name = 'my_spider'
    start_urls = ['http://www.domain.com/']
    allowed_domains = ['domain.com']

    def parse(self, response):
        '''Parse the main page and extract category links.'''
        hxs = HtmlXPathSelector(response)
        urls = hxs.select("//*[@id='tSubmenuContent']/a[position()>1]/@href").extract()
        for url in urls:
            url = urlparse.urljoin(response.url, url)
            self.log('Found category url: %s' % url)
            yield Request(url, callback=self.parseCategory)

    def parseCategory(self, response):
        '''Parse a category page and extract links of the items.'''
        hxs = HtmlXPathSelector(response)
        links = hxs.select("//*[@id='_list']//td[@class='tListDesc']/a/@href").extract()
        for link in links:
            itemLink = urlparse.urljoin(response.url, link)
            self.log('Found item link: %s' % itemLink, log.DEBUG)
            yield Request(itemLink, callback=self.parseItem)

    def parseItem(self, response):
        ...

If you still want to customize the creation of start requests, override the method BaseSpider.start_requests().
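A minimal sketch of such an override, assuming the same old BaseSpider API as the code above; the file name seed_urls.txt and its one-URL-per-line format are hypothetical stand-ins for whatever source you compute the pages from:

from scrapy.http import Request
from scrapy.spider import BaseSpider


class DynamicStartSpider(BaseSpider):
    name = 'dynamic_start_spider'
    allowed_domains = ['domain.com']

    def start_requests(self):
        # Build the seed requests at crawl time instead of hardcoding
        # start_urls. 'seed_urls.txt' is a hypothetical file holding
        # one URL per line.
        with open('seed_urls.txt') as f:
            for line in f:
                url = line.strip()
                if url:
                    yield Request(url, callback=self.parse)

    def parse(self, response):
        '''Handle each dynamically generated start URL.'''
        ...

This way the list of pages is computed when the crawl starts, so you never have to edit example_spider.py between runs.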

  • @WilliamKinaan [`from scrapy.http import Request`](http://doc.scrapy.org/en/latest/topics/request-response.html#request-objects) (3 upvotes)