How to pass two user-defined arguments to a Scrapy spider

Kur*_*eek 3 python scrapy

Following *How to pass a user defined argument in scrapy spider*, I wrote the following simple spider:

import scrapy

class Funda1Spider(scrapy.Spider):
    name = "funda1"
    allowed_domains = ["funda.nl"]

    def __init__(self, place='amsterdam'):
        self.start_urls = ["http://www.funda.nl/koop/%s/" % place]

    def parse(self, response):
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)

This seems to work; for example, if I run it from the command line with

scrapy crawl funda1 -a place=rotterdam

it produces a file rotterdam.html containing the contents of http://www.funda.nl/koop/rotterdam/. Next, I wanted to extend it so that a subpage can be specified, e.g. http://www.funda.nl/koop/rotterdam/p2/. I tried the following:

import scrapy

class Funda1Spider(scrapy.Spider):
    name = "funda1"
    allowed_domains = ["funda.nl"]

    def __init__(self, place='amsterdam', page=''):
        self.start_urls = ["http://www.funda.nl/koop/%s/p%s/" % (place, page)]

    def parse(self, response):
        filename = response.url.split("/")[-2] + '.html'
        with open(filename, 'wb') as f:
            f.write(response.body)
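As an aside, the filename logic in `parse` can be checked in plain Python, independent of Scrapy: `response.url.split("/")[-2]` picks the next-to-last path segment, because the trailing slash leaves an empty string at the end of the split:

```python
# Mimic response.url.split("/")[-2] for the URLs above.
url = "http://www.funda.nl/koop/rotterdam/"
parts = url.split("/")  # ['http:', '', 'www.funda.nl', 'koop', 'rotterdam', '']
print(parts[-2] + ".html")  # rotterdam.html

# For a subpage URL, the same rule yields the page segment instead:
url = "http://www.funda.nl/koop/rotterdam/p2/"
print(url.split("/")[-2] + ".html")  # p2.html
```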

However, if I try to run it with

scrapy crawl funda1 -a place=rotterdam page=2

I get the following error:

crawl: error: running 'scrapy crawl' with more than one spider is no longer supported

I don't quite understand this error message, since I'm not trying to crawl two spiders; I'm only trying to pass two keyword arguments to modify start_urls. How can I make this work?

Gra*_*rus 5

When supplying more than one argument, you need to prefix each of them with -a.

The correct command for your case is:

scrapy crawl funda1 -a place=rotterdam -a page=2
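For context, each `-a name=value` pair is delivered to the spider's `__init__` as a string keyword argument. The sketch below (plain Python, no Scrapy needed; `build_start_url` is a hypothetical helper mirroring your `__init__`) shows what the fixed command produces:

```python
def build_start_url(place='amsterdam', page=''):
    # Hypothetical helper mirroring the spider's __init__.
    # Note: -a values always arrive as strings, so page=2 becomes "2".
    return "http://www.funda.nl/koop/%s/p%s/" % (place, page)

# Equivalent of: scrapy crawl funda1 -a place=rotterdam -a page=2
print(build_start_url(place='rotterdam', page='2'))
# http://www.funda.nl/koop/rotterdam/p2/
```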