Running multiple spiders sequentially

Hua*_*Yin 6 python web-crawler scrapy scrapy-spider

class Myspider1(scrapy.Spider):
    # do something....

class Myspider2(scrapy.Spider):
    # do something...

The above is the structure of my spider.py file. I am trying to run Myspider1 first, and then run Myspider2 multiple times depending on some condition. How can I do that? Any tips?

from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

configure_logging()
runner = CrawlerRunner()

@defer.inlineCallbacks
def crawl():
    yield runner.crawl(Myspider1, arg.....)
    yield runner.crawl(Myspider2, arg.....)
    reactor.stop()

crawl()
reactor.run()

I am trying to use this approach, but I don't know how to run it. Should I run something on the command line (and if so, what command?), or just run the Python file?

Thanks a lot!!!

Qia*_*ang 5

You need to use the Deferred object returned by process.crawl(), which lets you add a callback that fires when a crawl finishes.

Here is my code:

from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings

def start_sequentially(process: CrawlerProcess, crawlers: list):
    print('start crawler {}'.format(crawlers[0].__name__))
    deferred = process.crawl(crawlers[0])
    # When this crawl finishes, recurse to start the next spider in the list
    if len(crawlers) > 1:
        deferred.addCallback(lambda _: start_sequentially(process, crawlers[1:]))

def main():
    crawlers = [Crawler1, Crawler2]
    process = CrawlerProcess(settings=get_project_settings())
    start_sequentially(process, crawlers)
    process.start()  # blocks until every chained crawl has finished
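One caveat with this callback chain: if a crawl fails, the success callback never fires and the remaining spiders are silently skipped. Below is a minimal sketch of a more defensive variant, assuming the Deferred returned by process.crawl() errbacks on failure; addCallbacks is standard Twisted, while log_and_continue is a hypothetical helper of my own:

def start_sequentially(process: CrawlerProcess, crawlers: list):
    print('start crawler {}'.format(crawlers[0].__name__))
    deferred = process.crawl(crawlers[0])

    def run_next(_result):
        # Continue with the rest of the list, if any
        if len(crawlers) > 1:
            start_sequentially(process, crawlers[1:])

    def log_and_continue(failure):
        # Hypothetical error handler: report the failure and keep going
        print('crawler {} failed: {}'.format(crawlers[0].__name__, failure))
        run_next(None)

    # addCallbacks attaches a success callback and an errback in one call
    deferred.addCallbacks(run_next, log_and_continue)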


小智 4

Just run the Python file directly,
e.g. test.py:

import scrapy
from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging

class MySpider1(scrapy.Spider):
    # Your first spider definition
    name = "dmoz1"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Books/"
    ]

    def parse(self, response):
        print("first spider")

class MySpider2(scrapy.Spider):
    # Your second spider definition
    name = "dmoz2"  # give each spider a unique name
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/Resources/"
    ]

    def parse(self, response):
        print("second spider")

configure_logging()
runner = CrawlerRunner()

@defer.inlineCallbacks
def crawl():
    yield runner.crawl(MySpider1)
    yield runner.crawl(MySpider2)
    reactor.stop()

crawl()
reactor.run() # the script will block here until the last crawl call is finished

Now run python test.py > output.txt
Inspecting output.txt, you can see that your spiders ran sequentially.
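
As for the original question's requirement of running Myspider2 multiple times depending on some condition: with the inlineCallbacks approach you can simply loop around the yield. A minimal sketch, where should_run_again() is a hypothetical placeholder for whatever condition you check between runs:

@defer.inlineCallbacks
def crawl():
    yield runner.crawl(MySpider1)
    # Keep re-running MySpider2 until the (hypothetical) condition fails
    while should_run_again():
        yield runner.crawl(MySpider2)
    reactor.stop()

crawl()
reactor.run()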