Is it possible to run another spider from inside a Scrapy spider?

esf*_*sfy 7 python multiprocessing scrapy

Right now I have 2 spiders, and what I want to do is:

  1. Spider 1 goes to url1, and if url2 shows up there, call spider 2 with url2. Spider 1 also saves url1's content through a pipeline (see the sketch below).
  2. Spider 2 goes to url2 and does its own thing.

Because both spiders are fairly complex, I'd like to keep them separate.
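For the "save via a pipeline" part of step 1, all I mean is a standard item pipeline along these lines (the class name and output file here are only illustrative, not my real code):

import json

class SaveUrl1Pipeline(object):
    """Writes every item scraped from url1 to a JSON-lines file."""

    def open_spider(self, spider):
        self.file = open('url1_items.jl', 'w')

    def close_spider(self, spider):
        self.file.close()

    def process_item(self, item, spider):
        # one JSON object per line, then hand the item on unchanged
        self.file.write(json.dumps(dict(item)) + '\n')
        return item

# enabled in settings.py, e.g.:
# ITEM_PIPELINES = {'myproject.pipelines.SaveUrl1Pipeline': 300}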

What I've tried, using scrapy crawl:

def parse(self, response):
    p = multiprocessing.Process(
        target=self.testfunc)  # pass the function itself rather than calling it
    p.start()
    p.join()

def testfunc(self):
    settings = get_project_settings()
    crawler = CrawlerRunner(settings)
    crawler.crawl(<spidername>, <arguments>)

It does load the settings, but nothing gets crawled:

2015-08-24 14:13:32 [scrapy] INFO: Enabled extensions: CloseSpider, LogStats, CoreStats, SpiderState
2015-08-24 14:13:32 [scrapy] INFO: Enabled downloader middlewares: DownloadTimeoutMiddleware, UserAgentMiddleware, RetryMiddleware, HttpAuthMiddleware, DefaultHeadersMiddleware, MetaRefreshMiddleware, HttpCompressionMiddleware, RedirectMiddleware, CookiesMiddleware, ChunkedTransferMiddleware, DownloaderStats
2015-08-24 14:13:32 [scrapy] INFO: Enabled spider middlewares: HttpErrorMiddleware, OffsiteMiddleware, RefererMiddleware, UrlLengthMiddleware, DepthMiddleware
2015-08-24 14:13:32 [scrapy] INFO: Spider opened
2015-08-24 14:13:32 [scrapy] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)

The docs have an example of running a spider from a script, but what I'm trying to do is start another spider while already running under the scrapy crawl command.

EDIT: full code

from scrapy.crawler import CrawlerRunner
from scrapy.utils.project import get_project_settings
from twisted.internet import reactor
from multiprocessing import Process
import scrapy
import os


def info(title):
    print(title)
    print('module name:', __name__)
    if hasattr(os, 'getppid'):  # only available on Unix
        print('parent process:', os.getppid())
    print('process id:', os.getpid())


class TestSpider1(scrapy.Spider):
    name = "test1"
    start_urls = ['http://www.google.com']

    def parse(self, response):
        info('parse')
        a = MyClass()
        a.start_work()


class MyClass(object):

    def start_work(self):
        info('start_work')
        p = Process(target=self.do_work)
        p.start()
        p.join()

    def do_work(self):

        info('do_work')
        settings = get_project_settings()
        runner = CrawlerRunner(settings)
        runner.crawl(TestSpider2)
        d = runner.join()
        d.addBoth(lambda _: reactor.stop())
        reactor.run()
        return

class TestSpider2(scrapy.Spider):

    name = "test2"
    start_urls = ['http://www.google.com']

    def parse(self, response):
        info('testspider2')
        return

What I'm hoping for is something like this:

  1. scrapy crawl test1 (for example, when response.status_code is 200:)
  2. from inside test1, call scrapy crawl test2

scr*_*tso 8

I won't go into much depth since this question is really old, but I'll go ahead and drop in this snippet from the official Scrapy docs... you were really close! lol

import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider1(scrapy.Spider):
    # Your first spider definition
    ...

class MySpider2(scrapy.Spider):
    # Your second spider definition
    ...

process = CrawlerProcess()
process.crawl(MySpider1)
process.crawl(MySpider2)
process.start() # the script will block here until all crawling jobs are finished

https://doc.scrapy.org/en/latest/topics/practices.html

Then, with callbacks, you can pass items between the spiders and implement whatever hand-off logic you're talking about.
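If spider 2 should only run when spider 1 actually found url2, the same docs page also shows how to run the crawls sequentially with CrawlerRunner, and the second call can be made conditional. A rough sketch of that idea, continuing from the snippet above (the found_urls list and the start_url argument on MySpider2 are assumptions, not part of the original spiders):

from twisted.internet import reactor, defer
from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from scrapy.utils.project import get_project_settings

found_urls = []  # MySpider1 is assumed to append url2 here when it sees it

configure_logging()
runner = CrawlerRunner(get_project_settings())

@defer.inlineCallbacks
def crawl():
    yield runner.crawl(MySpider1)  # first crawl runs to completion
    if found_urls:
        # extra keyword arguments are passed on to the spider's constructor
        yield runner.crawl(MySpider2, start_url=found_urls[0])
    reactor.stop()

crawl()
reactor.run()  # the script will block here until both crawls have finished

How MySpider1 reports what it found (a module-level list as above, a signal, or items collected by a pipeline) is a design choice; the point is only that the second crawl is scheduled after the first deferred fires, so its arguments can depend on the first crawl's results.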