The official documentation gives several ways to run a Scrapy crawler from code:
import scrapy
from scrapy.crawler import CrawlerProcess

class MySpider(scrapy.Spider):
    # Your spider definition
    ...

process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})

process.crawl(MySpider)
process.start() # the script will block here until the crawling is finished
But all of them block the script until the crawl is finished. What is the simplest way to run a crawler from Python in a non-blocking, asynchronous way?
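For reference, the Scrapy docs point to CrawlerRunner for this: its crawl() returns a Twisted Deferred instead of blocking. The reactor still has to run somewhere, though; a rough sketch of that pattern (not the solution I ended up using) puts it on a background thread:

from threading import Thread

from scrapy.crawler import CrawlerRunner
from scrapy.utils.log import configure_logging
from twisted.internet import reactor

configure_logging()
runner = CrawlerRunner({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)'
})
d = runner.crawl(MySpider)           # returns a Deferred immediately
d.addBoth(lambda _: reactor.stop())  # stop the reactor once the crawl ends

# reactor.run() itself blocks, so keep it off the main thread
Thread(target=reactor.run, kwargs={'installSignalHandlers': False}).start()

The catch is that a Twisted reactor can only be started once per process, so this cannot be called repeatedly, which is part of what the solution below works around.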
I tried every solution I could find, and the only one that worked for me was this one. But to make it work with Scrapy 1.1rc1 I had to tweak it a little:
from scrapy.crawler import Crawler
from scrapy import signals
from scrapy.utils.project import get_project_settings
from twisted.internet import reactor
from billiard import Process

class CrawlerScript(Process):
    def __init__(self, spider):
        Process.__init__(self)
        settings = get_project_settings()
        self.crawler = Crawler(spider.__class__, settings)
        # stop the reactor (and with it the child process) when the spider closes
        self.crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
        self.spider = spider

    def run(self):
        # this runs in the child process, so the blocking reactor.run()
        # never touches the caller's thread
        self.crawler.crawl()  # the Crawler builds its spider from spider.__class__
        reactor.run()

def crawl_async():
    spider = MySpider()
    crawler = CrawlerScript(spider)
    crawler.start()  # spawn the child process; returns immediately
    crawler.join()   # wait for the crawl to finish; drop this to return right away
So now when I call crawl_async, it starts crawling and does not block my current thread. Since a Twisted reactor cannot be restarted once it has stopped, running each crawl in its own billiard process sidesteps that limitation entirely. I am completely new to Scrapy, so this may not be a great solution, but it works for me.
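As a quick illustration (a hypothetical driver script, assuming the MySpider and crawl_async definitions above): each call spins up a fresh billiard process with its own reactor, so several crawls can run back to back, something that would raise ReactorNotRestartable if the reactor were started twice in one process.

if __name__ == '__main__':
    for i in range(3):
        print('starting crawl %d' % i)
        crawl_async()  # with join() in place, each crawl finishes before the next starts
    print('all crawls done; the main process never ran a reactor')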
I used these versions of the libraries:
cffi==1.5.0
Scrapy==1.1rc1
Twisted==15.5.0
billiard==3.3.0.22