Eug*_*rny 19 python twisted scrapy
I followed this guide http://doc.scrapy.org/en/0.16/topics/practices.html#run-scrapy-from-a-script to run Scrapy from my own script. Here is part of my script:
# imports implied by the rest of the script (Scrapy 0.16-era API):
from twisted.internet import reactor
from scrapy import log
from scrapy.crawler import Crawler
from scrapy.settings import Settings

crawler = Crawler(Settings(settings))  # `settings` is defined earlier in my script
crawler.configure()
spider = crawler.spiders.create(spider_name)  # look the spider up by name
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run()
print "It can't be printed out!"
It works as far as the crawl itself goes: it visits the pages, scrapes the required information and stores the output JSON where I told it to (via FEED_URI). But when the spider finishes its work (I can see that from the numbers in the output JSON), execution of my script does not resume. It's probably not a Scrapy problem; the answer must lie somewhere in the Twisted reactor. How can I release the thread of execution?
Ste*_*oth 28
You will need to stop the reactor when the spider finishes. You can accomplish this by listening for the spider_closed signal:
from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy.xlib.pydispatch import dispatcher
from testspiders.spiders.followall import FollowAllSpider

def stop_reactor():
    reactor.stop()

# stop the Twisted reactor as soon as the spider closes
dispatcher.connect(stop_reactor, signal=signals.spider_closed)

spider = FollowAllSpider(domain='scrapinghub.com')
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
log.msg('Running reactor...')
reactor.run()  # the script will block here until the spider is closed
log.msg('Reactor stopped.')
The command-line log output should look something like this:
stav@maia:/srv/scrapy/testspiders$ ./api
2013-02-10 14:49:38-0600 [scrapy] INFO: Running reactor...
2013-02-10 14:49:47-0600 [followall] INFO: Closing spider (finished)
2013-02-10 14:49:47-0600 [followall] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 23934,...}
2013-02-10 14:49:47-0600 [followall] INFO: Spider closed (finished)
2013-02-10 14:49:47-0600 [scrapy] INFO: Reactor stopped.
stav@maia:/srv/scrapy/testspiders$
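One caveat worth adding (my own note, not part of the original answer): spider_closed fires once per spider, so if you run several spiders in the same reactor, wiring reactor.stop straight to the signal will tear everything down as soon as the first spider finishes. A minimal sketch of counting closes instead, using the same 0.16-era dispatcher API:

from twisted.internet import reactor
from scrapy import signals
from scrapy.xlib.pydispatch import dispatcher

open_spiders = []

def spider_closed(spider):
    # stop the reactor only after the last spider has closed
    open_spiders.remove(spider)
    if not open_spiders:
        reactor.stop()

dispatcher.connect(spider_closed, signal=signals.spider_closed)

# for each spider you schedule, register it before crawler.start():
#     open_spiders.append(spider)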
In Scrapy 0.19.x you should do it like this:
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from testspiders.spiders.followall import FollowAllSpider
from scrapy.utils.project import get_project_settings

spider = FollowAllSpider(domain='scrapinghub.com')
settings = get_project_settings()
crawler = Crawler(settings)
# connect reactor.stop directly; no pydispatch dispatcher needed in 0.19.x
crawler.signals.connect(reactor.stop, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start()
reactor.run()  # the script will block here until the spider_closed signal is sent
Note these lines:
settings = get_project_settings()
crawler = Crawler(settings)
Without them, your spider will not use your project settings and will not save the items. It took me a while to figure out why the example in the documentation wasn't saving my items. I sent a pull request to fix the doc example.
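If you also want to tweak individual settings from the script, the 0.19-era settings object exposed an overrides dict (later removed in favor of set(), so check your version). A minimal sketch; the feed URI and format below are hypothetical examples, not values from the original answer:

# build on the project settings, then override per-run values
settings = get_project_settings()
settings.overrides['FEED_URI'] = 'items.json'   # hypothetical output path
settings.overrides['FEED_FORMAT'] = 'json'
crawler = Crawler(settings)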
Yet another way is to just call the command directly from your script:
from scrapy import cmdline
cmdline.execute("scrapy crawl followall".split())  # followall is the spider's name
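One caveat with this last approach (my addition, so verify against your Scrapy version): cmdline.execute runs the command in-process and, as far as I know, ends by calling sys.exit, so control never returns to your script and anything after that call will not run.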