I created a spider and connected a method to the spider_idle signal.

How do I add a request manually? I can't simply return items from parse — parse isn't running in this case, because all known URLs have already been parsed. I have a method that generates new requests, and I'd like to run it from the spider_idle callback to add the requests it creates.
from scrapy.spider import BaseSpider
from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals
from scrapy.exceptions import DontCloseSpider

class FooSpider(BaseSpider):
    name = 'foo'

    def __init__(self):
        dispatcher.connect(self.dont_close_me, signals.spider_idle)

    def dont_close_me(self, spider):
        if spider != self:
            return
        # The engine instance will allow me to schedule requests, but
        # how do I get the engine object?
        engine = unknown_get_engine()
        engine.schedule(self.create_request())
        # afterward, ensure we stay alive by raising DontCloseSpider
        raise DontCloseSpider("..I prefer live spiders.")
Update: I've determined that I probably need the ExecutionEngine object, but I don't know how to get hold of it from within the spider, although it is available from a Crawler instance.
Update 2: ..thanks... the crawler is attached as a property by the superclass, so I can just use self.crawler with no extra effort.
Answer by Ste*_*oth (21 votes):
from scrapy.spider import BaseSpider
from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals
from scrapy.exceptions import DontCloseSpider

class FooSpider(BaseSpider):
    def __init__(self, *args, **kwargs):
        super(FooSpider, self).__init__(*args, **kwargs)
        dispatcher.connect(self.dont_close_me, signals.spider_idle)

    def dont_close_me(self, spider):
        if spider != self:
            return
        # self.crawler is attached by the base class and exposes the engine
        self.crawler.engine.crawl(self.create_request(), spider)
        raise DontCloseSpider("..I prefer live spiders.")
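(create_request here is the asker's own helper and isn't shown in the question. A minimal sketch of what such a method might look like, with a placeholder URL and callback, could be:)

from scrapy.http import Request

class FooSpider(BaseSpider):
    # ...
    def create_request(self):
        # Hypothetical helper: build the follow-up request to schedule.
        # The URL and callback are placeholders, not from the question.
        # dont_filter=True keeps the duplicate filter from silently
        # dropping a URL the spider has already seen.
        return Request('http://example.com/next',
                       callback=self.parse, dont_filter=True)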
2016 update:
import scrapy

class FooSpider(scrapy.Spider):
    yet = False

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(FooSpider, cls).from_crawler(crawler, *args, **kwargs)
        # Connect through the crawler's signal manager instead of the
        # deprecated pydispatch dispatcher.
        crawler.signals.connect(spider.idle, signal=scrapy.signals.spider_idle)
        return spider

    def idle(self):
        # Schedule exactly one extra request, the first time the spider idles.
        if not self.yet:
            self.crawler.engine.crawl(self.create_request(), self)
            self.yet = True
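Note that the 2016 version no longer raises DontCloseSpider: once the idle handler pushes a new request through engine.crawl, the engine's own follow-up idle check sees pending work and leaves the spider open by itself. If the re-scheduled URL may already have been crawled, build the request with dont_filter=True so the duplicate filter doesn't drop it.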