Can anyone explain how Scrapy calls and handles the result of a Request's callback function?
I understand that Scrapy accepts either a single object (Request, BaseItem, or None) or an iterable of such objects as the callback's result. For example:
1. Returning a single object (Request, BaseItem, or None)
def parse(self, response):
    ...
    return scrapy.Request(...)
2. Returning an iterable
def parse(self, response):
    ...
    for url in self.urls:
        yield scrapy.Request(...)
I assumed they are handled somewhere in Scrapy's code like this:
# Assume process_callback_result is a function that is called after
# a Request's callback has been executed.
# The "result" parameter is the callback's return value.
def process_callback_result(self, result):
    if isinstance(result, scrapy.Request):
        self.process_request(result)
    elif isinstance(result, scrapy.BaseItem):
        self.process_item(result)
    elif result is None:
        pass
    elif isinstance(result, collections.Iterable):
        for obj in result:
            self.process_callback_result(obj)
    else:
        # show error message
        # ...
        pass
I found the corresponding code in <PYTHON_HOME>/Lib/site-packages/scrapy/core/scraper.py, in the _process_spidermw_output function:
def _process_spidermw_output(self, output, request, response, spider):
    """Process each Request/Item (given in the output parameter) returned
    from the given spider
    """
    if isinstance(output, Request):
        self.crawler.engine.crawl(request=output, spider=spider)
    elif isinstance(output, BaseItem):
        self.slot.itemproc_size += 1
        dfd = self.itemproc.process_item(output, spider)
        dfd.addBoth(self._itemproc_finished, output, response, spider)
        return dfd
    elif output is None:
        pass
    else:
        typename = type(output).__name__
        log.msg(format='Spider must return Request, BaseItem or None, '
                       'got %(typename)r in %(request)s',
                level=log.ERROR, spider=spider, request=request, typename=typename)
But I can't find the part that corresponds to the elif isinstance(result, collections.Iterable): logic.

That's because _process_spidermw_output is only the handler for a single item/object; it is driven by scrapy.utils.defer.parallel. This is the function that handles the spider output:
def handle_spider_output(self, result, request, response, spider):
    if not result:
        return defer_succeed(None)
    it = iter_errback(result, self.handle_spider_error, request, response, spider)
    dfd = parallel(it, self.concurrent_items,
                   self._process_spidermw_output, request, response, spider)
    return dfd
Source: https://github.com/scrapy/scrapy/blob/master/scrapy/core/scraper.py#L163-L169
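Note the iter_errback wrapper: it consumes the spider's result iterator and routes any exception raised while iterating to handle_spider_error, instead of letting the error abort the whole loop. A simplified sketch of the idea (the real implementation in scrapy/utils/defer.py wraps the exception in a Twisted Failure before calling the errback):

def iter_errback(iterable, errback, *a, **kw):
    # Yield elements from `iterable`; if iteration raises, hand the
    # exception to `errback` instead of propagating it to the consumer.
    it = iter(iterable)
    while True:
        try:
            yield next(it)
        except StopIteration:
            return
        except Exception as e:
            errback(e, *a, **kw)
            return

def broken_spider_output():
    yield 'item-1'
    raise ValueError('callback blew up')

for elem in iter_errback(broken_spider_output(), print):
    print('got', elem)
# prints: got item-1
# then:   callback blew up   (delivered to the errback, not raised)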
As you can see, it calls parallel and passes it a handle to _process_spidermw_output as an argument. The parameter's name is callable, and it is invoked for each element of the iterable containing the spider's results. The parallel function is:
def parallel(iterable, count, callable, *args, **named):
    """Execute a callable over the objects in the given iterable, in parallel,
    using no more than ``count`` concurrent calls.
    Taken from: http://jcalderone.livejournal.com/24285.html
    """
    coop = task.Cooperator()
    work = (callable(elem, *args, **named) for elem in iterable)
    return defer.DeferredList([coop.coiterate(work) for i in xrange(count)])
Source: https://github.com/scrapy/scrapy/blob/master/scrapy/utils/defer.py#L50-L58
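The trick in parallel is that all count cooperative tasks pull work from the same shared generator, so no more than count elements are being processed at any moment. A minimal standalone demo of the pattern (Twisted only; process_item here is a made-up stand-in for _process_spidermw_output):

from twisted.internet import defer, reactor, task

def parallel(iterable, count, callable, *args, **named):
    # Same pattern as scrapy.utils.defer.parallel: `count` cooperative
    # tasks all consume one shared generator of work.
    coop = task.Cooperator()
    work = (callable(elem, *args, **named) for elem in iterable)
    return defer.DeferredList([coop.coiterate(work) for _ in range(count)])

def process_item(item):
    print('processing', item)

d = parallel(range(10), 3, process_item)  # at most 3 concurrent calls
d.addBoth(lambda _: reactor.stop())
reactor.run()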
Basically, the process goes like this:

When enqueue_scrape is called, it adds the request and response to slot.queue by calling slot.add_response_request. The queue is then consumed by self._scrape, which is invoked from _scrape_next. The _scrape function registers handle_spider_output as a callback that will process the items from the iterator. The iterator itself is created in _scrape2 when it calls the call_spider function, which registers scrapy.utils.spider.iterate_spider_output as a callback:
def iterate_spider_output(result):
    return [result] if isinstance(result, BaseItem) else arg_to_iter(result)
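The explicit BaseItem check is there because concrete items (scrapy.Item) are dict-like and therefore iterable on their own: iterating one yields its field names, so an item must be wrapped in a list to count as a single result. arg_to_iter below applies the same caution via _ITERABLE_SINGLE_VALUES, whose docstring mentions dict (the actual source includes BaseItem as well). A quick illustration, with Product as a made-up item class:

import scrapy

class Product(scrapy.Item):   # hypothetical item, for illustration only
    name = scrapy.Field()

item = Product(name='book')
print(list(iter(item)))       # ['name'] -- iterating an item yields field names
print(list(iter([item])))     # [{'name': 'book'}] -- wrapped, it stays one item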
Finally, the function that actually converts a single item, None, or an iterable into an iterable is scrapy.utils.misc.arg_to_iter():
def arg_to_iter(arg):
    """Convert an argument to an iterable. The argument can be a None, single
    value, or an iterable.
    Exception: if arg is a dict, [arg] will be returned
    """
    if arg is None:
        return []
    elif not isinstance(arg, _ITERABLE_SINGLE_VALUES) and hasattr(arg, '__iter__'):
        return arg
    else:
        return [arg]
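Its behavior is easy to verify interactively (assuming Scrapy is installed):

from scrapy.utils.misc import arg_to_iter

print(list(arg_to_iter(None)))       # []          -- None disappears
print(list(arg_to_iter(42)))         # [42]        -- a single value gets wrapped
print(list(arg_to_iter({'a': 1})))   # [{'a': 1}]  -- a dict counts as one value
print(list(arg_to_iter([1, 2, 3])))  # [1, 2, 3]   -- iterables pass through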