Scrapy spider not receiving the spider_idle signal

Aim*_*Hat 4 python web-crawler scrapy web-scraping scrapy-spider

I have a spider that chains requests, using meta to yield an item built from the data of several requests. The way I generate those requests is to launch all of them the first time the parse function is called; however, when there are too many links to request, not all of the requests get scheduled, and I end up missing some of the data I need.
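
For reference, the chaining pattern I mean looks roughly like this (a stripped-down sketch, not my real code; the URLs and callback names are placeholders):

import scrapy


class ChainSpider(scrapy.Spider):
    name = 'chain'
    start_urls = ['http://example.com/first']

    def parse(self, response):
        # carry the partially built item along in meta
        item = {'first': response.url}
        yield scrapy.Request('http://example.com/second',
                             callback=self.parse_second,
                             meta={'item': item})

    def parse_second(self, response):
        item = response.meta['item']
        item['second'] = response.url
        # the item now holds data from both requests
        yield item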

To work around this, I'm trying to make the spider request 5 products at a time, and request again once the spider goes idle (by connecting to the spider_idle signal in from_crawler). The problem is that, with my code as it stands, spider_idle never runs the request method and the spider closes immediately. It's as if the spider never went idle.

Here is some code:

import scrapy
from scrapy import signals
from scrapy.exceptions import DontCloseSpider


class ProductSpider(scrapy.Spider):
    def __init__(self, *args, **kwargs):
        super(ProductSpider, self).__init__(*args, **kwargs)
        self.parsed_data = []
        self.header = {}
        # read the CSV; the first row maps seller names to column indexes
        with open('file.csv', 'r') as f:
            f_data = [[x.strip()] for x in f]
        count = 1
        first = 'smth'
        for product in f_data:
            if first != '':
                header = product[0].split(';')
                for each in range(len(header[1:])):
                    self.header[header[each + 1]] = each + 1
                first = ''
            else:
                product = product[0].split(';')
                product.append(count)
                count += 1
                self.parsed_data.append(product)

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(ProductSpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.request, signal=signals.spider_idle)
        return spider

    name = 'products'
    allowed_domains = [domains]
    handle_httpstatus_list = [400, 404, 403, 503, 504]

    start_urls = [start]

    def next_link(self, response):
        product = response.meta['product']
        there_is_next = False
        for each in range(response.meta['each'] + 1, len(product) - 1):
            if product[each] != '':
                there_is_next = True
                yield scrapy.Request(
                    product[each],
                    callback=response.meta['func_dict'][each],
                    meta={'func_dict': response.meta['func_dict'],
                          'product': product,
                          'each': each,
                          'price_dict': response.meta['price_dict'],
                          'item': response.meta['item']},
                    dont_filter=True)
                break
        if not there_is_next:
            item = response.meta['item']
            item['prices'] = response.meta['price_dict']
            yield item

    #[...] chain parsing functions for each request

    def get_products(self):
        # pop up to 5 products from the parsed data for the next batch
        products = []
        data = self.parsed_data

        for each in range(5):
            if data:
                products.append(data.pop())
        return products

    def request(self):
        item = Header()
        item['first'] = True
        item['sellers'] = self.header
        yield item

        func_dict = {parsing_functions_for_every_site}

        products = self.get_products()
        if not products:
            return

        for product in products:

            item = Product()

            price_dict = {1: product[1]}
            item['name'] = product[0]
            item['order'] = product[-1]

            for each in range(2, len(product) - 1):
                if product[each] != '':
                    yield scrapy.Request(
                        product[each],
                        callback=func_dict[each],
                        meta={'func_dict': func_dict,
                              'product': product,
                              'each': each,
                              'price_dict': price_dict,
                              'item': item})
                    break

        raise DontCloseSpider

    def parse(self, response=None):
        pass

eLR*_*uLL 5

I'll assume you have already confirmed that your request method is actually being reached, and that the real problem is that the method doesn't yield any requests (or even items).

This is a common mistake when dealing with signals in Scrapy, because the methods connected to them cannot yield items or requests. The way around it is to push them through the engine directly.

For requests:

request = Request('myurl', callback=self.method_to_parse)
self.crawler.engine.crawl(
    request,
    spider
)
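
Note that self.crawler is available here because the spider is created through from_crawler (as yours already is), so the engine can be reached from inside the signal handler.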

For items:

item = MyItem()
self.crawler.engine.scraper._process_spidermw_output(
    item, 
    None, 
    Response(''), 
    spider,
)
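
Keep in mind that _process_spidermw_output is a private engine method (note the leading underscore), so it may change between Scrapy releases; treat it as a workaround rather than a stable API.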

Also, a spider_idle signal handler needs to receive a spider argument, so in your case it should look like this:

def request(self, spider):
    ...

This should work, but I would also recommend a better name for the method.
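
Putting it together, here is a minimal sketch of the whole pattern, assuming the engine.crawl(request, spider) signature used above (schedule_next_batch, get_next_urls and parse_product are illustrative names, not part of the original code):

import scrapy
from scrapy import signals
from scrapy.exceptions import DontCloseSpider


class BatchSpider(scrapy.Spider):
    name = 'batch'
    start_urls = ['http://example.com']

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(BatchSpider, cls).from_crawler(crawler, *args, **kwargs)
        crawler.signals.connect(spider.schedule_next_batch,
                                signal=signals.spider_idle)
        return spider

    def parse(self, response):
        # the start request only kicks things off
        pass

    def schedule_next_batch(self, spider):
        # spider_idle handler: must accept the spider argument
        urls = self.get_next_urls()  # hypothetical helper picking the next batch
        if not urls:
            return  # nothing left, let the spider close normally
        for url in urls:
            request = scrapy.Request(url, callback=self.parse_product)
            # schedule through the engine; yielding here would be ignored
            self.crawler.engine.crawl(request, spider)
        # keep the spider alive while the new requests run
        raise DontCloseSpider

    def parse_product(self, response):
        yield {'url': response.url}

Because schedule_next_batch contains no yield, the raise DontCloseSpider actually executes, which is exactly what never happens inside a generator like the original request method.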