Posted by FMo*_*ara

Separate output file for each URL in scrapy's start_urls list

I want to create a separate output file for every URL I set in the spider's start_urls, or somehow split the output by start URL.

Here are my spider's start_urls:

start_urls = ['http://www.dmoz.org/Arts/', 'http://www.dmoz.org/Business/', 'http://www.dmoz.org/Computers/']

I want to create these separate output files:

Arts.xml
Business.xml
Computers.xml
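The mapping from each start URL to its filename is just the last path segment. A minimal sketch of that extraction (the helper name `category_from_url` is my own, not from the original post):

```python
import re

def category_from_url(url):
    # Strip a trailing slash, then take the last path segment,
    # e.g. 'http://www.dmoz.org/Arts/' -> 'Arts'
    match = re.search(r'/([^/]+)$', url.rstrip('/'))
    return match.group(1) if match else None
```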

I don't know how to do this. I thought about achieving it by implementing something like the following in the spider_opened method of my item pipeline class:

import re
from scrapy import signals
from scrapy.contrib.exporter import XmlItemExporter

class CleanDataPipeline(object):
    def __init__(self):
        self.cnt = 0
        self.filename = ''

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        # Note: `response` is not defined in this scope -- the
        # spider_opened signal handler only receives the spider,
        # so the referer header cannot actually be read here.
        referer_url = response.request.headers.get('referer', None)
        if referer_url in spider.start_urls:
            # try to extract the category name from the URL path
            catname = re.search(r'/(.*)$', referer_url, re.I)
            self.filename = catname.group(1)

        file = open('output/' + str(self.cnt) + '_' + self.filename + '.xml', 'w+b')
        self.exporter = XmlItemExporter(file)
        self.exporter.start_exporting() …
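Setting Scrapy's exporter machinery aside, the core routing logic — keep one open file per category and write each item to the file matching its originating start URL — can be sketched in plain Python. The class and helper names here (`SplitFilePipeline`, `category_from_url`) are my own illustration, not a Scrapy API:

```python
import os
import re

def category_from_url(url):
    # hypothetical helper: last path segment of the start URL
    m = re.search(r'/([^/]+)$', url.rstrip('/'))
    return m.group(1) if m else 'unknown'

class SplitFilePipeline:
    """Sketch of the per-start-URL routing only (no Scrapy imports):
    lazily open one output file per category and append each item
    to the file for its originating start URL."""

    def __init__(self, outdir):
        self.outdir = outdir
        self.files = {}  # category name -> open file handle

    def process_item(self, item, start_url):
        cat = category_from_url(start_url)
        if cat not in self.files:
            path = os.path.join(self.outdir, cat + '.xml')
            self.files[cat] = open(path, 'w')
        self.files[cat].write(str(item) + '\n')
        return item

    def close(self):
        for f in self.files.values():
            f.close()
```

In a real pipeline the same dict-of-exporters idea applies, with each value being an XmlItemExporter instead of a bare file handle.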

Tags: python, scrapy, web-scraping, scrapy-spider
