Formatting Scrapy's output as XML

Nic*_*ung 5 python xml web-crawler scrapy web-scraping

I'm trying to export data scraped from a website with Scrapy into a specific format when exporting it to XML.

This is what I want my XML to look like:

<?xml version="1.0" encoding="UTF-8"?>
<data>
  <row>
    <field1><![CDATA[Data Here]]></field1>
    <field2><![CDATA[Data Here]]></field2>
  </row>
</data>

I run my scrape with the command:

$ scrapy crawl my_scrap -o items.xml -t xml

The current output I get is:

<?xml version="1.0" encoding="utf-8"?>
<items><item><field1><value>Data Here</value></field1><field2><value>Data Here</value></field2></item>

As you can see, it is adding <value> elements, and I can't rename the root node or the item nodes. I know I need to use XmlItemExporter, but I don't know how to implement it in my project.

I have tried adding it to pipelines.py as shown here, but I always end up with the error:

AttributeError: 'CrawlerProcess' object has no attribute 'signals'

Does anyone know of an example of how to reformat the data when exporting it to XML using XmlItemExporter?
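One detail that is easy to miss: a custom pipeline only runs once it is registered in the project's settings.py. A minimal sketch, where the package path my_scrap_project.pipelines is a placeholder for your project's actual module:

```python
# settings.py -- enable the custom export pipeline.
# NOTE: 'my_scrap_project' is a placeholder; use your project's real package name.
# Newer Scrapy versions take a dict mapping pipeline path to an order value:
ITEM_PIPELINES = {
    'my_scrap_project.pipelines.XmlExportPipeline': 300,  # lower runs earlier
}
# Older Scrapy versions instead took a plain list:
# ITEM_PIPELINES = ['my_scrap_project.pipelines.XmlExportPipeline']
```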

Edit:

Showing my XmlItemExporter in my pipelines.py module:

from scrapy import signals
from scrapy.contrib.exporter import XmlItemExporter

class XmlExportPipeline(object):

    def __init__(self):
        self.files = {}

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        file = open('%s_products.xml' % spider.name, 'w+b')
        self.files[spider] = file
        self.exporter = XmlItemExporter(file)
        self.exporter.start_exporting()

    def spider_closed(self, spider):
        self.exporter.finish_exporting()
        file = self.files.pop(spider)
        file.close()

    def process_item(self, item, spider):
        self.exporter.export_item(item)
        return item

Edit (showing my modification and the traceback):

I modified the spider_opened function:

 def spider_opened(self, spider):
        file = open('%s_products.xml' % spider.name, 'w+b')
        self.files[spider] = file
        self.exporter = XmlItemExporter(file, 'data', 'row')
        self.exporter.start_exporting()   

The traceback I get is:

Traceback (most recent call last):
          File "/root/self_opportunity/venv/lib/python2.6/site-packages/twisted/internet/defer.py", line 551, in _runCallbacks
            current.result = callback(current.result, *args, **kw)
          File "/root/self_opportunity/venv/lib/python2.6/site-packages/scrapy/core/engine.py", line 265, in <lambda>
            spider=spider, reason=reason, spider_stats=self.crawler.stats.get_stats()))
          File "/root/self_opportunity/venv/lib/python2.6/site-packages/scrapy/signalmanager.py", line 23, in send_catch_log_deferred
            return signal.send_catch_log_deferred(*a, **kw)
          File "/root/self_opportunity/venv/lib/python2.6/site-packages/scrapy/utils/signal.py", line 53, in send_catch_log_deferred
            *arguments, **named)
        --- <exception caught here> ---
          File "/root/self_opportunity/venv/lib/python2.6/site-packages/twisted/internet/defer.py", line 134, in maybeDeferred
            result = f(*args, **kw)
          File "/root/self_opportunity/venv/lib/python2.6/site-packages/scrapy/xlib/pydispatch/robustapply.py", line 47, in robustApply
            return receiver(*arguments, **named)
          File "/root/self_opportunity/self_opportunity/pipelines.py", line 28, in spider_closed
            self.exporter.finish_exporting()
        exceptions.AttributeError: 'XmlExportPipeline' object has no attribute 'exporter'

Fra*_*ila 6

You can get most of what you want from XmlItemExporter simply by providing the names of the nodes you want:

XmlItemExporter(file, 'data', 'row')

(Depending on your Scrapy version, these may need to be passed as keyword arguments, e.g. XmlItemExporter(file, root_element='data', item_element='row'). If the positional form raises a TypeError inside spider_opened, self.exporter is never set, which would produce the AttributeError seen in the traceback above.)

See the documentation.

The <value> elements in your fields appear because the field values are not scalars. When XmlItemExporter encounters a scalar value it simply outputs <fieldname>data</fieldname>, but when it encounters an iterable value it serializes it like this: <fieldname><value>data1</value><value>data2</value></fieldname>. The solution is to stop emitting non-scalar field values for your items.
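That first suggestion can be sketched as a small helper applied before export: collapse one-element lists (the usual output of Scrapy selectors and item loaders) into scalar values. The field names here are made up for illustration:

```python
def flatten_item(item):
    """Collapse one-element lists into scalar values, so the exporter
    emits <field1>Data Here</field1> instead of
    <field1><value>Data Here</value></field1>."""
    flat = {}
    for key, value in item.items():
        if isinstance(value, (list, tuple)) and len(value) == 1:
            flat[key] = value[0]   # unwrap the single element
        else:
            flat[key] = value      # leave scalars and multi-element lists alone
    return flat

# In the pipeline above this could be applied in process_item, e.g.
# self.exporter.export_item(flatten_item(dict(item))).
print(flatten_item({'field1': ['Data Here'], 'field2': ['More Data']}))
# {'field1': 'Data Here', 'field2': 'More Data'}
```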

If you'd rather not do that, subclass XmlItemExporter and override its _export_xml_field method to do what you want when the item value is iterable. Here is the code for XmlItemExporter so you can see the implementation.
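To make the override concrete without depending on Scrapy, here is a toy stdlib mock of the exporter's serialization logic (this is not Scrapy's real class, and the real _export_xml_field signature may differ by version), plus a subclass that unwraps one-element lists instead of emitting <value> nodes:

```python
import io
from xml.sax.saxutils import XMLGenerator

class MiniXmlExporter:
    """Toy mock of XmlItemExporter's serialization, to show where
    the <value> wrapper comes from."""
    def __init__(self, file, root_element='items', item_element='item'):
        self.xg = XMLGenerator(file, encoding='utf-8')
        self.root_element = root_element
        self.item_element = item_element

    def start_exporting(self):
        self.xg.startDocument()
        self.xg.startElement(self.root_element, {})

    def export_item(self, item):
        self.xg.startElement(self.item_element, {})
        for name, value in item.items():
            self._export_xml_field(name, value)
        self.xg.endElement(self.item_element)

    def _export_xml_field(self, name, value):
        # Default behavior: iterables get one <value> child per element.
        self.xg.startElement(name, {})
        if isinstance(value, (list, tuple)):
            for v in value:
                self._export_xml_field('value', v)
        else:
            self.xg.characters(str(value))
        self.xg.endElement(name)

    def finish_exporting(self):
        self.xg.endElement(self.root_element)
        self.xg.endDocument()

class FlatXmlExporter(MiniXmlExporter):
    """Override: collapse one-element lists to plain text first."""
    def _export_xml_field(self, name, value):
        if isinstance(value, (list, tuple)) and len(value) == 1:
            value = value[0]
        super()._export_xml_field(name, value)

buf = io.StringIO()
exp = FlatXmlExporter(buf, root_element='data', item_element='row')
exp.start_exporting()
exp.export_item({'field1': ['Data Here']})
exp.finish_exporting()
print(buf.getvalue())
# <field1>Data Here</field1> instead of <field1><value>Data Here</value></field1>
```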