Asked by jkd*_*une · Tags: python, export, twisted, scrapy, python-2.7
My scraper works fine when I run it from the command line, but when I try to run it from within a Python script (using the Twisted approach outlined here) it doesn't output the two CSV files that it normally does. I have a pipeline that creates and populates these files, one of them using CsvItemExporter() and the other using writeCsvFile(). Here is the code:
from os import getcwd

from scrapy import signals
from scrapy.contrib.exporter import CsvItemExporter  # scrapy.exporters in newer Scrapy

from SiteCrawler.spiders.myfuncs import writeCsvFile  # the asker's own helper (location assumed from the file layout below)


class CsvExportPipeline(object):
    def __init__(self):
        self.files = {}

    @classmethod
    def from_crawler(cls, crawler):
        pipeline = cls()
        crawler.signals.connect(pipeline.spider_opened, signals.spider_opened)
        crawler.signals.connect(pipeline.spider_closed, signals.spider_closed)
        return pipeline

    def spider_opened(self, spider):
        nodes = open('%s_nodes.csv' % spider.name, 'w+b')
        self.files[spider] = nodes
        self.exporter1 = CsvItemExporter(nodes, fields_to_export=['url', 'name', 'screenshot'])
        self.exporter1.start_exporting()
        self.edges = []
        self.edges.append(['Source', 'Target', 'Type', 'ID', 'Label', 'Weight'])
        self.num = 1

    def spider_closed(self, spider):
        self.exporter1.finish_exporting()
        file = self.files.pop(spider)
        file.close()
        writeCsvFile(getcwd() + r'\edges.csv', self.edges)

    def process_item(self, item, spider):
        self.exporter1.export_item(item)
        for url in item['links']:
            self.edges.append([item['url'], url, 'Directed', self.num, '', 1])
            self.num += 1
        return item
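For reference, writeCsvFile() is the asker's own helper in myfuncs.py and its body is not shown in the question. A minimal stdlib sketch of what such a helper might look like, with a (path, rows) signature assumed from the call in spider_closed():

```python
import csv

def writeCsvFile(path, rows):
    # Hypothetical reconstruction: the real writeCsvFile() lives in the
    # asker's myfuncs.py and is not shown; the (path, rows) signature is
    # assumed from how it is called in spider_closed().
    # (Python 3 shown; under Python 2.7 you would open in mode 'wb' and
    # drop the newline argument.)
    with open(path, 'w', newline='') as f:
        csv.writer(f).writerows(rows)

writeCsvFile('edges.csv',
             [['Source', 'Target', 'Type', 'ID', 'Label', 'Weight'],
              ['http://a', 'http://b', 'Directed', 1, '', 1]])
```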
Here is my file structure:
SiteCrawler/              # the CSVs are normally created in this folder
    runspider.py          # this is the script that runs the scraper
    scrapy.cfg
    SiteCrawler/
        __init__.py
        items.py
        pipelines.py
        screenshooter.py
        settings.py
        spiders/
            __init__.py
            myfuncs.py
            sitecrawler_spider.py
The scraper appears to run correctly in every other respect. The output at the end of the run indicates that the expected number of pages were crawled, and the spider appears to have finished normally. I am not getting any error messages.
---- EDIT: ----
Inserting print statements and deliberate syntax errors into the pipeline has no effect, so it seems the pipeline is being ignored entirely. Why might that be?
Here is the code for the script that runs the scraper (runspider.py):
from twisted.internet import reactor
from scrapy import log, signals
from scrapy.crawler import Crawler
from scrapy.settings import Settings
from scrapy.xlib.pydispatch import dispatcher
import logging

from SiteCrawler.spiders.sitecrawler_spider import MySpider


def stop_reactor():
    reactor.stop()

dispatcher.connect(stop_reactor, signal=signals.spider_closed)
spider = MySpider()
crawler = Crawler(Settings())
crawler.configure()
crawler.crawl(spider)
crawler.start()
log.start(loglevel=logging.DEBUG)
log.msg('Running reactor...')
reactor.run()  # the script will block here until the spider is closed
log.msg('Reactor stopped.')
Replacing "from scrapy.settings import Settings" with "from scrapy.utils.project import get_project_settings as Settings" fixed the problem.
The solution was found here. No explanation of the solution was provided.
alecxe has provided an example of how to run Scrapy from inside a Python script.
EDIT:
Having read alecxe's post in more detail, I can now see the difference between "from scrapy.settings import Settings" and "from scrapy.utils.project import get_project_settings as Settings". The latter allows you to use your project's settings file, as opposed to the default settings. With the defaults, the project's settings.py (including its ITEM_PIPELINES entry) is never loaded, which is why the CSV pipeline was silently ignored. Read alecxe's post (linked to above) for more detail.
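Concretely, the fix amounts to this change in runspider.py (a sketch of the relevant lines only; the rest of the script is unchanged):

```python
from scrapy.crawler import Crawler
# instead of: from scrapy.settings import Settings
from scrapy.utils.project import get_project_settings

# get_project_settings() locates settings.py via scrapy.cfg, so project
# settings such as ITEM_PIPELINES are actually applied to the crawler,
# whereas Settings() constructs a bare default configuration.
crawler = Crawler(get_project_settings())
```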