Was*_*lil (score 21) — tags: python, json, scrapy, web-scraping, scrapy-spider
I am running Scrapy from a Python script:
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import signals
from scrapy.utils.project import get_project_settings
from scrapy.xlib.pydispatch import dispatcher

def stop_reactor():
    reactor.stop()

def setup_crawler(domain):
    dispatcher.connect(stop_reactor, signal=signals.spider_closed)
    spider = ArgosSpider(domain=domain)
    settings = get_project_settings()
    crawler = Crawler(settings)
    crawler.configure()
    crawler.crawl(spider)
    crawler.start()
    reactor.run()
It runs and stops successfully, but where are the results? I want the results in JSON format, something like:
result = responseInJSON
just as when using the command:
scrapy crawl argos -o result.json -t json
ale*_*cxe (score 24)
You need to set the FEED_FORMAT and FEED_URI settings manually:
settings.overrides['FEED_FORMAT'] = 'json'
settings.overrides['FEED_URI'] = 'result.json'
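After the crawl finishes, result.json contains a single JSON array of the scraped items, so it can be read back with the standard json module. The file contents below are hypothetical sample items, written out only to show the shape Scrapy's JSON feed exporter produces:

```python
import json
import os
import tempfile

# Simulate the kind of file the JSON feed exporter writes:
# one JSON array of item dicts.
path = os.path.join(tempfile.gettempdir(), "result.json")
with open(path, "w") as f:
    f.write('[{"name": "sample item", "price": "9.99"}]')

# Read the results back into a Python list of dicts.
with open(path) as f:
    items = json.load(f)

print(items[0]["name"])  # -> sample item
```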
If you want to get the results into a variable, you can define a Pipeline class that collects the items into a list, and use a spider_closed signal handler to read the results:
import json
from twisted.internet import reactor
from scrapy.crawler import Crawler
from scrapy import log, signals
from scrapy.utils.project import get_project_settings
results = []

class MyPipeline(object):
    def process_item(self, item, spider):
        results.append(dict(item))
        return item  # pipelines should return the item

def spider_closed(spider):
    print results  # Python 2 syntax, matching the old scrapy.log API used here
# set up spider    
spider = TestSpider(domain='mydomain.org')
# set up settings
settings = get_project_settings()
settings.overrides['ITEM_PIPELINES'] = {'__main__.MyPipeline': 1}
# set up crawler
crawler = Crawler(settings)
crawler.signals.connect(spider_closed, signal=signals.spider_closed)
crawler.configure()
crawler.crawl(spider)
# start crawling
crawler.start()
log.start()
reactor.run() 
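The collect-into-a-list pipeline pattern above can be exercised on its own, without running a crawl. This standalone sketch (Python 3, with fake item dicts standing in for scraped items) shows how process_item fills the module-level list that the spider_closed handler then reads:

```python
results = []

class CollectorPipeline(object):
    """Minimal stand-in for MyPipeline above: append each item and return it."""
    def process_item(self, item, spider):
        results.append(dict(item))
        return item  # Scrapy expects pipelines to return the item

# Simulate Scrapy feeding two scraped items through the pipeline.
pipeline = CollectorPipeline()
for item in ({"title": "first"}, {"title": "second"}):
    pipeline.process_item(item, spider=None)

print(results)  # -> [{'title': 'first'}, {'title': 'second'}]
```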
FYI, have a look at how Scrapy parses command-line arguments.
See also: Capturing stdout within the same process in Python.
Alv*_*nti (score 14)
I managed to get it working simply by adding FEED_FORMAT and FEED_URI to the CrawlerProcess constructor, using the basic Scrapy API tutorial code, like so:
process = CrawlerProcess({
    'USER_AGENT': 'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)',
    'FEED_FORMAT': 'json',
    'FEED_URI': 'result.json'
})
process.crawl(ArgosSpider)
process.start()  # the crawl runs and result.json is written
Simple!
from scrapy import cmdline
cmdline.execute("scrapy crawl argos -o result.json -t json".split())
Put that script in the same place as your scrapy.cfg.
Views: 18,817