How do I get the scraped items from the main script with Scrapy?

KyL*_*KyL 5 python scrapy

I would like to get a list of the scraped items in my main script, rather than using the scrapy shell.

I know that the FooSpider class I defined has a parse method, and that this method returns a list of Item objects. The Scrapy framework calls this method for me. But how can I get hold of this returned list myself?

I have found many posts about this, but I don't understand what they are saying.

For context, here is the official example code:

import scrapy

from tutorial.items import DmozItem

class DmozSpider(scrapy.Spider):
    name = "dmoz"
    allowed_domains = ["dmoz.org"]
    start_urls = [
        "http://www.dmoz.org/Computers/Programming/Languages/Python/",
    ]

    def parse(self, response):
        for href in response.css("ul.directory.dir-col > li > a::attr('href')"):
            url = response.urljoin(href.extract())
            yield scrapy.Request(url, callback=self.parse_dir_contents)

    def parse_dir_contents(self, response):
        result = []
        for sel in response.xpath('//ul/li'):
            item = DmozItem()
            item['title'] = sel.xpath('a/text()').extract()
            item['link'] = sel.xpath('a/@href').extract()
            item['desc'] = sel.xpath('text()').extract()
            result.append(item)

        return result

How can I get result back in a main Python script such as main.py or run.py?

if __name__ == "__main__":
    ...
    result = xxxx()
    for item in result:
        print(item)

Could anyone provide a code snippet that gets this list from somewhere?

Thank you very much!

Fra*_*uss 7

Here is an example of how to collect all items in a list with a pipeline:

#!/usr/bin/python3

# Scrapy API imports
import scrapy
from scrapy.crawler import CrawlerProcess

# your spider
from FollowAllSpider import FollowAllSpider

# list to collect all items
items = []

# pipeline to fill the items list
class ItemCollectorPipeline(object):
    def process_item(self, item, spider):
        items.append(item)
        return item  # pass the item on to any further pipelines

# create a crawler process with the specified settings
process = CrawlerProcess({
    'USER_AGENT': 'scrapy',
    'LOG_LEVEL': 'INFO',
    'ITEM_PIPELINES': { '__main__.ItemCollectorPipeline': 100 }
})

# start the spider
process.crawl(FollowAllSpider)
process.start()

# print the items
for item in items:
    print("url: " + item['url'])

You can get FollowAllSpider here, or use your own spider. Sample output when it is used with my web page:

$ ./crawler.py 
2018-09-16 22:28:09 [scrapy.utils.log] INFO: Scrapy 1.5.1 started (bot: scrapybot)
2018-09-16 22:28:09 [scrapy.utils.log] INFO: Versions: lxml 3.7.1.0, libxml2 2.9.4, cssselect 1.0.3, parsel 1.5.0, w3lib 1.19.0, Twisted 18.7.0, Python 3.5.3 (default, Jan 19 2017, 14:11:04) - [GCC 6.3.0 20170118], pyOpenSSL 16.2.0 (OpenSSL 1.1.0f  25 May 2017), cryptography 1.7.1, Platform Linux-4.9.0-6-amd64-x86_64-with-debian-9.5
2018-09-16 22:28:09 [scrapy.crawler] INFO: Overridden settings: {'USER_AGENT': 'scrapy', 'LOG_LEVEL': 'INFO'}
[...]
2018-09-16 22:28:15 [scrapy.core.engine] INFO: Spider closed (finished)
url: http://www.frank-buss.de/
url: http://www.frank-buss.de/impressum.html
url: http://www.frank-buss.de/spline.html
url: http://www.frank-buss.de/schnecke/index.html
url: http://www.frank-buss.de/solitaire/index.html
url: http://www.frank-buss.de/forth/index.html
url: http://www.frank-buss.de/pi.tex
[...]


小智 1

If you want to use, process, transform, or store the items, you should have a look at Item Pipelines; then a normal scrapy crawl will usually do the trick.
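As a minimal sketch of such a pipeline (the class name, output file name, and settings path below are illustrative assumptions, not from this answer), a pipeline that stores every item as one line of JSON could look like:

```python
import json

# illustrative pipeline; JsonWriterPipeline and "items.jl" are assumptions
class JsonWriterPipeline:
    def open_spider(self, spider):
        # called once when the spider starts
        self.file = open("items.jl", "w")

    def close_spider(self, spider):
        # called once when the spider finishes
        self.file.close()

    def process_item(self, item, spider):
        # called for every scraped item
        self.file.write(json.dumps(dict(item)) + "\n")
        return item  # pass the item on to later pipelines
```

You would enable it in the project's settings, e.g. `ITEM_PIPELINES = {"myproject.pipelines.JsonWriterPipeline": 300}`, where the module path is a placeholder for wherever the class lives in your project.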