Ignoring already-visited URLs in Scrapy

Asked by bla*_*mba · tags: python, scrapy

This is my custom_filters.py file:

from scrapy.dupefilter import RFPDupeFilter

class SeenURLFilter(RFPDupeFilter):
    """Duplicate filter that compares plain request URLs instead of fingerprints."""

    def __init__(self, path=None):
        self.urls_seen = set()
        RFPDupeFilter.__init__(self, path)

    def request_seen(self, request):
        # Returning True tells the scheduler to drop the request as a duplicate.
        if request.url in self.urls_seen:
            return True
        else:
            self.urls_seen.add(request.url)

I added the following line:

   DUPEFILTER_CLASS = 'crawl_website.custom_filters.SeenURLFilter'

to my settings.py.

When I check the generated CSV file, it shows the same URL multiple times. Is something wrong?

Answer by mat*_*tes:

From: http://doc.scrapy.org/en/latest/topics/item-pipeline.html#duplicates-filter

from scrapy.exceptions import DropItem

class DuplicatesPipeline(object):

    def __init__(self):
        self.ids_seen = set()

    def process_item(self, item, spider):
        if item['id'] in self.ids_seen:
            raise DropItem("Duplicate item found: %s" % item)
        else:
            self.ids_seen.add(item['id'])
            return item

Then add this to your settings.py:

ITEM_PIPELINES = {
  'your_bot_name.pipelines.DuplicatesPipeline': 100
}
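The number (100 here) is the pipeline's position in the processing order; pipelines with lower values run first. Replace your_bot_name with your project's actual package name (crawl_website in the question).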

Edit:

To check for duplicate URLs instead, use:

from scrapy.exceptions import DropItem

class DuplicatesPipeline(object):
    def __init__(self):
        self.urls_seen = set()

    def process_item(self, item, spider):
        if item['url'] in self.urls_seen:
            raise DropItem("Duplicate item found: %s" % item)
        else:
            self.urls_seen.add(item['url'])
            return item

This requires a url = Field() in your item. Something like this (items.py):

from scrapy.item import Item, Field

class PageItem(Item):
    url            = Field()
    scraped_field_a = Field()
    scraped_field_b = Field()
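For completeness, here is a minimal spider sketch showing how the url field might be populated so the pipeline has something to compare. The module path your_bot_name.items, the spider name, and the XPath expressions are illustrative assumptions, not part of the original answer; it assumes Scrapy >= 1.0:

from scrapy import Spider, Request
from your_bot_name.items import PageItem

class PageSpider(Spider):
    name = 'pages'
    start_urls = ['http://example.com/']

    def parse(self, response):
        # Store the page URL on the item so DuplicatesPipeline can filter on it.
        item = PageItem()
        item['url'] = response.url
        item['scraped_field_a'] = response.xpath('//h1/text()').extract_first()
        item['scraped_field_b'] = response.xpath('//title/text()').extract_first()
        yield item

        # Follow in-page links; dropping duplicate *requests* is still the job
        # of the scheduler's dupefilter, not of this item pipeline.
        for href in response.xpath('//a/@href').extract():
            yield Request(response.urljoin(href), callback=self.parse)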

  • This only checks for duplicate records when the result file (i.e. the CSV) is written. It does not check whether the spider itself fetches duplicate URLs.
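As that comment notes, the pipeline only deduplicates emitted items. If you also want to stop the spider from requesting the same URL twice, the request-level dupefilter from the question and the item-level pipeline from this answer can be used together. A sketch of the relevant settings.py entries, assuming the project package is called crawl_website as in the question and that the pipeline lives in crawl_website/pipelines.py (an assumption):

# settings.py (sketch)

# Request-level deduplication: the scheduler consults this before fetching.
DUPEFILTER_CLASS = 'crawl_website.custom_filters.SeenURLFilter'

# Item-level deduplication: drops duplicate items before they reach the CSV export.
ITEM_PIPELINES = {
    'crawl_website.pipelines.DuplicatesPipeline': 100,
}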