How do I get the failed URLs in Scrapy?

Joe*_* Wu 39 python report scrapy web-scraping

I'm new to Scrapy, and it's an amazing crawler framework!

In my project I sent more than 90,000 requests, but some of them failed. I set the log level to INFO, which only shows summary statistics and no details.

2012-12-05 21:03:04+0800 [pd_spider] INFO: Dumping spider stats:
{'downloader/exception_count': 1,
 'downloader/exception_type_count/twisted.internet.error.ConnectionDone': 1,
 'downloader/request_bytes': 46282582,
 'downloader/request_count': 92383,
 'downloader/request_method_count/GET': 92383,
 'downloader/response_bytes': 123766459,
 'downloader/response_count': 92382,
 'downloader/response_status_count/200': 92382,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2012, 12, 5, 13, 3, 4, 836000),
 'item_scraped_count': 46191,
 'request_depth_max': 1,
 'scheduler/memory_enqueued': 92383,
 'start_time': datetime.datetime(2012, 12, 5, 12, 23, 25, 427000)}

Is there a way to get a more detailed report, for example one that shows the failed URLs? Thanks!

Talvalin 49

Yes, this is possible.

I added a failed_urls list to my spider class, and appended URLs to it whenever the response status was 404 (this will need extending to cover other error statuses).

Then I added a handler that joins the list into a single string and adds it to the spider's stats when the spider is closed.

Based on your comment, it's also possible to track Twisted errors.

from scrapy.spider import BaseSpider
from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals

class MySpider(BaseSpider):
    handle_httpstatus_list = [404] 
    name = "myspider"
    allowed_domains = ["example.com"]
    start_urls = [
        'http://www.example.com/thisurlexists.html',
        'http://www.example.com/thisurldoesnotexist.html',
        'http://www.example.com/neitherdoesthisone.html'
    ]

    def __init__(self, category=None):
        self.failed_urls = []

    def parse(self, response):
        if response.status == 404:
            self.crawler.stats.inc_value('failed_url_count')
            self.failed_urls.append(response.url)

    def handle_spider_closed(spider, reason):
        self.crawler.stats.set_value('failed_urls', ','.join(spider.failed_urls))

    def process_exception(self, response, exception, spider):
        ex_class = "%s.%s" % (exception.__class__.__module__, exception.__class__.__name__)
        self.crawler.stats.inc_value('downloader/exception_count', spider=spider)
        self.crawler.stats.inc_value('downloader/exception_type_count/%s' % ex_class, spider=spider)

    dispatcher.connect(handle_spider_closed, signals.spider_closed)

Output (the downloader/exception_count* stats only appear when exceptions are actually thrown; I simulated them by trying to run the spider after turning off my wireless adapter):

2012-12-10 11:15:26+0000 [myspider] INFO: Dumping Scrapy stats:
    {'downloader/exception_count': 15,
     'downloader/exception_type_count/twisted.internet.error.DNSLookupError': 15,
     'downloader/request_bytes': 717,
     'downloader/request_count': 3,
     'downloader/request_method_count/GET': 3,
     'downloader/response_bytes': 15209,
     'downloader/response_count': 3,
     'downloader/response_status_count/200': 1,
     'downloader/response_status_count/404': 2,
     'failed_url_count': 2,
     'failed_urls': 'http://www.example.com/thisurldoesnotexist.html, http://www.example.com/neitherdoesthisone.html',
     'finish_reason': 'finished',
     'finish_time': datetime.datetime(2012, 12, 10, 11, 15, 26, 874000),
     'log_count/DEBUG': 9,
     'log_count/ERROR': 2,
     'log_count/INFO': 4,
     'response_received_count': 3,
     'scheduler/dequeued': 3,
     'scheduler/dequeued/memory': 3,
     'scheduler/enqueued': 3,
     'scheduler/enqueued/memory': 3,
     'spider_exceptions/NameError': 2,
     'start_time': datetime.datetime(2012, 12, 10, 11, 15, 26, 560000)}

  • This no longer works; it raises `exceptions.NameError: global name 'self' is not defined`. `BaseSpider` is now just `Spider` http://doc.scrapy.org/en/0.24/news.html?highlight=basespider#id2 https://github.com/scrapy/dirbot/blob/master/dirbot/spiders/dmoz.py but I can't find a fix to make your code work, @Talvalin. (2 upvotes)
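
For newer Scrapy versions, a minimal sketch of the same idea might look like the following. This is my addition rather than Talvalin's original code: it assumes Scrapy >= 1.0, where `BaseSpider` became `Spider` and signals are connected through `crawler.signals` instead of `pydispatch`, which avoids the NameError above because the handler is a bound method.

import scrapy
from scrapy import signals

class MySpider(scrapy.Spider):
    handle_httpstatus_list = [404]
    name = "myspider"
    allowed_domains = ["example.com"]
    start_urls = [
        'http://www.example.com/thisurlexists.html',
        'http://www.example.com/thisurldoesnotexist.html',
        'http://www.example.com/neitherdoesthisone.html'
    ]

    @classmethod
    def from_crawler(cls, crawler, *args, **kwargs):
        spider = super(MySpider, cls).from_crawler(crawler, *args, **kwargs)
        # connect the handler through the crawler's signal manager,
        # so it is called as a bound method and 'self' is defined
        crawler.signals.connect(spider.handle_spider_closed, signal=signals.spider_closed)
        return spider

    def __init__(self, *args, **kwargs):
        super(MySpider, self).__init__(*args, **kwargs)
        self.failed_urls = []

    def parse(self, response):
        if response.status == 404:
            self.crawler.stats.inc_value('failed_url_count')
            self.failed_urls.append(response.url)

    def handle_spider_closed(self, reason):
        self.crawler.stats.set_value('failed_urls', ', '.join(self.failed_urls))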

alecxe 15

Here's another example of how to handle and collect 404 errors (crawling the github help pages):

from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.item import Item, Field


class GitHubLinkItem(Item):
    url = Field()
    referer = Field()
    status = Field()


class GithubHelpSpider(CrawlSpider):
    name = "github_help"
    allowed_domains = ["help.github.com"]
    start_urls = ["https://help.github.com", ]
    handle_httpstatus_list = [404]
    rules = (Rule(SgmlLinkExtractor(), callback='parse_item', follow=True),)

    def parse_item(self, response):
        if response.status == 404:
            item = GitHubLinkItem()
            item['url'] = response.url
            item['referer'] = response.request.headers.get('Referer')
            item['status'] = response.status

            return item

Just run the spider with scrapy runspider and the -o output.json option, then look at the list of items in the output.json file.


msc*_*arf 13

The answers from @Talvalin and @alecxe helped me a great deal, but they do not seem to capture downloader events that don't generate a response object (for example, twisted.internet.error.TimeoutError and twisted.web.http.PotentialDataLoss). These errors show up in the stats dump at the end of the run, but without any meta information.

As I found here, the errors are tracked by the stats.py downloader middleware, captured in the DownloaderStats class's process_exception method (specifically, in the ex_class variable), which increments a counter per error type and dumps the counts at the end of the run.

To match such errors with the information from the corresponding request object, you can add a unique ID to each request (via request.meta; the 'id' key below is whatever key you choose) and pull it into the process_exception method of stats.py:

self.stats.set_value('downloader/my_errs/{0}'.format(request.meta['id']), ex_class)
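
Such an ID could be attached when the requests are created. A minimal sketch (my assumption, not part of the original answer: the spider, the 'id' key, and the group/member scheme are illustrative, chosen to match the output shown below):

import scrapy

class TaggedRequestSpider(scrapy.Spider):
    name = "tagged_requests"
    # hypothetical (group_id, member_id, url) targets
    targets = [(0, 1, 'http://www.example.com/a'), (0, 38, 'http://www.example.com/b')]

    def start_requests(self):
        for group_id, member_id, url in self.targets:
            # store a '<group>/<member>' ID on the request's meta dict
            yield scrapy.Request(url, meta={'id': '{0}/{1}'.format(group_id, member_id)},
                                 callback=self.parse)

    def parse(self, response):
        pass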

The set_value call above yields a unique stat key for each downloader-based error, even when no response comes back. You can then save your modified stats.py as something else (e.g. my_stats.py), add it to the downloader middlewares (with the right precedence), and disable the stock stats.py:

DOWNLOADER_MIDDLEWARES = {
    'myproject.my_stats.MyDownloaderStats': 850,
    'scrapy.downloadermiddleware.stats.DownloaderStats': None,
}
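
For reference, a minimal sketch of what my_stats.py could contain; subclassing the stock middleware is simpler than copying the whole file (the import path matches Scrapy of that era and differs in newer versions, and request.meta['id'] follows the assumption above):

from scrapy.contrib.downloadermiddleware.stats import DownloaderStats

class MyDownloaderStats(DownloaderStats):
    # subclass the stock middleware so its counting behaviour is preserved
    def process_exception(self, request, exception, spider):
        ex_class = "%s.%s" % (exception.__class__.__module__, exception.__class__.__name__)
        # tie the exception class to the request's unique meta ID
        self.stats.set_value('downloader/my_errs/{0}'.format(request.meta['id']),
                             ex_class, spider=spider)
        # keep the stock exception_count / exception_type_count stats
        return DownloaderStats.process_exception(self, request, exception, spider)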

At the end of the run, the output looks like this (here each request URL maps to a group_id and a member_id separated by a slash, e.g. '0/14', stored as meta info):

{'downloader/exception_count': 3,
 'downloader/exception_type_count/twisted.web.http.PotentialDataLoss': 3,
 'downloader/my_errs/0/1': 'twisted.web.http.PotentialDataLoss',
 'downloader/my_errs/0/38': 'twisted.web.http.PotentialDataLoss',
 'downloader/my_errs/0/86': 'twisted.web.http.PotentialDataLoss',
 'downloader/request_bytes': 47583,
 'downloader/request_count': 133,
 'downloader/request_method_count/GET': 133,
 'downloader/response_bytes': 3416996,
 'downloader/response_count': 130,
 'downloader/response_status_count/200': 95,
 'downloader/response_status_count/301': 24,
 'downloader/response_status_count/302': 8,
 'downloader/response_status_count/500': 3,
 'finish_reason': 'finished'....}

This answer deals with non-downloader-based errors.
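
A further option, added here for completeness rather than taken from any of the answers above: attaching an errback to each Request also surfaces download-level failures together with the originating request. A minimal sketch (the spider name and URLs are illustrative; it assumes a Scrapy version where download failures carry a .request attribute):

import scrapy

class ErrbackSpider(scrapy.Spider):
    name = "errback_example"
    start_urls = ['http://www.example.com/']

    def start_requests(self):
        for url in self.start_urls:
            # route download failures (DNS errors, timeouts, ...) to on_error
            yield scrapy.Request(url, callback=self.parse, errback=self.on_error)

    def parse(self, response):
        pass

    def on_error(self, failure):
        # failure.request carries the original Request, including its meta
        request = failure.request
        self.crawler.stats.inc_value('failed_url_count')
        self.crawler.stats.set_value(
            'downloader/my_errs/{0}'.format(request.url), failure.type.__name__)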


Pyt*_*uru 11

By default, Scrapy ignores 404 responses and does not parse them. Handling them is easy: allow the error codes you care about in your settings:

HTTPERROR_ALLOWED_CODES = [404, 403]

Then handle the response status code in your parse function:

def parse(self, response):
    if response.status == 404:
        # your action on error, e.g. record the failed URL
        self.log('Failed URL: %s' % response.url)

In short: allow the codes in the settings, then branch on the status code in your parse function.


Lou*_*uis 5

As of Scrapy 0.24.6, the method suggested by alecxe will not catch errors with the start URLs. To record errors for the start URLs, you need to override parse_start_url. Adapting alecxe's answer for this purpose, you get:

from scrapy.selector import HtmlXPathSelector
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.item import Item, Field

class GitHubLinkItem(Item):
    url = Field()
    referer = Field()
    status = Field()

class GithubHelpSpider(CrawlSpider):
    name = "github_help"
    allowed_domains = ["help.github.com"]
    start_urls = ["https://help.github.com", ]
    handle_httpstatus_list = [404]
    rules = (Rule(SgmlLinkExtractor(), callback='parse_item', follow=True),)

    def parse_start_url(self, response):
        return self.handle_response(response)

    def parse_item(self, response):
        return self.handle_response(response)

    def handle_response(self, response):
        if response.status == 404:
            item = GitHubLinkItem()
            item['url'] = response.url
            item['referer'] = response.request.headers.get('Referer')
            item['status'] = response.status

            return item


小智 5

Here's an update on this issue. I ran into a similar problem and needed to use Scrapy signals to call a function in my pipeline. I edited @Talvalin's code, but wanted to post an answer of my own for clarity.

Basically, you should add self as a parameter to handle_spider_closed. You should also call the dispatcher in __init__, so that the spider instance (self) can be passed to the handler.

from scrapy.spider import Spider
from scrapy.xlib.pydispatch import dispatcher
from scrapy import signals

class MySpider(Spider):
    handle_httpstatus_list = [404] 
    name = "myspider"
    allowed_domains = ["example.com"]
    start_urls = [
        'http://www.example.com/thisurlexists.html',
        'http://www.example.com/thisurldoesnotexist.html',
        'http://www.example.com/neitherdoesthisone.html'
    ]

    def __init__(self, category=None):
        self.failed_urls = []
        # the dispatcher is now called in init
        dispatcher.connect(self.handle_spider_closed, signals.spider_closed)


    def parse(self, response):
        if response.status == 404:
            self.crawler.stats.inc_value('failed_url_count')
            self.failed_urls.append(response.url)

    def handle_spider_closed(self, spider, reason): # added self 
        self.crawler.stats.set_value('failed_urls',','.join(spider.failed_urls))

    def process_exception(self, response, exception, spider):
        ex_class = "%s.%s" % (exception.__class__.__module__,  exception.__class__.__name__)
        self.crawler.stats.inc_value('downloader/exception_count', spider=spider)
        self.crawler.stats.inc_value('downloader/exception_type_count/%s' % ex_class, spider=spider)

I hope this helps anyone who runs into the same problem in the future.