Scrapy spider not working

Asked by Zey*_*nel (tags: python, scrapy)

Since nothing had worked so far, I started a new project:

python scrapy-ctl.py startproject Nu

I followed the tutorial exactly, created the folders, and made a new spider:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from scrapy.item import Item
from Nu.items import NuItem
from urls import u

class NuSpider(CrawlSpider):
    domain_name = "wcase"
    start_urls = ['http://www.whitecase.com/aabbas/']

    names = hxs.select('//td[@class="altRow"][1]/a/@href').re('/.a\w+')

    u = names.pop()

    rules = (Rule(SgmlLinkExtractor(allow=(u, )), callback='parse_item'),)

    def parse(self, response):
        self.log('Hi, this is an item page! %s' % response.url)

        hxs = HtmlXPathSelector(response)
        item = Item()
        item['school'] = hxs.select('//td[@class="mainColumnTDa"]').re('(?<=(JD,\s))(.*?)(\d+)')
        return item

SPIDER = NuSpider()

When I run

C:\Python26\Scripts\Nu>python scrapy-ctl.py crawl wcase

I get

[Nu] ERROR: Could not find spider for domain: wcase

At least the other spiders are recognized by Scrapy; this one is not. What am I doing wrong?

Thanks for your help!

Answer from Gnu*_*eer:

Also check your Scrapy version. The latest release uses a "name" attribute instead of "domain_name" to uniquely identify a spider.
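
As an illustration of that fix, here is a minimal sketch of the spider against a release that uses "name". Beyond the rename, it drops the class-level selector calls from the question (no response object exists at class-definition time), renames the callback to parse_item because CrawlSpider reserves parse() for its own link-following logic, and builds a NuItem instead of a bare Item. The allowed_domains value, the allow pattern, and the assumption that Nu/items.py declares a school field are illustrative guesses, not confirmed by the question:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from Nu.items import NuItem

class NuSpider(CrawlSpider):
    # Newer Scrapy releases identify a spider by "name";
    # "domain_name" is the old attribute the error message refers to.
    name = "wcase"
    allowed_domains = ["whitecase.com"]  # assumed; takes over domain_name's filtering role
    start_urls = ['http://www.whitecase.com/aabbas/']

    # Links matching the allow pattern (copied from the question, untested
    # against the live site) are followed and handed to parse_item.
    rules = (
        Rule(SgmlLinkExtractor(allow=(r'/.a\w+',)), callback='parse_item'),
    )

    # Named parse_item, not parse: overriding parse() on a CrawlSpider
    # silently disables the rules above.
    def parse_item(self, response):
        self.log('Hi, this is an item page! %s' % response.url)
        hxs = HtmlXPathSelector(response)
        item = NuItem()  # assumes Nu/items.py declares school = Field()
        item['school'] = hxs.select('//td[@class="mainColumnTDa"]').re(
            r'(?<=(JD,\s))(.*?)(\d+)')
        return item

With the rename in place, the crawl command from the question (or plain "scrapy crawl wcase" on current versions) should be able to locate the spider.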