Posted by San*_*esh

How to create a LinkExtractor rule based on href in Scrapy

I am trying to create a simple crawler with Scrapy (scrapy.org). Following the example, item.php links are allowed. How do I write a rule that allows only URLs starting with http://example.com/category/ whose GET parameters include page with any number of digits, possibly together with other parameters in arbitrary order? Please help me write such a rule.

A few valid values are:

Here is the code:

import scrapy
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor


class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com/category/']

    rules = (
        Rule(LinkExtractor(allow=(r'item\.php', )), callback='parse_item'),
    )

    def parse_item(self, response):
        # Note: scrapy.Item() declares no fields; real code needs an Item
        # subclass with id/name/description fields (or a plain dict).
        item = scrapy.Item()
        item['id'] = response.xpath('//td[@id="item_id"]/text()').re(r'ID: (\d+)')
        item['name'] = response.xpath('//td[@id="item_name"]/text()').extract()
        item['description'] = response.xpath('//td[@id="item_description"]/text()').extract()
        return item
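One way to express the rule the question describes is to match on the query string with a regular expression in allow. This is only a sketch under assumptions not stated in the question (the www. prefix is optional, page carries only digits, and other parameters may come before or after it):

from scrapy.linkextractors import LinkExtractor   # newer import paths; scrapy.contrib.* are older aliases
from scrapy.spiders import Rule

# Allow only URLs under /category/ whose query string contains page=<digits>,
# regardless of how many other GET parameters appear or in what order.
page_rule = Rule(
    LinkExtractor(allow=(r'^https?://(www\.)?example\.com/category/.*[?&]page=\d+', )),
    callback='parse_item',
)

With a pattern like this, an invented URL such as http://www.example.com/category/?sort=asc&page=12 would be followed, while URLs outside /category/ or without a numeric page parameter would be skipped.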

python regex scrapy web-scraping

5 votes · 1 answer · 7442 views
