python screen-scraping web-crawler scrapy
I have a CrawlSpider set up to follow certain links and scrape a news magazine, where the links to each issue follow this URL scheme:
http://example.com/YYYY/DDDD/index.htm where YYYY is the year and DDDD is the three- or four-digit issue number.
I only want issues 928 onward, and my rules are below. I don't have any problem connecting to the site, crawling links, or extracting items (so I haven't included the rest of my code). The spider seems determined to follow non-allowed links: it tries to grab issues 377, 398, and so on, and follows the "culture.htm" and "feature.htm" links. This throws a lot of errors and, while not terribly important, it means a lot of data cleanup. Any suggestions as to what's going wrong?
class crawlerNameSpider(CrawlSpider):
    name = 'crawler'
    allowed_domains = ["example.com"]
    start_urls = ["http://example.com/issues.htm"]
    rules = (
        Rule(SgmlLinkExtractor(allow = ('\d\d\d\d/(92[8-9]|9[3-9][0-9]|\d\d\d\d)/index\.htm', )), follow = True),
        Rule(SgmlLinkExtractor(allow = ('fr[0-9].htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(allow = ('eg[0-9]*.htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(allow = ('ec[0-9]*.htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(allow = ('op[0-9]*.htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(allow = ('sc[0-9]*.htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(allow = ('re[0-9]*.htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(allow = ('in[0-9]*.htm', )), callback = 'parse_item'),
        Rule(SgmlLinkExtractor(deny = ('culture.htm', )), ),
        Rule(SgmlLinkExtractor(deny = ('feature.htm', )), ),
    )
Edit: I fixed this by using a simpler regex for 2009, 2010, and 2011, but I'm still curious why the above doesn't work, if anyone has any suggestions.
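For reference, a quick check of the kind of year-based pattern the edit describes. The exact regex is an assumption; the edit only says it targets 2009, 2010, and 2011:

import re

# Assumed stand-in for the "simpler regex" mentioned above:
# accept any issue index page under the three listed years.
issue_index = re.compile(r'(2009|2010|2011)/\d+/index\.htm')

print(bool(issue_index.search('http://example.com/2010/935/index.htm')))  # True
print(bool(issue_index.search('http://example.com/2008/377/index.htm')))  # False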
You need to pass the deny arguments to the SgmlLinkExtractor which collects the links to follow. And you don't need to create that many Rule objects if they all call the same function, parse_item. I would write your code as:
rules = (
    Rule(SgmlLinkExtractor(
            allow = ('\d\d\d\d/(92[8-9]|9[3-9][0-9]|\d\d\d\d)/index\.htm', ),
            deny = ('culture\.htm', 'feature\.htm'),
        ),
        follow = True
    ),
    Rule(SgmlLinkExtractor(
            allow = (
                'fr[0-9].htm',
                'eg[0-9]*.htm',
                'ec[0-9]*.htm',
                'op[0-9]*.htm',
                'sc[0-9]*.htm',
                're[0-9]*.htm',
                'in[0-9]*.htm',
            )
        ),
        callback = 'parse_item',
    ),
)
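Part of why the deny list matters: the allow patterns are unanchored regular expressions searched anywhere in the URL, so 're[0-9]*\.htm' also matches culture.htm and feature.htm (both end in "re" followed by ".htm"), which is likely why the spider kept following those links. A quick check with the standard re module:

import re

# The allow patterns are substring searches, not full-URL matches,
# so 're[0-9]*\.htm' matches more than intended:
pattern = re.compile(r're[0-9]*\.htm')
for name in ('re5.htm', 'culture.htm', 'feature.htm'):
    print(name, bool(pattern.search(name)))
# prints True for all three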
If those are the real URL patterns from the rules you're using for parse_item, this can be simplified to:
Rule(SgmlLinkExtractor(
        allow = ('(fr|eg|ec|op|sc|re|in)[0-9]*\.htm', ),
    ),
    callback = 'parse_item',
),
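One more note on the original rules: a link extractor given only a deny pattern still allows every other link on the page, so the two deny-only Rule entries in the question were effectively follow-everything rules. A minimal sketch of this behavior, assuming one of the older Scrapy versions where SgmlLinkExtractor and its import path still exist:

from scrapy.http import HtmlResponse
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor

# A deny-only extractor matches every link EXCEPT the denied ones.
html = '<a href="/2009/377/index.htm">old</a> <a href="/culture.htm">culture</a>'
response = HtmlResponse(url='http://example.com/issues.htm',
                        body=html, encoding='utf-8')

extractor = SgmlLinkExtractor(deny=('culture\.htm',))
print([link.url for link in extractor.extract_links(response)])
# ['http://example.com/2009/377/index.htm'] -- the old issue still gets followed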