I want to pass an argument on the `scrapy crawl ...` command line so that I can use it in the rule definitions of my extended CrawlSpider, like this:
name = 'example.com'
allowed_domains = ['example.com']
start_urls = ['http://www.example.com']

rules = (
    # Extract links matching 'category.php' (but not matching 'subsection.php')
    # and follow links from them (since no callback means follow=True by default).
    Rule(SgmlLinkExtractor(allow=('category\.php', ), deny=('subsection\.php', ))),

    # Extract links matching 'item.php' and parse them with the spider's method parse_item
    Rule(SgmlLinkExtractor(allow=('item\.php', )), callback='parse_item'),
)
I would like to specify the `allow` attribute of the SgmlLinkExtractor via a command-line argument. I googled and found that I can receive argument values in the spider's `__init__` method, but how do I get a command-line argument into the Rule definitions?
You can build the spider's `rules` attribute inside the `__init__` method, for example:
class MySpider(CrawlSpider):
    name = 'example.com'
    allowed_domains = ['example.com']
    start_urls = ['http://www.example.com']

    def __init__(self, allow=None, *args, **kwargs):
        # Use the 'allow' argument received from the command line directly;
        # self.rules must be set before calling the parent __init__,
        # because CrawlSpider compiles the rules there.
        self.rules = (
            Rule(SgmlLinkExtractor(allow=(allow,))),
        )
        super(MySpider, self).__init__(*args, **kwargs)
and pass the `allow` value on the command line like this:
scrapy crawl example.com -a allow="item\.php"
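For context: the `allow` patterns in Scrapy's link extractors are regular expressions that are searched against each extracted URL, which is why `"item\.php"` matches links like `item.php?id=7`. A minimal stand-alone sketch of that filtering logic (the `url_is_allowed` helper is hypothetical, not part of Scrapy's API):

```python
import re

# Hypothetical stand-in for the link extractor's 'allow' filter:
# a URL is kept if any of the allow patterns is found in it (re.search).
def url_is_allowed(url, allow_patterns):
    return any(re.search(p, url) for p in allow_patterns)

# The same pattern string you would pass via:
#   scrapy crawl example.com -a allow="item\.php"
pattern = r"item\.php"

print(url_is_allowed("http://www.example.com/item.php?id=7", (pattern,)))  # True
print(url_is_allowed("http://www.example.com/category.php", (pattern,)))   # False
```

Because the pattern is a regex, the dot must be escaped (`item\.php`); an unescaped `item.php` would also match URLs like `itemXphp`.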