sbo*_*oss 14 python web-crawler scrapy scrape scrapy-spider
I want to get all external links from a given website using Scrapy. With the following code, the spider crawls external links as well:
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LinkExtractor
from myproject.items import someItem

class someSpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['someurl.com']
    start_urls = ['http://www.someurl.com/']

    rules = (Rule(LinkExtractor(), callback="parse_obj", follow=True),)

    def parse_obj(self, response):
        item = someItem()
        item['url'] = response.url
        return item
What am I missing? Does "allowed_domains" prevent the external links from being crawled? If I set "allow_domains" on the LinkExtractor, it does not extract the external links. Just to clarify: I do want to crawl internal links but extract external links. Any help appreciated!
12R*_*n12 13
You can also use the link extractor to pull all the links once you are parsing each page.

The link extractor will filter the links for you. In this example the link extractor will deny links in the allowed domain, so it only gets external links.
from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors import LxmlLinkExtractor
from myproject.items import someItem

class someSpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['someurl.com']
    start_urls = ['http://www.someurl.com/']

    rules = (Rule(LxmlLinkExtractor(allow=()), callback='parse_obj', follow=True),)

    def parse_obj(self, response):
        # Deny any link matching the allowed domains, keeping only external links.
        for link in LxmlLinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):
            item = someItem()
            item['url'] = link.url
            yield item
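The `deny` argument works because the extractor rejects any URL matching the allowed domain, so only outside links survive. The same internal/external test can be sketched with the standard library alone (`is_external` is a hypothetical helper for illustration, not part of Scrapy):

```python
from urllib.parse import urlparse

def is_external(url, allowed_domains):
    """Return True if the URL's host lies outside every allowed domain."""
    host = urlparse(url).netloc.lower()
    # A link is internal if its host is the domain itself or any subdomain.
    return not any(host == d or host.endswith("." + d) for d in allowed_domains)

links = [
    "http://www.someurl.com/about",
    "http://partner.example.org/page",
]
external = [u for u in links if is_external(u, ["someurl.com"])]
# external == ["http://partner.example.org/page"]
```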
Updated code, based on 12Ryan12's answer:
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors.lxmlhtml import LxmlLinkExtractor
from scrapy.item import Item, Field

class MyItem(Item):
    url = Field()

class someSpider(CrawlSpider):
    name = 'crawltest'
    allowed_domains = ['someurl.com']
    start_urls = ['http://www.someurl.com/']

    rules = (Rule(LxmlLinkExtractor(allow=()), callback='parse_obj', follow=True),)

    def parse_obj(self, response):
        item = MyItem()
        item['url'] = []
        for link in LxmlLinkExtractor(allow=(), deny=self.allowed_domains).extract_links(response):
            item['url'].append(link.url)
        return item
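Outside Scrapy, the whole flow (parse a page, resolve relative hrefs, keep only external links) can be sketched with the standard library. This is a minimal illustration of the same filtering idea, assuming a fixed base URL and a single allowed domain; `ExternalLinkParser` is a hypothetical name, not a Scrapy class:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class ExternalLinkParser(HTMLParser):
    """Collect absolute hrefs whose host lies outside the allowed domain."""

    def __init__(self, base_url, allowed_domain):
        super().__init__()
        self.base_url = base_url
        self.allowed_domain = allowed_domain
        self.external = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href:
            return
        url = urljoin(self.base_url, href)  # resolve relative links
        host = urlparse(url).netloc.lower()
        d = self.allowed_domain
        # Internal if the host is the domain itself or a subdomain of it.
        if host and host != d and not host.endswith("." + d):
            self.external.append(url)

html = '<a href="/internal">in</a> <a href="http://other.example.org/x">out</a>'
parser = ExternalLinkParser("http://www.someurl.com/", "someurl.com")
parser.feed(html)
# parser.external == ["http://other.example.org/x"]
```

Scrapy's `LxmlLinkExtractor` does the resolving and deduplication for you; this only shows the filtering decision the `deny` pattern is making.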