I have been working on a Scrapy web scraper that crawls all internal links from a start URL and collects only external links. My main problem, however, is classifying links as internal or external. For example, when I try to filter external links with link.startswith("http") or link.startswith("ftp") or link.startswith("www"), a site that links to itself with an absolute path (www.my-domain.com/about instead of /about) gets classified as external even though it is not. Here is my code:
import scrapy
from lab_relationship.items import Links

class WebSpider(scrapy.Spider):
    name = "web"
    allowed_domains = ["my-domain.com"]
    start_urls = (
        'http://www.my-domain.com/',
    )

    def parse(self, response):
        """finds all external links"""
        items = []
        for link in set(response.xpath('//a/@href').extract()):
            item = Links()
            if len(link) > 1:
                if link.startswith("/") or link.startswith("."):
                    # internal link
                    url = response.urljoin(link)
                    item['internal'] = url
                    #yield scrapy.Request(url, self.parse)
                elif link.startswith("http") or link.startswith("ftp") or link.startswith("www"):
                    # external link
                    item['external'] = link
                else:
                    # misc. links: mailto, id (#)
                    item['misc'] = link
                items.append(item)
        return items
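For reference, the direction I was considering instead of string prefixes is to resolve every href against the page URL and compare hostnames. This is only a rough sketch (the helper name classify and the www-stripping rule are my own, untested):

from urllib.parse import urlparse

def classify(link, response):
    """Return 'internal', 'external', or 'misc' for an href."""
    resolved = urlparse(response.urljoin(link))
    if not resolved.netloc:
        # no host even after resolving, e.g. mailto: or javascript:
        return 'misc'

    def strip_www(host):
        return host[4:] if host.startswith('www.') else host

    page_host = urlparse(response.url).netloc
    if strip_www(resolved.netloc) == strip_www(page_host):
        return 'internal'
    return 'external'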
Any suggestions?
Use a link extractor.
When instantiating it, you have to pass the allowed domains. You don't have to worry about specifying the tags to look for, because (according to the docs) the tags argument takes ('a', 'area') by default.
Using the Rust lang website as an example, the code to print all internal links from its domain would look like this:
import scrapy
from scrapy.linkextractors import LinkExtractor

class RustSpider(scrapy.Spider):
    name = "rust"
    allowed_domains = ["www.rust-lang.org"]
    start_urls = (
        'http://www.rust-lang.org/',
    )

    def parse(self, response):
        extractor = LinkExtractor(allow_domains='rust-lang.org')
        links = extractor.extract_links(response)
        for link in links:
            print(link.url)
The output will be a list of links like https://doc.rust-lang.org/nightly/reference.html (I can't post more), while excluding all links such as the ones to StackOverflow.
Be sure to check out the documentation page, as the link extractor has many parameters you may need.
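If you also need to collect the external links from the original question, one possible extension (my own sketch, not part of the example above) is a second extractor using the deny_domains parameter, which inverts the domain filter:

import scrapy
from scrapy.linkextractors import LinkExtractor

class RustLinksSpider(scrapy.Spider):
    # hypothetical name; builds on the example above
    name = "rust_links"
    start_urls = ['http://www.rust-lang.org/']

    def parse(self, response):
        # links on the site's own domain
        internal = LinkExtractor(allow_domains=['rust-lang.org'])
        # everything else: deny_domains excludes the site's domain
        external = LinkExtractor(deny_domains=['rust-lang.org'])
        for link in internal.extract_links(response):
            yield {'internal': link.url}
        for link in external.extract_links(response):
            yield {'external': link.url}

As far as I know, LinkExtractor also skips non-HTTP schemes such as mailto: by default, so most of the misc links from the question are filtered out automatically.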