Ada*_*m F 20 python web-crawler scrapy
I want to use the Python Scrapy module to grab all of the URLs from my website and write the list to a file. I looked at the examples, but didn't see any simple example that does this.
Ada*_*m F 45
Here's the Python program that worked for me:
from scrapy.selector import HtmlXPathSelector
from scrapy.spider import BaseSpider
from scrapy.http import Request

DOMAIN = 'example.com'
URL = 'http://%s' % DOMAIN

class MySpider(BaseSpider):
    name = DOMAIN
    allowed_domains = [DOMAIN]
    start_urls = [URL]

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        for url in hxs.select('//a/@href').extract():
            # prefix site-relative links with the site root
            if not (url.startswith('http://') or url.startswith('https://')):
                url = URL + url
            print url
            yield Request(url, callback=self.parse)
Save this in a file named spider.py.
Then you can post-process the output with a shell pipeline:
bash$ scrapy runspider spider.py > urls.out
bash$ cat urls.out | grep 'example.com' | sort | uniq | grep -v '#' | grep -v 'mailto' > example.urls
This gives me a list of all the unique URLs on my site.
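Note that HtmlXPathSelector and BaseSpider come from old Scrapy releases and have since been removed. As a minimal sketch, assuming a recent Scrapy (1.5 or later), the same spider can be written against the current API:

import scrapy

DOMAIN = 'example.com'
URL = 'http://%s' % DOMAIN

class MySpider(scrapy.Spider):
    name = DOMAIN
    allowed_domains = [DOMAIN]
    start_urls = [URL]

    def parse(self, response):
        for href in response.xpath('//a/@href').getall():
            # urljoin resolves relative links against the page URL
            url = response.urljoin(href)
            print(url)
            yield scrapy.Request(url, callback=self.parse)

If you additionally yield {'url': url} items from parse(), you can skip the stdout piping entirely and let Scrapy's feed exports write the file, e.g. scrapy runspider spider.py -o urls.csv.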
eLR*_*uLL 13
Something cleaner (and perhaps more useful) is to use LinkExtractor:
from scrapy.http import Request
from scrapy.linkextractors import LinkExtractor

def parse(self, response):
    # an empty LinkExtractor() extracts every link; see the
    # documentation for the available filtering options
    le = LinkExtractor()
    for link in le.extract_links(response):
        yield Request(link.url, callback=self.parse)
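For reference, a complete runnable version might look like the sketch below; the spider name, domain, and start URL are placeholders, so point them at your own site:

import scrapy
from scrapy.linkextractors import LinkExtractor

class LinkSpider(scrapy.Spider):
    # placeholder names; swap in your own site
    name = 'links'
    allowed_domains = ['example.com']
    start_urls = ['http://example.com']

    def parse(self, response):
        for link in LinkExtractor().extract_links(response):
            # emit the URL as an item and keep crawling from it
            yield {'url': link.url}
            yield scrapy.Request(link.url, callback=self.parse)

Run it with scrapy runspider linkspider.py -o urls.csv and Scrapy writes the URL list for you; duplicate requests are filtered out by the built-in dupefilter.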