Recursive download with wget

xra*_*alf 35 wget

I have a problem with the following wget command:

wget -nd -r -l 10 http://web.archive.org/web/20110726051510/http://feedparser.org/docs/

It should recursively download all of the linked documents on the original web, but it only downloads two files (index.html and robots.txt).

How can I make wget download this site recursively?

Ulr*_*arz 44

By default wget honors the robots.txt standard when crawling pages, just as search engines do, and for archive.org that file disallows the entire /web/ subdirectory. To override it, use -e robots=off:

wget -nd -r -l 10 -e robots=off http://web.archive.org/web/20110726051510/http://feedparser.org/docs/
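If you want this override to apply to every run without typing the flag each time, the same setting can live in wget's startup file. A minimal sketch, assuming a per-user ~/.wgetrc:

# Persist the override: wget reads ~/.wgetrc on startup and will then
# ignore robots.txt on all future runs (per-user config assumed)
echo 'robots = off' >> ~/.wgetrc

With that in place, the original command from the question should follow links past robots.txt without any extra flags.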


Nik*_*ley 16

$ wget --random-wait -r -p -e robots=off -U Mozilla \
    http://web.archive.org/web/20110726051510/http://feedparser.org/docs/

This recursively downloads the contents of the URL.

--random-wait - vary the pause between requests randomly (between 0.5 and 1.5 times the --wait value) so the access pattern looks less like an automated crawler.
-r - turn on recursive retrieving.
-e robots=off - ignore robots.txt.
-U Mozilla - set the "User-Agent" header to "Mozilla". A better choice is a real User-Agent string such as "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729)", quoted on the command line as shown below.
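Putting those flags together, with the longer User-Agent string quoted so the shell passes it as a single value (a sketch only; the UA string is just the example quoted above and can be swapped for any real browser UA):

wget --random-wait -r -e robots=off \
    -U "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729)" \
    http://web.archive.org/web/20110726051510/http://feedparser.org/docs/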

Some other useful options (combined in the sketch after this list) are:

--limit-rate=20k - limit the download speed to 20 KB/s (kilobytes per second).
-o logfile.txt - log the downloads.
-l 0 - remove the recursion depth limit (the default is 5).
--wait=1h - be sneaky, download one file every hour.
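A gentler long-running mirror might combine these with the earlier flags, for example (a sketch; logfile.txt is just an illustrative name for the log file):

wget -r -l 0 -e robots=off --wait=1h --limit-rate=20k -o logfile.txt \
    http://web.archive.org/web/20110726051510/http://feedparser.org/docs/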
