Download a working local copy of a webpage

bra*_*ahn 199 wget download offline-browsing

I would like to download a local copy of a web page and get all of its CSS, images, JavaScript, etc.

In previous discussions (for example, here and here, both of which are more than two years old), two suggestions usually come up: wget -p and httrack. However, both of these suggestions fail. I would very much appreciate help using either of these tools to accomplish the task; alternatives are also welcome.


Option 1: wget -p

wget -p successfully downloads all of the web page's prerequisites (css, images, js). However, when I load the local copy in a web browser, the page cannot load those prerequisites, because their paths have not been modified from the version on the web.

For example:

  • In the page's html, <link rel="stylesheet" href="/stylesheets/foo.css" /> needs to be corrected to point to the new relative path of foo.css
  • In the css file, background-image: url(/images/bar.png) similarly needs to be adjusted.

Is there a way to modify wget -p so that the paths are correct?
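
For reference, a minimal sketch of the kind of invocation described above (the URL here is just a placeholder):

# Fetches the page plus its requisites (CSS, images, JS), but leaves the
# links inside the saved HTML/CSS pointing at the original server paths,
# so the local copy does not render correctly on its own.
wget -p http://www.example.com/page.html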


Option 2: httrack

httrack seems like a great tool for mirroring entire websites, but it is unclear to me how to use it to create a local copy of a single page. There is a great deal of discussion on this topic in the httrack forums (for example, here), but no one seems to have a bullet-proof solution.


Option 3: another tool?

Some people have suggested paid tools, but I just can't believe there is no free solution out there.

Thanks very much!

ser*_*erk 251

wget is capable of doing what you want. Just try the following:

wget -p -k http://www.example.com/

The -p will get you all the required elements needed to view the site correctly (css, images, etc.). The -k will change all the links (including those to CSS and images) so that you can browse the page offline as it appeared online.

From the Wget docs:

‘-k’
‘--convert-links’
After the download is complete, convert the links in the document to make them
suitable for local viewing. This affects not only the visible hyperlinks, but
any part of the document that links to external content, such as embedded images,
links to style sheets, hyperlinks to non-html content, etc.

Each link will be changed in one of the two ways:

    The links to files that have been downloaded by Wget will be changed to refer
    to the file they point to as a relative link.

    Example: if the downloaded file /foo/doc.html links to /bar/img.gif, also
    downloaded, then the link in doc.html will be modified to point to
    ‘../bar/img.gif’. This kind of transformation works reliably for arbitrary
    combinations of directories.

    The links to files that have not been downloaded by Wget will be changed to
    include host name and absolute path of the location they point to.

    Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to
    ../bar/img.gif), then the link in doc.html will be modified to point to
    http://hostname/bar/img.gif. 

Because of this, local browsing works reliably: if a linked file was downloaded,
the link will refer to its local name; if it was not downloaded, the link will
refer to its full Internet address rather than presenting a broken link. The fact
that the former links are converted to relative links ensures that you can move
the downloaded hierarchy to another directory.

Note that only at the end of the download can Wget know which links have been
downloaded. Because of that, the work done by ‘-k’ will be performed at the end
of all the downloads. 

  • If you find you're still missing images etc., then try adding this: -e robots=off ..... wget actually reads and respects robots.txt - this really made it hard for me to figure out why nothing was working! (42 upvotes)
  • To fetch resources from external hosts, use `-H, --span-hosts` (22 upvotes)
  • If you use wget without a user agent, some servers will respond with a 403 code; you can add `-U 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4'` (see the combined sketch after these comments) (11 upvotes)
  • Entire website: http://snipplr.com/view/23838/downloading-an-entire-web-site-with-wget/ (3 upvotes)
  • I tried this, but somehow internal links like `index.html#link-to-element-on-same-page` stopped working. (2 upvotes)
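
Putting the answer and the comments above together, a combined invocation might look something like the following (a sketch only; the URL and the user-agent string are placeholders, and whether you actually need -H or -e robots=off depends on the site):

# -p              fetch page requisites (CSS, images, JS)
# -k              rewrite links so the saved page works offline
# -e robots=off   ignore robots.txt rules that would skip some requisites
# -H              also fetch requisites served from other hosts
# -U '...'        send a browser-like user agent to avoid 403 responses
wget -p -k -e robots=off -H \
     -U 'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.1.6) Gecko/20070802 SeaMonkey/1.1.4' \
     http://www.example.com/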