How to save a web page with its objects using wget?

A web page displayed in a browser consists of an HTML document plus a number of objects, such as CSS, JS, images, and so on. I want to save all of them to my hard disk using the wget command, so that I can later load the page from my local machine. Is that possible?

Note: I want a single page, not all pages of the website or anything like that.

Use the following command:

wget -E -k -p http://example.com

Details of the switches:

-E

If a file of type application/xhtml+xml or text/html is downloaded and the URL does not end with the regexp .[Hh][Tt][Mm][Ll]?, this option will cause the suffix .html to be appended to the local filename. This is useful, for instance, when you're mirroring a remote site that uses .asp pages, but you want the mirrored pages to be viewable on your stock Apache server. Another good use for this is when you're downloading CGI-generated materials. A URL like http://example.com/article.cgi?25 will be saved as article.cgi?25.html.

-k

After the download is complete, convert the links in the document to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.

-p

This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets.
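Putting the three switches together, here is a sketch of a complete invocation. The `-H` (`--span-hosts`) flag is an extra suggestion beyond the command above: it lets wget fetch page requisites hosted on other domains (e.g. a CDN), which many pages need to render correctly offline. `-K` (`--backup-converted`) keeps a `.orig` copy of each file before `-k` rewrites its links. The URL is a placeholder; the `|| echo` guard just keeps the script from aborting if there is no network.

```shell
#!/bin/sh
# Sketch: save ONE page for offline viewing (not a full site mirror).
#   -E  append .html to files served as text/html without that suffix
#   -H  also fetch requisites hosted on other domains (CDNs etc.)
#   -k  convert links in the saved HTML to point at the local copies
#   -K  keep the pre-conversion originals as *.orig
#   -p  download all page requisites (images, stylesheets, ...)
URL="http://example.com"   # placeholder URL -- substitute the page you want
wget -E -H -k -K -p "$URL" || echo "download failed (no network access?)"
```

Note that `-p` by itself does not recurse into other pages, so this matches the "one page only" requirement; it is `-r`/`-m` that would pull in the rest of the site.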