Download an entire website with wget (or something else), including all downloadable content



Winamp's website is shutting down and I want to download it before it goes. I need to download literally everything.

I tried once, and wget downloaded the website itself, but when I try to download any file from it, it gives me a file with no extension or name. How can I fix that?

Answers:



You may want to mirror the website completely, but be aware that some of the links may really be dead. You can use HTTrack or wget:

wget -r http://winapp.com # or whatever

To use HTTrack instead, first install it:

sudo apt-get install httrack

Now run it, going only one link deep into external sites:

httrack --ext-depth=1 http://winapp.com

That will download the winapp CDN files, but not files from the entire internet.
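For reference, a slightly fuller HTTrack invocation might look like the line below; the output directory name is only an example and winapp.com is the same placeholder as above:

httrack "http://winapp.com" -O ./winamp-mirror --ext-depth=1 -v

Here -O sets the directory the mirror is written to and -v prints progress to the terminal.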


wget -p -k http://somewebsite.com

From man wget:

-p
--page-requisites
   This option causes Wget to download all the files that are
   necessary to properly display a given HTML page.  This includes
   such things as inlined images, sounds, and referenced stylesheets.

   Ordinarily, when downloading a single HTML page, any requisite
   documents that may be needed to display it properly are not
   downloaded.  Using -r together with -l can help, but since Wget
   does not ordinarily distinguish between external and inlined
   documents, one is generally left with "leaf documents" that are
   missing their requisites.

   For instance, say document 1.html contains an "<IMG>" tag
   referencing 1.gif and an "<A>" tag pointing to external document
   2.html.  Say that 2.html is similar but that its image is 2.gif and
   it links to 3.html.  Say this continues up to some arbitrarily high
   number.

   If one executes the command:

           wget -r -l 2 http://<site>/1.html

   then 1.html, 1.gif, 2.html, 2.gif, and 3.html will be downloaded.
   As you can see, 3.html is without its requisite 3.gif because Wget
   is simply counting the number of hops (up to 2) away from 1.html in
   order to determine where to stop the recursion.  However, with this
   command:

           wget -r -l 2 -p http://<site>/1.html

   all the above files and 3.html's requisite 3.gif will be
   downloaded.  Similarly,

           wget -r -l 1 -p http://<site>/1.html

   will cause 1.html, 1.gif, 2.html, and 2.gif to be downloaded.  One
   might think that:

           wget -r -l 0 -p http://<site>/1.html

   would download just 1.html and 1.gif, but unfortunately this is not
   the case, because -l 0 is equivalent to -l inf---that is, infinite
   recursion.  To download a single HTML page (or a handful of them,
   all specified on the command-line or in a -i URL input file) and
   its (or their) requisites, simply leave off -r and -l:

           wget -p http://<site>/1.html

   Note that Wget will behave as if -r had been specified, but only
   that single page and its requisites will be downloaded.  Links from
   that page to external documents will not be followed.  Actually, to
   download a single page and all its requisites (even if they exist
   on separate websites), and make sure the lot displays properly
   locally, this author likes to use a few options in addition to -p:

          wget -E -H -k -K -p http://<site>/<document>

   To finish off this topic, it's worth knowing that Wget's idea of an
   external document link is any URL specified in an "<A>" tag, an
   "<AREA>" tag, or a "<LINK>" tag other than "<LINK
   REL="stylesheet">".

  ==================================================================

 -k
 --convert-links
   After the download is complete, convert the links in the document to make them suitable for local viewing.  This affects not only the visible hyperlinks, but any part of the document that
   links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.

   Each link will be changed in one of the two ways:

   ·   The links to files that have been downloaded by Wget will be changed to refer to the file they point to as a relative link.

       Example: if the downloaded file /foo/doc.html links to /bar/img.gif, also downloaded, then the link in doc.html will be modified to point to ../bar/img.gif.  This kind of transformation
       works reliably for arbitrary combinations of directories.

   ·   The links to files that have not been downloaded by Wget will be changed to include host name and absolute path of the location they point to.

       Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to ../bar/img.gif), then the link in doc.html will be modified to point to http://hostname/bar/img.gif.

   Because of this, local browsing works reliably: if a linked file was downloaded, the link will refer to its local name; if it was not downloaded, the link will refer to its full Internet
   address rather than presenting a broken link.  The fact that the former links are converted to relative links ensures that you can move the downloaded hierarchy to another directory.

   Note that only at the end of the download can Wget know which links have been downloaded.  Because of that, the work done by -k will be performed at the end of all the downloads.

  --convert-file-only
   This option converts only the filename part of the URLs, leaving the rest of the URLs untouched. This filename part is sometimes referred to as the "basename", although we avoid that term
   here in order not to cause confusion.

   It works particularly well in conjunction with --adjust-extension, although this coupling is not enforced. It proves useful to populate Internet caches with files downloaded from different
   hosts.

   Example: if some link points to //foo.com/bar.cgi?xyz with --adjust-extension asserted and its local destination is intended to be ./foo.com/bar.cgi?xyz.css, then the link would be converted
   to //foo.com/bar.cgi?xyz.css. Note that only the filename part has been modified. The rest of the URL has been left untouched, including the net path ("//") which would otherwise be
   processed by Wget and converted to the effective scheme (ie. "http://").

Sorry about the indentation :(
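Putting those options together, a command along the lines of the one below is one way to mirror a whole site for offline viewing; somewebsite.com is still a placeholder, and --adjust-extension (-E) is what makes wget save files with a usable extension, which sounds like the problem described in the question:

wget --mirror --page-requisites --convert-links --adjust-extension --no-parent http://somewebsite.com

--mirror is shorthand for -r -N -l inf --no-remove-listing, and --no-parent keeps the download from wandering above the starting directory.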



If you want to download everything associated with the link you have, you can try this:

wget -r -U "BrowserName" "Url"

You can also use --wait="duration" to avoid having your IP blocked. Requesting page after page with no wait period in between looks suspicious; that's not human.
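For example, a combined invocation might look like this (the user-agent string, wait time and URL are only placeholders):

wget -r -U "Mozilla/5.0" --wait=2 "http://somewebsite.com"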


Welcome to Ask Ubuntu! This answer would be greatly improved by fixing the grammar, or at least by approving the suggested edits.
anonymous2

You can also use wget -m instead of -r.
tricasse

To avoid further blocking, use --random-wait together with --wait=X.
Patrick

@Patrick would you like to post a full answer? Your comment sounds interesting.
WinEunuuchs2Unix