This is wget.info, produced by makeinfo version 4.3 from ./wget.texi.
INFO-DIR-SECTION Network Applications
START-INFO-DIR-ENTRY
* Wget: (wget). The non-interactive network downloader.
END-INFO-DIR-ENTRY
This file documents the GNU Wget utility for downloading network
data.
Copyright (C) 1996, 1997, 1998, 2000, 2001, 2002, 2003 Free Software
Foundation, Inc.
Permission is granted to make and distribute verbatim copies of this
manual provided the copyright notice and this permission notice are
preserved on all copies.
Permission is granted to copy, distribute and/or modify this document
under the terms of the GNU Free Documentation License, Version 1.1 or
any later version published by the Free Software Foundation; with the
Invariant Sections being "GNU General Public License" and "GNU Free
Documentation License", with no Front-Cover Texts, and with no
Back-Cover Texts. A copy of the license is included in the section
entitled "GNU Free Documentation License".
File: wget.info, Node: Recursive Retrieval Options, Next: Recursive Accept/Reject Options, Prev: FTP Options, Up: Invoking
Recursive Retrieval Options
===========================
`-r'
`--recursive'
Turn on recursive retrieving. *Note Recursive Retrieval::, for
more details.
`-l DEPTH'
`--level=DEPTH'
Specify recursion maximum depth level DEPTH (*note Recursive
Retrieval::). The default maximum depth is 5.
`--delete-after'
This option tells Wget to delete every single file it downloads,
_after_ having done so. It is useful for pre-fetching popular
pages through a proxy, e.g.:
wget -r -nd --delete-after http://whatever.com/~popular/page/
The `-r' option is to retrieve recursively, and `-nd' to not
create directories.
Note that `--delete-after' deletes files on the local machine. It
does not issue the `DELE' command to remote FTP sites, for
instance. Also note that when `--delete-after' is specified,
`--convert-links' is ignored, so `.orig' files are simply not
created in the first place.
`-k'
`--convert-links'
After the download is complete, convert the links in the document
to make them suitable for local viewing. This affects not only
the visible hyperlinks, but any part of the document that links to
external content, such as embedded images, links to style sheets,
hyperlinks to non-HTML content, etc.
Each link will be changed in one of two ways:
* The links to files that have been downloaded by Wget will be
changed to refer to the file they point to as a relative link.
Example: if the downloaded file `/foo/doc.html' links to
`/bar/img.gif', also downloaded, then the link in `doc.html'
will be modified to point to `../bar/img.gif'. This kind of
transformation works reliably for arbitrary combinations of
directories.
* The links to files that have not been downloaded by Wget will
be changed to include host name and absolute path of the
location they point to.
Example: if the downloaded file `/foo/doc.html' links to
`/bar/img.gif' (or to `../bar/img.gif'), then the link in
`doc.html' will be modified to point to
`http://HOSTNAME/bar/img.gif'.
Because of this, local browsing works reliably: if a linked file
was downloaded, the link will refer to its local name; if it was
not downloaded, the link will refer to its full Internet address
rather than presenting a broken link. The fact that the former
links are converted to relative links ensures that you can move
the downloaded hierarchy to another directory.
Note that only at the end of the download can Wget know which
links have been downloaded. Because of that, the work done by
`-k' will be performed at the end of all the downloads.
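For instance, a typical invocation combining recursion with link
conversion might look like this (the URL is only a placeholder):
wget -r -k http://SITE/
After such a run, links to downloaded files will have been rewritten to
relative paths such as `../bar/img.gif', while links to documents that
were not downloaded will have been expanded to full
`http://HOSTNAME/...' addresses.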
`-K'
`--backup-converted'
When converting a file, back up the original version with a `.orig'
suffix. Affects the behavior of `-N' (*note HTTP Time-Stamping
Internals::).
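For example, when periodically re-mirroring a site whose links are
being converted, one might use (the URL is only a placeholder):
wget -r -N -k -K http://SITE/
With `-K', the untouched `.orig' copies are what later runs compare
against the server, so files are not re-downloaded merely because `-k'
rewrote their links.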
`-m'
`--mirror'
Turn on options suitable for mirroring. This option turns on
recursion and time-stamping, sets infinite recursion depth and
keeps FTP directory listings. It is currently equivalent to `-r
-N -l inf -nr'.
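In other words, these two commands currently behave the same (the URL
is only illustrative):
wget -m http://SITE/
wget -r -N -l inf -nr http://SITE/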
`-p'
`--page-requisites'
This option causes Wget to download all the files that are
necessary to properly display a given HTML page. This includes
such things as inlined images, sounds, and referenced stylesheets.
Ordinarily, when downloading a single HTML page, any requisite
documents that may be needed to display it properly are not
downloaded. Using `-r' together with `-l' can help, but since
Wget does not ordinarily distinguish between external and inlined
documents, one is generally left with "leaf documents" that are
missing their requisites.
For instance, say document `1.html' contains an `<IMG>' tag
referencing `1.gif' and an `<A>' tag pointing to external document
`2.html'. Say that `2.html' is similar but that its image is
`2.gif' and it links to `3.html'. Say this continues up to some
arbitrarily high number.
If one executes the command:
wget -r -l 2 http://SITE/1.html
then `1.html', `1.gif', `2.html', `2.gif', and `3.html' will be
downloaded. As you can see, `3.html' is without its requisite
`3.gif' because Wget is simply counting the number of hops (up to
2) away from `1.html' in order to determine where to stop the
recursion. However, with this command:
wget -r -l 2 -p http://SITE/1.html
all the above files _and_ `3.html''s requisite `3.gif' will be
downloaded. Similarly,
wget -r -l 1 -p http://SITE/1.html
will cause `1.html', `1.gif', `2.html', and `2.gif' to be
downloaded. One might think that:
wget -r -l 0 -p http://SITE/1.html
would download just `1.html' and `1.gif', but unfortunately this
is not the case, because `-l 0' is equivalent to `-l inf'--that
is, infinite recursion. To download a single HTML page (or a
handful of them, all specified on the command-line or in a `-i'
URL input file) and its (or their) requisites, simply leave off
`-r' and `-l':
wget -p http://SITE/1.html
Note that Wget will behave as if `-r' had been specified, but only
that single page and its requisites will be downloaded. Links
from that page to external documents will not be followed.
Actually, to download a single page and all its requisites (even
if they exist on separate websites), and make sure the lot
displays properly locally, this author likes to use a few options
in addition to `-p':
wget -E -H -k -K -p http://SITE/DOCUMENT
To finish off this topic, it's worth knowing that Wget's idea of an
external document link is any URL specified in an `<A>' tag, an
`<AREA>' tag, or a `<LINK>' tag other than `<LINK REL="stylesheet">'.
`--strict-comments'
Turn on strict parsing of HTML comments. The default is to
terminate comments at the first occurrence of `-->'.
According to specifications, HTML comments are expressed as SGML
"declarations".  A declaration is special markup that begins with
`<!' and ends with `>', such as `<!DOCTYPE ...>', that may contain
comments between a pair of `--' delimiters.  HTML comments are
"empty declarations", SGML declarations without any non-comment
text.  Therefore, `<!--foo-->' is a valid comment, and so is
`<!--one-- --two-->', but `<!--1--2-->' is not.
On the other hand, most HTML writers don't perceive comments as
anything other than text delimited with `<!--' and `-->', which is
not quite the same.  For example, something like `<!------------>'
works as a valid comment as long as the number of dashes is a
multiple of four (!). If not, the comment technically lasts until
the next `--', which may be at the other end of the document.
Because of this, many popular browsers completely ignore the
specification and implement what users have come to expect:
comments delimited with `<!--' and `-->'.
Until version 1.9, Wget interpreted comments strictly, which
resulted in missing links in many web pages that displayed fine in
browsers, but had the misfortune of containing non-compliant
comments. Beginning with version 1.9, Wget has joined the ranks
of clients that implement "naive" comments, terminating each
comment at the first occurrence of `-->'.
If, for whatever reason, you want strict comment parsing, use this
option to turn it on.
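For example, assuming a site whose comments are known to be
SGML-compliant, strict parsing could be requested like this (the URL
is only a placeholder):
wget --strict-comments -r http://SITE/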
File: wget.info, Node: Recursive Accept/Reject Options, Prev: Recursive Retrieval Options, Up: Invoking
Recursive Accept/Reject Options
===============================
`-A ACCLIST --accept ACCLIST'
`-R REJLIST --reject REJLIST'
Specify comma-separated lists of file name suffixes or patterns to
accept or reject (*note Types of Files:: for more details).
`-D DOMAIN-LIST'
`--domains=DOMAIN-LIST'
Set domains to be followed. DOMAIN-LIST is a comma-separated list
of domains. Note that it does _not_ turn on `-H'.
`--exclude-domains DOMAIN-LIST'
Specify the domains that are _not_ to be followed. (*note
Spanning Hosts::).
`--follow-ftp'
Follow FTP links from HTML documents. Without this option, Wget
will ignore all the FTP links.
`--follow-tags=LIST'
Wget has an internal table of HTML tag / attribute pairs that it
considers when looking for linked documents during a recursive
retrieval. If a user wants only a subset of those tags to be
considered, however, he or she should specify such tags in a
comma-separated LIST with this option.
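For example, to follow only ordinary hyperlinks (`<A>' and `<AREA>'
tags) during a recursive retrieval, one might use (the URL is only a
placeholder):
wget -r --follow-tags=a,area http://SITE/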
`-G LIST'
`--ignore-tags=LIST'
This is the opposite of the `--follow-tags' option. To skip
certain HTML tags when recursively looking for documents to
download, specify them in a comma-separated LIST.
In the past, the `-G' option was the best bet for downloading a
single page and its requisites, using a command-line like:
wget -Ga,area -H -k -K -r http://SITE/DOCUMENT
However, the author of this option came across a page with tags
like `<LINK REL="home" HREF="/">' and came to the realization that
`-G' was not enough.  One can't just tell Wget to ignore `<LINK>',
because then stylesheets will not be downloaded. Now the best bet
for downloading a single page and its requisites is the dedicated
`--page-requisites' option.
`-H'
`--span-hosts'
Enable spanning across hosts when doing recursive retrieving
(*note Spanning Hosts::).
`-L'
`--relative'
Follow relative links only. Useful for retrieving a specific home
page without any distractions, not even those from the same hosts
(*note Relative Links::).
`-I LIST'
`--include-directories=LIST'
Specify a comma-separated list of directories you wish to follow
when downloading (*note Directory-Based Limits:: for more
details.) Elements of LIST may contain wildcards.
`-X LIST'
`--exclude-directories=LIST'
Specify a comma-separated list of directories you wish to exclude
from download (*note Directory-Based Limits:: for more details.)
Elements of LIST may contain wildcards.
`-np'
`--no-parent'
Do not ever ascend to the parent directory when retrieving
recursively. This is a useful option, since it guarantees that
only the files _below_ a certain hierarchy will be downloaded.
*Note Directory-Based Limits::, for more details.
File: wget.info, Node: Recursive Retrieval, Next: Following Links, Prev: Invoking, Up: Top
Recursive Retrieval
*******************
GNU Wget is capable of traversing parts of the Web (or a single HTTP
or FTP server), following links and directory structure. We refer to
this as "recursive retrieval", or "recursion".
With HTTP URLs, Wget retrieves and parses the HTML document at the
given URL, retrieving the files that document refers to through markup
such as `href' or `src'.  If the freshly downloaded
file is also of type `text/html' or `application/xhtml+xml', it will be
parsed and followed further.
Recursive retrieval of HTTP and HTML content is "breadth-first".
This means that Wget first downloads the requested HTML document, then
the documents linked from that document, then the documents linked by
them, and so on. In other words, Wget first downloads the documents at
depth 1, then those at depth 2, and so on until the specified maximum
depth.
The maximum "depth" to which the retrieval may descend is specified
with the `-l' option. The default maximum depth is five layers.
When retrieving an FTP URL recursively, Wget will retrieve all the
data from the given directory tree (including the subdirectories up to
the specified depth) on the remote server, creating its mirror image
locally. FTP retrieval is also limited by the `depth' parameter.
Unlike HTTP recursion, FTP recursion is performed depth-first.
By default, Wget will create a local directory tree, corresponding to
the one found on the remote server.
Recursive retrieval has a number of applications, the most
important of which is mirroring.  It is also useful for WWW
presentations, and for any other situation where a slow network
connection can be bypassed by storing the files locally.
You should be warned that recursive downloads can overload the remote
servers. Because of that, many administrators frown upon them and may
ban access from your site if they detect very fast downloads of big
amounts of content. When downloading from Internet servers, consider
using the `-w' option to introduce a delay between accesses to the
server. The download will take a while longer, but the server
administrator will not be alarmed by your rudeness.
Of course, recursive download may cause problems on your machine. If
left to run unchecked, it can easily fill up the disk. If downloading
from the local network, it can also consume bandwidth on the system,
as well as memory and CPU.
Try to specify the criteria that match the kind of download you are
trying to achieve. If you want to download only one page, use
`--page-requisites' without any additional recursion. If you want to
download things under one directory, use `-np' to avoid downloading
things from other directories. If you want to download all the files
from one directory, use `-l 1' to make sure the recursion depth never
exceeds one. *Note Following Links::, for more information about this.
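Putting that advice together, a cautious recursive download of a
single directory might look something like this (the URL, depth, and
two-second wait are only illustrative):
wget -r -np -l 1 -w 2 http://SITE/dir/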
Recursive retrieval should be used with care. Don't say you were not
warned.
File: wget.info, Node: Following Links, Next: Time-Stamping, Prev: Recursive Retrieval, Up: Top
Following Links
***************
When retrieving recursively, one does not wish to retrieve loads of
unnecessary data. Most of the time the users bear in mind exactly what
they want to download, and want Wget to follow only specific links.
For example, if you wish to download the music archive from
`fly.srk.fer.hr', you will not want to download all the home pages that
happen to be referenced by an obscure part of the archive.
Wget possesses several mechanisms that allow you to fine-tune which
links it will follow.
* Menu:
* Spanning Hosts:: (Un)limiting retrieval based on host name.
* Types of Files:: Getting only certain files.
* Directory-Based Limits:: Getting only certain directories.
* Relative Links:: Follow relative links only.
* FTP Links:: Following FTP links.
File: wget.info, Node: Spanning Hosts, Next: Types of Files, Prev: Following Links, Up: Following Links
Spanning Hosts
==============
Wget's recursive retrieval normally refuses to visit hosts different
than the one you specified on the command line. This is a reasonable
default; without it, every retrieval would have the potential to turn
your Wget into a small version of Google.
However, visiting different hosts, or "host spanning," is sometimes
a useful option. Maybe the images are served from a different server.
Maybe you're mirroring a site that consists of pages interlinked between
three servers. Maybe the server has two equivalent names, and the HTML
pages refer to both interchangeably.
Span to any host--`-H'
The `-H' option turns on host spanning, thus allowing Wget's
recursive run to visit any host referenced by a link. Unless
sufficient recursion-limiting criteria are applied (maximum depth
being one of them), these foreign hosts will typically link to yet
more hosts, and so on
until Wget ends up sucking up much more data than you have
intended.
Limit spanning to certain domains--`-D'
The `-D' option allows you to specify the domains that will be
followed, thus limiting the recursion only to the hosts that
belong to these domains. Obviously, this makes sense only in
conjunction with `-H'. A typical example would be downloading the
contents of `www.server.com', but allowing downloads from
`images.server.com', etc.:
wget -rH -Dserver.com http://www.server.com/
You can specify more than one address by separating them with a
comma, e.g. `-Ddomain1.com,domain2.com'.
Keep download off certain domains--`--exclude-domains'
If there are domains you want to exclude specifically, you can do
it with `--exclude-domains', which accepts the same type of
arguments as `-D', but will _exclude_ all the listed domains. For
example, if you want to download all the hosts from `foo.edu'
domain, with the exception of `sunsite.foo.edu', you can do it like
this:
wget -rH -Dfoo.edu --exclude-domains sunsite.foo.edu \
http://www.foo.edu/
File: wget.info, Node: Types of Files, Next: Directory-Based Limits, Prev: Spanning Hosts, Up: Following Links
Types of Files
==============
When downloading material from the web, you will often want to
restrict the retrieval to only certain file types. For example, if you
are interested in downloading GIFs, you will not be overjoyed to get
loads of PostScript documents, and vice versa.
Wget offers two options to deal with this problem. Each option
description lists a short name, a long name, and the equivalent command
in `.wgetrc'.
`-A ACCLIST'
`--accept ACCLIST'
`accept = ACCLIST'
The argument to `--accept' option is a list of file suffixes or
patterns that Wget will download during recursive retrieval. A
suffix is the ending part of a file name, and consists of "normal"
letters, e.g. `gif' or `.jpg'. A matching pattern contains
shell-like wildcards, e.g. `books*' or `zelazny*196[0-9]*'.
So, specifying `wget -A gif,jpg' will make Wget download only the
files ending with `gif' or `jpg', i.e. GIFs and JPEGs. On the
other hand, `wget -A "zelazny*196[0-9]*"' will download only files
beginning with `zelazny' and containing numbers from 1960 to 1969
anywhere within. Look up the manual of your shell for a
description of how pattern matching works.
Of course, any number of suffixes and patterns can be combined
into a comma-separated list, and given as an argument to `-A'.
`-R REJLIST'
`--reject REJLIST'
`reject = REJLIST'
The `--reject' option works the same way as `--accept', only its
logic is the reverse; Wget will download all files _except_ the
ones matching the suffixes (or patterns) in the list.
So, if you want to download a whole page except for the cumbersome
MPEGs and .AU files, you can use `wget -R mpg,mpeg,au'.
Analogously, to download all files except the ones beginning with
`bjork', use `wget -R "bjork*"'. The quotes are to prevent
expansion by the shell.
The `-A' and `-R' options may be combined to achieve even better
fine-tuning of which files to retrieve. E.g. `wget -A "*zelazny*" -R
.ps' will download all the files having `zelazny' as a part of their
name, but _not_ the PostScript files.
Note that these two options do not affect the downloading of HTML
files; Wget must load all the HTMLs to know where to go at
all--recursive retrieval would make no sense otherwise.
File: wget.info, Node: Directory-Based Limits, Next: Relative Links, Prev: Types of Files, Up: Following Links
Directory-Based Limits
======================
Regardless of other link-following facilities, it is often useful to
restrict the files retrieved based on the directories
those files are placed in. There can be many reasons for this--the
home pages may be organized in a reasonable directory structure; or some
directories may contain useless information, e.g. `/cgi-bin' or `/dev'
directories.
Wget offers three different options to deal with this requirement.
Each option description lists a short name, a long name, and the
equivalent command in `.wgetrc'.
`-I LIST'
`--include LIST'
`include_directories = LIST'
The `-I' option accepts a comma-separated list of directories included
in the retrieval. Any other directories will simply be ignored.
The directories are absolute paths.
So, if you wish to download from `http://host/people/bozo/'
following only links to bozo's colleagues in the `/people'
directory and the bogus scripts in `/cgi-bin', you can specify:
wget -I /people,/cgi-bin http://host/people/bozo/
`-X LIST'
`--exclude LIST'
`exclude_directories = LIST'
The `-X' option is exactly the reverse of `-I'--this is a list of
directories _excluded_ from the download. E.g. if you do not want
Wget to download things from `/cgi-bin' directory, specify `-X
/cgi-bin' on the command line.
As with `-A'/`-R', these two options can be combined to
get a better fine-tuning of downloading subdirectories. E.g. if
you want to load all the files from `/pub' hierarchy except for
`/pub/worthless', specify `-I/pub -X/pub/worthless'.
`-np'
`--no-parent'
`no_parent = on'
The simplest, and often very useful way of limiting directories is
disallowing retrieval of the links that refer to the hierarchy
"above" the beginning directory, i.e. disallowing ascent to
the parent directory/directories.
The `--no-parent' option (short `-np') is useful in this case.
Using it guarantees that you will never leave the existing
hierarchy. Supposing you issue Wget with:
wget -r --no-parent http://somehost/~luzer/my-archive/
You may rest assured that none of the references to
`/~his-girls-homepage/' or `/~luzer/all-my-mpegs/' will be
followed. Only the archive you are interested in will be
downloaded. Essentially, `--no-parent' is similar to
`-I/~luzer/my-archive', only it handles redirections in a more
intelligent fashion.
File: wget.info, Node: Relative Links, Next: FTP Links, Prev: Directory-Based Limits, Up: Following Links
Relative Links
==============
When `-L' is turned on, only the relative links are ever followed.
Relative links are here defined as those that do not refer to the web
server root. For example, these links are relative:
<a href="foo.gif">
<a href="foo/bar.gif">
<a href="../foo/bar.gif">
These links are not relative:
<a href="/foo.gif">
<a href="/foo/bar.gif">
<a href="http://www.server.com/foo/bar.gif">
Using this option guarantees that recursive retrieval will not span
hosts, even without `-H'. In simple cases it also allows downloads to
"just work" without having to convert links.
This option is probably not very useful and might be removed in a
future release.
File: wget.info, Node: FTP Links, Prev: Relative Links, Up: Following Links
Following FTP Links
===================
The rules for FTP are somewhat specific, as it is necessary for them
to be. FTP links in HTML documents are often included for purposes of
reference, and it is often inconvenient to download them by default.
To have FTP links followed from HTML documents, you need to specify
the `--follow-ftp' option. Having done that, FTP links will span hosts
regardless of `-H' setting. This is logical, as FTP links rarely point
to the same host where the HTTP server resides. For similar reasons,
the `-L' option has no effect on such downloads. On the other hand,
domain acceptance (`-D') and suffix rules (`-A' and `-R') apply
normally.
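For example, to download a page along with the FTP files it links to
(the URL is only a placeholder):
wget -r -l 1 --follow-ftp http://SITE/download.html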
Also note that followed links to FTP directories will not be
retrieved recursively further.
File: wget.info, Node: Time-Stamping, Next: Startup File, Prev: Following Links, Up: Top
Time-Stamping
*************
One of the most important aspects of mirroring information from the
Internet is updating your archives.
Downloading the whole archive again and again, just to replace a few
changed files is expensive, both in terms of wasted bandwidth and money,
and the time to do the update. This is why all the mirroring tools
offer the option of incremental updating.
Such an updating mechanism means that the remote server is scanned in
search of "new" files. Only those new files will be downloaded in the
place of the old ones.
A file is considered new if one of these two conditions is met:
1. A file of that name does not already exist locally.
2. A file of that name does exist, but the remote file was modified
more recently than the local file.
To implement this, the program needs to be aware of the time of last
modification of both local and remote files. We call this information
the "time-stamp" of a file.
The time-stamping in GNU Wget is turned on using `--timestamping'
(`-N') option, or through `timestamping = on' directive in `.wgetrc'.
With this option, for each file it intends to download, Wget will check
whether a local file of the same name exists. If it does, and the
remote file is not newer, Wget will not download it.
If the local file does not exist, or the sizes of the files do not
match, Wget will download the remote file no matter what the time-stamps
say.
* Menu:
* Time-Stamping Usage::
* HTTP Time-Stamping Internals::
* FTP Time-Stamping Internals::
File: wget.info, Node: Time-Stamping Usage, Next: HTTP Time-Stamping Internals, Prev: Time-Stamping, Up: Time-Stamping
Time-Stamping Usage
===================
The usage of time-stamping is simple. Say you would like to
download a file so that it keeps its date of modification.
wget -S http://www.gnu.ai.mit.edu/
A simple `ls -l' shows that the time stamp on the local file matches
the `Last-Modified' header returned by the server. As
you can see, the time-stamping info is preserved locally, even without
`-N' (at least for HTTP).
Several days later, you would like Wget to check if the remote file
has changed, and download it if it has.
wget -N http://www.gnu.ai.mit.edu/
Wget will ask the server for the last-modified date. If the local
file has the same timestamp as the server, or a newer one, the remote
file will not be re-fetched. However, if the remote file is more
recent, Wget will proceed to fetch it.
The same goes for FTP. For example:
wget "ftp://ftp.ifi.uio.no/pub/emacs/gnus/*"
(The quotes around that URL are to prevent the shell from trying to
interpret the `*'.)
After download, a local directory listing will show that the
timestamps match those on the remote server. Reissuing the command
with `-N' will make Wget re-fetch _only_ the files that have been
modified since the last download.
If you wished to mirror the GNU archive every week, you would run a
command like the following once a week:
wget --timestamping -r ftp://ftp.gnu.org/pub/gnu/
Note that time-stamping will only work for files for which the server
gives a timestamp. For HTTP, this depends on getting a `Last-Modified'
header. For FTP, this depends on getting a directory listing with
dates in a format that Wget can parse (*note FTP Time-Stamping
Internals::).
File: wget.info, Node: HTTP Time-Stamping Internals, Next: FTP Time-Stamping Internals, Prev: Time-Stamping Usage, Up: Time-Stamping
HTTP Time-Stamping Internals
============================
Time-stamping in HTTP is implemented by checking the
`Last-Modified' header. If you wish to retrieve the file `foo.html'
through HTTP, Wget will check whether `foo.html' exists locally. If it
doesn't, `foo.html' will be retrieved unconditionally.
If the file does exist locally, Wget will first check its local
time-stamp (similar to the way `ls -l' checks it), and then send a
`HEAD' request to the remote server, demanding the information on the
remote file.
The `Last-Modified' header is examined to find which file was
modified more recently (which makes it "newer"). If the remote file is
newer, it will be downloaded; if it is older, Wget will give up.(1)
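Roughly, the exchange looks like this sketch (host, date, and size are
invented for illustration):
HEAD /foo.html HTTP/1.0
Host: www.example.com

HTTP/1.0 200 OK
Content-Length: 10452
Last-Modified: Sat, 15 Feb 2003 12:30:00 GMT
The `Last-Modified' value (and the `Content-Length' mentioned in the
footnote) is then compared against the local file.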
When `--backup-converted' (`-K') is specified in conjunction with
`-N', server file `X' is compared to local file `X.orig', if extant,
rather than being compared to local file `X', which will always differ
if it's been converted by `--convert-links' (`-k').
Arguably, HTTP time-stamping should be implemented using the
`If-Modified-Since' request.
---------- Footnotes ----------
(1) As an additional check, Wget will look at the `Content-Length'
header, and compare the sizes; if they are not the same, the remote
file will be downloaded no matter what the time-stamp says.
File: wget.info, Node: FTP Time-Stamping Internals, Prev: HTTP Time-Stamping Internals, Up: Time-Stamping
FTP Time-Stamping Internals
===========================
In theory, FTP time-stamping works much the same as HTTP, only FTP
has no headers--time-stamps must be ferreted out of directory listings.
If an FTP download is recursive or uses globbing, Wget will use the
FTP `LIST' command to get a file listing for the directory containing
the desired file(s). It will try to analyze the listing, treating it
like Unix `ls -l' output, extracting the time-stamps. The rest is
exactly the same as for HTTP. Note that when retrieving individual
files from an FTP server without using globbing or recursion, listing
files will not be downloaded (and thus files will not be time-stamped)
unless `-N' is specified.
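A typical line Wget has to parse from such a listing looks roughly
like this (name, size, and date are invented for illustration):
-rw-r--r--   1 ftp      ftp        214871 Oct 19  2002 lunar-1.0.tar.gz
The date fields (`Oct 19  2002' here) supply the remote time-stamp.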
The assumption that every directory listing is a Unix-style listing may
sound extremely constraining, but in practice it is not, as many
non-Unix FTP servers use the Unixoid listing format because most (all?)
of the clients understand it. Bear in mind that RFC959 defines no
standard way to get a file list, let alone the time-stamps. We can
only hope that a future standard will define this.
Another non-standard solution includes the use of the `MDTM' command
that is supported by some FTP servers (including the popular
`wu-ftpd'), which returns the exact time of the specified file. Wget
may support this command in the future.
File: wget.info, Node: Startup File, Next: Examples, Prev: Time-Stamping, Up: Top
Startup File
************
Once you know how to change default settings of Wget through command
line arguments, you may wish to make some of those settings permanent.
You can do that in a convenient way by creating the Wget startup
file--`.wgetrc'.
Besides `.wgetrc' being the "main" initialization file, it is
convenient to have a special facility for storing passwords. Thus Wget
reads and interprets the contents of `$HOME/.netrc', if it finds it.
You can find the `.netrc' format described in your system manuals.
Wget reads `.wgetrc' upon startup, recognizing a limited set of
commands.
* Menu:
* Wgetrc Location:: Location of various wgetrc files.
* Wgetrc Syntax:: Syntax of wgetrc.
* Wgetrc Commands:: List of available commands.
* Sample Wgetrc:: A wgetrc example.
File: wget.info, Node: Wgetrc Location, Next: Wgetrc Syntax, Prev: Startup File, Up: Startup File
Wgetrc Location
===============
When initializing, Wget will look for a "global" startup file,
`/usr/local/etc/wgetrc' by default (or some prefix other than
`/usr/local', if Wget was not installed there) and read commands from
there, if it exists.
Then it will look for the user's file. If the environment variable
`WGETRC' is set, Wget will try to load that file. Failing that, no
further attempts will be made.
If `WGETRC' is not set, Wget will try to load `$HOME/.wgetrc'.
The fact that the user's settings are loaded after the system-wide
ones means that in case of collision the user's wgetrc _overrides_ the
system-wide wgetrc (in `/usr/local/etc/wgetrc' by default). Fascist
admins, away!
File: wget.info, Node: Wgetrc Syntax, Next: Wgetrc Commands, Prev: Wgetrc Location, Up: Startup File
Wgetrc Syntax
=============
The syntax of a wgetrc command is simple:
variable = value
The "variable" will also be called "command". Valid "values" are
different for different commands.
The commands are case-insensitive and underscore-insensitive. Thus
`DIr__PrefiX' is the same as `dirprefix'. Empty lines, lines beginning
with `#' and lines containing white-space only are discarded.
Commands that expect a comma-separated list will clear the list on an
empty command. So, if you wish to reset the rejection list specified in
global `wgetrc', you can do it with:
reject =
File: wget.info, Node: Wgetrc Commands, Next: Sample Wgetrc, Prev: Wgetrc Syntax, Up: Startup File
Wgetrc Commands
===============
The complete set of commands is listed below. Legal values are
listed after the `='. Simple Boolean values can be set or unset using
`on' and `off' or `1' and `0'. A fancier kind of Boolean allowed in
some cases is the "lockable Boolean", which may be set to `on', `off',
`always', or `never'. If an option is set to `always' or `never', that
value will be locked in for the duration of the Wget
invocation--command-line options will not override.
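For instance, with the following line in a wgetrc file (see the
`passive_ftp' entry below), passive FTP stays off even if
`--passive-ftp' is later given on the command line:
passive_ftp = never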
Some commands take pseudo-arbitrary values. ADDRESS values can be
hostnames or dotted-quad IP addresses. N can be any positive integer,
or `inf' for infinity, where appropriate. STRING values can be any
non-empty string.
Most of these commands have command-line equivalents (*note
Invoking::), though some of the more obscure or rarely used ones do not.
accept/reject = STRING
Same as `-A'/`-R' (*note Types of Files::).
add_hostdir = on/off
Enable/disable host-prefixed file names. `-nH' disables it.
continue = on/off
If set to on, force continuation of preexistent partially retrieved
files. See `-c' before setting it.
background = on/off
Enable/disable going to background--the same as `-b' (which
enables it).
backup_converted = on/off
Enable/disable saving pre-converted files with the suffix
`.orig'--the same as `-K' (which enables it).
base = STRING
Consider relative URLs in URL input files forced to be interpreted
as HTML as being relative to STRING--the same as `-B'.
bind_address = ADDRESS
Bind to ADDRESS, like the `--bind-address' option.
cache = on/off
When set to off, disallow server-caching. See the `-C' option.
convert_links = on/off
Convert non-relative links locally. The same as `-k'.
cookies = on/off
When set to off, disallow cookies. See the `--cookies' option.
load_cookies = FILE
Load cookies from FILE. See `--load-cookies'.
save_cookies = FILE
Save cookies to FILE. See `--save-cookies'.
connect_timeout = N
Set the connect timeout--the same as `--connect-timeout'.
cut_dirs = N
Ignore N remote directory components.
debug = on/off
Debug mode, same as `-d'.
delete_after = on/off
Delete after download--the same as `--delete-after'.
dir_prefix = STRING
Top of directory tree--the same as `-P'.
dirstruct = on/off
Turning dirstruct on or off--the same as `-x' or `-nd',
respectively.
dns_cache = on/off
Turn DNS caching on/off. Since DNS caching is on by default, this
option is normally used to turn it off. Same as `--dns-cache'.
dns_timeout = N
Set the DNS timeout--the same as `--dns-timeout'.
domains = STRING
Same as `-D' (*note Spanning Hosts::).
dot_bytes = N
Specify the number of bytes "contained" in a dot, as seen
throughout the retrieval (1024 by default). You can postfix the
value with `k' or `m', representing kilobytes and megabytes,
respectively. With dot settings you can tailor the dot retrieval
to suit your needs, or you can use the predefined "styles" (*note
Download Options::).
dots_in_line = N
Specify the number of dots that will be printed in each line
throughout the retrieval (50 by default).
dot_spacing = N
Specify the number of dots in a single cluster (10 by default).
exclude_directories = STRING
Specify a comma-separated list of directories you wish to exclude
from download--the same as `-X' (*note Directory-Based Limits::).
exclude_domains = STRING
Same as `--exclude-domains' (*note Spanning Hosts::).
follow_ftp = on/off
Follow FTP links from HTML documents--the same as `--follow-ftp'.
follow_tags = STRING
Only follow certain HTML tags when doing a recursive retrieval,
just like `--follow-tags'.
force_html = on/off
If set to on, force the input filename to be regarded as an HTML
document--the same as `-F'.
ftp_proxy = STRING
Use STRING as FTP proxy, instead of the one specified in
environment.
glob = on/off
Turn globbing on/off--the same as `-g'.
header = STRING
Define an additional header, like `--header'.
html_extension = on/off
Add a `.html' extension to `text/html' or `application/xhtml+xml'
files without it, like `-E'.
http_passwd = STRING
Set HTTP password.
http_proxy = STRING
Use STRING as HTTP proxy, instead of the one specified in
environment.
http_user = STRING
Set HTTP user to STRING.
ignore_length = on/off
When set to on, ignore `Content-Length' header; the same as
`--ignore-length'.
ignore_tags = STRING
Ignore certain HTML tags when doing a recursive retrieval, just
like `-G' / `--ignore-tags'.
include_directories = STRING
Specify a comma-separated list of directories you wish to follow
when downloading--the same as `-I'.
input = STRING
Read the URLs from STRING, like `-i'.
kill_longer = on/off
Consider data longer than specified in content-length header as
invalid (and retry getting it). The default behavior is to save
as much data as there is, provided there is more than or equal to
the value in `Content-Length'.
limit_rate = RATE
Limit the download speed to no more than RATE bytes per second.
The same as `--limit-rate'.
logfile = STRING
Set logfile--the same as `-o'.
login = STRING
Your user name on the remote machine, for FTP. Defaults to
`anonymous'.
mirror = on/off
Turn mirroring on/off. The same as `-m'.
netrc = on/off
Turn reading netrc on or off.
noclobber = on/off
Same as `-nc'.
no_parent = on/off
Disallow retrieving outside the directory hierarchy, like
`--no-parent' (*note Directory-Based Limits::).
no_proxy = STRING
Use STRING as the comma-separated list of domains to avoid in
proxy loading, instead of the one specified in environment.
output_document = STRING
Set the output filename--the same as `-O'.
page_requisites = on/off
Download all ancillary documents necessary for a single HTML page
to display properly--the same as `-p'.
passive_ftp = on/off/always/never
Set passive FTP--the same as `--passive-ftp'. Some scripts and
`.pm' (Perl module) files download files using `wget
--passive-ftp'. If your firewall does not allow this, you can set
`passive_ftp = never' to override the command-line.
passwd = STRING
Set your FTP password to STRING. Without this setting, the
password defaults to `username@hostname.domainname'.
post_data = STRING
Use POST as the method for all HTTP requests and send STRING in
the request body. The same as `--post-data'.
post_file = FILE
Use POST as the method for all HTTP requests and send the contents
of FILE in the request body. The same as `--post-file'.
progress = STRING
Set the type of the progress indicator. Legal types are "dot" and
"bar".
proxy_user = STRING
Set proxy authentication user name to STRING, like `--proxy-user'.
proxy_passwd = STRING
Set proxy authentication password to STRING, like `--proxy-passwd'.
referer = STRING
Set HTTP `Referer:' header just like `--referer'. (Note it was
the folks who wrote the HTTP spec who got the spelling of
"referrer" wrong.)
quiet = on/off
Quiet mode--the same as `-q'.
quota = QUOTA
Specify the download quota, which is useful to put in the global
`wgetrc'. When download quota is specified, Wget will stop
retrieving after the download sum has become greater than quota.
The quota can be specified in bytes (default), kbytes (`k'
appended) or mbytes (`m' appended). Thus `quota = 5m' will set
the quota to 5 mbytes. Note that the user's startup file
overrides system settings.
read_timeout = N
Set the read (and write) timeout--the same as `--read-timeout'.
reclevel = N
Recursion level--the same as `-l'.
recursive = on/off
Recursive on/off--the same as `-r'.
relative_only = on/off
Follow only relative links--the same as `-L' (*note Relative
Links::).
remove_listing = on/off
If set to on, remove FTP listings downloaded by Wget. Setting it
to off is the same as `-nr'.
restrict_file_names = unix/windows
Restrict the file names generated by Wget from URLs. See
`--restrict-file-names' for a more detailed description.
retr_symlinks = on/off
When set to on, retrieve symbolic links as if they were plain
files; the same as `--retr-symlinks'.
robots = on/off
Specify whether the norobots convention is respected by Wget, "on"
by default. This switch controls both the `/robots.txt' and the
`nofollow' aspect of the spec. *Note Robot Exclusion::, for more
details about this. Be sure you know what you are doing before
turning this off.
server_response = on/off
Choose whether or not to print the HTTP and FTP server
responses--the same as `-S'.
span_hosts = on/off
Same as `-H'.
strict_comments = on/off
Same as `--strict-comments'.
timeout = N
Set timeout value--the same as `-T'.
timestamping = on/off
Turn timestamping on/off. The same as `-N' (*note
Time-Stamping::).
tries = N
Set number of retries per URL--the same as `-t'.
use_proxy = on/off
Turn proxy support on/off. The same as `-Y'.
verbose = on/off
Turn verbose on/off--the same as `-v'/`-nv'.
wait = N
Wait N seconds between retrievals--the same as `-w'.
waitretry = N
Wait up to N seconds between retries of failed retrievals
only--the same as `--waitretry'. Note that this is turned on by
default in the global `wgetrc'.
randomwait = on/off
Turn random between-request wait times on or off. The same as
`--random-wait'.
File: wget.info, Node: Sample Wgetrc, Prev: Wgetrc Commands, Up: Startup File
Sample Wgetrc
=============
This is the sample initialization file, as given in the distribution.
It is divided into two sections--one for global usage (suitable for the global
startup file), and one for local usage (suitable for `$HOME/.wgetrc').
Be careful about the things you change.
Note that almost all the lines are commented out. For a command to
have any effect, you must remove the `#' character at the beginning of
its line.
###
### Sample Wget initialization file .wgetrc
###
## You can use this file to change the default behaviour of wget or to
## avoid having to type many many command-line options. This file does
## not contain a comprehensive list of commands -- look at the manual
## to find out what you can put into this file.
##
## Wget initialization file can reside in /usr/local/etc/wgetrc
## (global, for all users) or $HOME/.wgetrc (for a single user).
##
## To use the settings in this file, you will have to uncomment them,
## as well as change them, in most cases, as the values on the
## commented-out lines are the default values (e.g. "off").
##
## Global settings (useful for setting up in /usr/local/etc/wgetrc).
## Think well before you change them, since they may reduce wget's
## functionality, and make it behave contrary to the documentation:
##
# You can set retrieve quota for beginners by specifying a value
# optionally followed by 'K' (kilobytes) or 'M' (megabytes). The
# default quota is unlimited.
#quota = inf
# You can lower (or raise) the default number of retries when
# downloading a file (default is 20).
#tries = 20
# Lowering the maximum depth of the recursive retrieval is handy to
# prevent newbies from going too "deep" when they unwittingly start
# the recursive retrieval. The default is 5.
#reclevel = 5
# Many sites are behind firewalls that do not allow initiation of
# connections from the outside. On these sites you have to use the
# `passive' feature of FTP. If you are behind such a firewall, you
# can turn this on to make Wget use passive FTP by default.
#passive_ftp = off
# The "wait" command below makes Wget wait between every connection.
# If, instead, you want Wget to wait only between retries of failed
# downloads, set waitretry to maximum number of seconds to wait (Wget
# will use "linear backoff", waiting 1 second after the first failure
# on a file, 2 seconds after the second failure, etc. up to this max).
waitretry = 10
##
## Local settings (for a user to set in his $HOME/.wgetrc). It is
## *highly* undesirable to put these settings in the global file, since
## they are potentially dangerous to "normal" users.
##
## Even when setting up your own ~/.wgetrc, you should know what you
## are doing before doing so.
##
# Set this to on to use timestamping by default:
#timestamping = off
# It is a good idea to make Wget send your email address in a `From:'
# header with your request (so that server administrators can contact
# you in case of errors). Wget does *not* send `From:' by default.
#header = From: Your Name
# You can set up other headers, like Accept-Language. Accept-Language
# is *not* sent by default.
#header = Accept-Language: en
# You can set the default proxies for Wget to use for http and ftp.
# They will override the value in the environment.
#http_proxy = http://proxy.yoyodyne.com:18023/
#ftp_proxy = http://proxy.yoyodyne.com:18023/
# If you do not want to use proxy at all, set this to off.
#use_proxy = on
# You can customize the retrieval outlook. Valid options are default,
# binary, mega and micro.
#dot_style = default
# Setting this to off makes Wget not download /robots.txt. Be sure to
# know *exactly* what /robots.txt is and how it is used before changing
# the default!
#robots = on
# It can be useful to make Wget wait between connections. Set this to
# the number of seconds you want Wget to wait.
#wait = 0
# You can force creating directory structure, even if a single file is
# being retrieved, by setting this to on.
#dirstruct = off
# You can turn on recursive retrieving by default (don't do this if
# you are not sure you know what it means) by setting this to on.
#recursive = off
# To always back up file X as X.orig before converting its links (due
# to -k / --convert-links / convert_links = on having been specified),
# set this variable to on:
#backup_converted = off
# To have Wget follow FTP links from HTML files by default, set this
# to on:
#follow_ftp = off
File: wget.info, Node: Examples, Next: Various, Prev: Startup File, Up: Top
Examples
********
The examples are divided into three sections loosely based on their
complexity.
* Menu:
* Simple Usage:: Simple, basic usage of the program.
* Advanced Usage:: Advanced tips.
* Very Advanced Usage:: The hairy stuff.
File: wget.info, Node: Simple Usage, Next: Advanced Usage, Prev: Examples, Up: Examples
Simple Usage
============
* Say you want to download a URL. Just type:
wget http://fly.srk.fer.hr/
* But what will happen if the connection is slow, and the file is
lengthy? The connection will probably fail before the whole file
is retrieved, more than once. In this case, Wget will try getting
the file until it either gets the whole of it, or exceeds the
default number of retries (this being 20). It is easy to change
the number of tries to 45, to ensure that the whole file will
arrive safely:
wget --tries=45 http://fly.srk.fer.hr/jpg/flyweb.jpg
* Now let's leave Wget to work in the background, and write its
progress to log file `log'. It is tiring to type `--tries', so we
shall use `-t'.
wget -t 45 -o log http://fly.srk.fer.hr/jpg/flyweb.jpg &
The ampersand at the end of the line makes sure that Wget works in
the background. To unlimit the number of retries, use `-t inf'.
* The usage of FTP is just as simple. Wget will take care of login and
password.
wget ftp://gnjilux.srk.fer.hr/welcome.msg
* If you specify a directory, Wget will retrieve the directory
listing, parse it and convert it to HTML. Try:
wget ftp://ftp.gnu.org/pub/gnu/
links index.html