
WGET(1) GNU Wget WGET(1)

NAME

Wget – The non-interactive network downloader.

SYNOPSIS

wget [option]… [URL]…

DESCRIPTION

       GNU Wget is a free utility for non-interactive download of files from the Web.  It supports HTTP, HTTPS, and FTP protocols, as well as retrieval
       through HTTP proxies.

       Wget is non-interactive, meaning that it can work in the background, while the user is not logged on.  This allows you to start a retrieval and
       disconnect from the system, letting Wget finish the work.  By contrast, most of the Web browsers require constant user's presence, which can be a
       great hindrance when transferring a lot of data.

       Wget can follow links in HTML, XHTML, and CSS pages, to create local versions of remote web sites, fully recreating the directory structure of
       the original site.  This is sometimes referred to as "recursive downloading."  While doing that, Wget respects the Robot Exclusion Standard
       (/robots.txt).  Wget can be instructed to convert the links in downloaded files to point at the local files, for offline viewing.

       Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep
       retrying until the whole file has been retrieved.  If the server supports regetting, it will instruct the server to continue the download from
       where it left off.

OPTIONS

Option Syntax

       Since Wget uses GNU getopt to process command-line arguments, every option has a long form along with the short one.  Long options are more
       convenient to remember, but take time to type.  You may freely mix different option styles, or specify options after the command-line arguments.
       Thus you may write:

               wget -r --tries=10 http://fly.srk.fer.hr/ -o log

       The space between the option accepting an argument and the argument may be omitted.  Instead of -o log you can write -olog.

       You may put several options that do not require arguments together, like:

               wget -drc <URL>

       This is completely equivalent to:

               wget -d -r -c <URL>

       Since the options can be specified after the arguments, you may terminate them with --.  So the following will try to download URL -x, reporting
       failure to log:

               wget -o log -- -x

       The options that accept comma-separated lists all respect the convention that specifying an empty list clears its value.  This can be useful to
       clear the .wgetrc settings.  For instance, if your .wgetrc sets "exclude_directories" to /cgi-bin, the following example will first reset it, and
       then set it to exclude /~nobody and /~somebody.  You can also clear the lists in .wgetrc.

               wget -X " -X /~nobody,/~somebody

       Most options that do not accept arguments are boolean options, so named because their state can be captured with a yes-or-no ("boolean")
       variable.  For example, --follow-ftp tells Wget to follow FTP links from HTML files and, on the other hand, --no-glob tells it not to perform
       file globbing on FTP URLs.  A boolean option is either affirmative or negative (beginning with --no).  All such options share several properties.

       Unless stated otherwise, it is assumed that the default behavior is the opposite of what the option accomplishes.  For example, the documented
       existence of --follow-ftp assumes that the default is to not follow FTP links from HTML pages.

       Affirmative options can be negated by prepending --no- to the option name; negative options can be negated by omitting the --no- prefix.
       This might seem superfluous---if the default for an affirmative option is to not do something, then why provide a way to explicitly turn it off?
       But the startup file may in fact change the default.  For instance, using "follow_ftp = on" in .wgetrc makes Wget follow FTP links by default,
       and using --no-follow-ftp is the only way to restore the factory default from the command line.
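
       For example, assuming a .wgetrc that contains "follow_ftp = on", a hypothetical invocation restoring the factory default for a single run
       (example.com is a placeholder) might look like this:

                wget --no-follow-ftp http://example.com/index.html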

Basic Startup Options

       -V
       --version
           Display the version of Wget.

       -h
       --help
           Print a help message describing all of Wget's command-line options.

       -b
       --background
            Go to background immediately after startup.  If no output file is specified via the -o option, output is redirected to wget-log.

       -e command
       --execute command
           Execute command as if it were a part of .wgetrc.  A command thus invoked will be executed after the commands in .wgetrc, thus taking
           precedence over them.  If you need to specify more than one wgetrc command, use multiple instances of -e.
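
            For instance, robots is a standard .wgetrc command; to turn off robots.txt processing for a single run (example.com is a placeholder),
            one might write:

                    wget -e robots=off http://example.com/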

Logging and Input File Options

       -o logfile
       --output-file=logfile
           Log all messages to logfile.  The messages are normally reported to standard error.

       -a logfile
       --append-output=logfile
           Append to logfile.  This is the same as -o, only it appends to logfile instead of overwriting the old log file.  If logfile does not exist, a
           new file is created.

       -d
       --debug
           Turn on debug output, meaning various information important to the developers of Wget if it does not work properly.  Your system
           administrator may have chosen to compile Wget without debug support, in which case -d will not work.  Please note that compiling with debug
           support is always safe---Wget compiled with the debug support will not print any debug info unless requested with -d.

       -q
       --quiet
           Turn off Wget's output.

       -v
       --verbose
           Turn on verbose output, with all the available data.  The default output is verbose.

       -nv
       --no-verbose
           Turn off verbose without being completely quiet (use -q for that), which means that error messages and basic information still get printed.

       --report-speed=type
           Output bandwidth as type.  The only accepted value is bits.

       -i file
       --input-file=file
           Read URLs from a local or external file.  If - is specified as file, URLs are read from the standard input.  (Use ./- to read from a file
           literally named -.)

           If this function is used, no URLs need be present on the command line.  If there are URLs both on the command line and in an input file,
           those on the command lines will be the first ones to be retrieved.  If --force-html is not specified, then file should consist of a series of
           URLs, one per line.

           However, if you specify --force-html, the document will be regarded as html.  In that case you may have problems with relative links, which
           you can solve either by adding "<base href="url">" to the documents or by specifying --base=url on the command line.

           If the file is an external one, the document will be automatically treated as html if the Content-Type matches text/html.  Furthermore, the
           file's location will be implicitly used as base href if none was specified.
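
            As a sketch, assuming a hypothetical urls.txt with one URL per line, the list could also be piped in on standard input:

                    cat urls.txt | wget -i -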

       --input-metalink=file
            Downloads files covered in a local Metalink file.  Metalink versions 3 and 4 are supported.

        --keep-badhash
            Keep downloaded Metalink files that have a bad hash.  A .badhash suffix is appended to the name of each file whose checksum does not
            match, without overwriting existing files.

       --metalink-over-http
           Issues HTTP HEAD request instead of GET and extracts Metalink metadata from response headers. Then it switches to Metalink download.  If no
           valid Metalink metadata is found, it falls back to ordinary HTTP download.  Enables Content-Type: application/metalink4+xml files
           download/processing.

       --metalink-index=number
           Set the Metalink application/metalink4+xml metaurl ordinal NUMBER. From 1 to the total number of "application/metalink4+xml" available.
            Specify 0 or inf to choose the first good one.  Metaurls, such as those returned by --metalink-over-http, may have been sorted by the
            priority key's value; keep this in mind when choosing the right NUMBER.

       --preferred-location
            Set preferred location for Metalink resources.  This has an effect if multiple resources with the same priority are available.

       --xattr
           Enable use of file system's extended attributes to save the original URL and the Referer HTTP header value if used.

           Be aware that the URL might contain private information like access tokens or credentials.
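
            As an illustration, the saved attributes could later be inspected with a tool such as getfattr (the attribute name below is the one
            commonly used by Wget builds with xattr support, but may vary):

                    wget --xattr http://example.com/file.bin
                    getfattr -n user.xdg.origin.url file.bin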

       -F
       --force-html
           When input is read from a file, force it to be treated as an HTML file.  This enables you to retrieve relative links from existing HTML files
           on your local disk, by adding "<base href="url">" to HTML, or using the --base command-line option.

       -B URL
       --base=URL
           Resolves relative links using URL as the point of reference, when reading links from an HTML file specified via the -i/--input-file option
           (together with --force-html, or when the input file was fetched remotely from a server describing it as HTML). This is equivalent to the
           presence of a "BASE" tag in the HTML input file, with URL as the value for the "href" attribute.

           For instance, if you specify http://foo/bar/a.html for URL, and Wget reads ../baz/b.html from the input file, it would be resolved to
           http://foo/baz/b.html.

       --config=FILE
           Specify the location of a startup file you wish to use instead of the default one(s). Use --no-config to disable reading of config files.  If
           both --config and --no-config are given, --no-config is ignored.

       --rejected-log=logfile
           Logs all URL rejections to logfile as comma separated values.  The values include the reason of rejection, the URL and the parent URL it was
           found in.

Download Options

       --bind-address=ADDRESS
           When making client TCP/IP connections, bind to ADDRESS on the local machine.  ADDRESS may be specified as a hostname or IP address.  This
           option can be useful if your machine is bound to multiple IPs.

       --bind-dns-address=ADDRESS
           [libcares only] This address overrides the route for DNS requests. If you ever need to circumvent the standard settings from
           /etc/resolv.conf, this option together with --dns-servers is your friend.  ADDRESS must be specified either as IPv4 or IPv6 address.  Wget
           needs to be built with libcares for this option to be available.

       --dns-servers=ADDRESSES
           [libcares only] The given address(es) override the standard nameserver addresses,  e.g. as configured in /etc/resolv.conf.  ADDRESSES may be
           specified either as IPv4 or IPv6 addresses, comma-separated.  Wget needs to be built with libcares for this option to be available.

       -t number
       --tries=number
           Set number of tries to number. Specify 0 or inf for infinite retrying.  The default is to retry 20 times, with the exception of fatal errors
           like "connection refused" or "not found" (404), which are not retried.

       -O file
       --output-document=file
           The documents will not be written to the appropriate files, but all will be concatenated together and written to file.  If - is used as file,
           documents will be printed to standard output, disabling link conversion.  (Use ./- to print to a file literally named -.)

           Use of -O is not intended to mean simply "use the name file instead of the one in the URL;" rather, it is analogous to shell redirection:
           wget -O file http://foo is intended to work like wget -O - http://foo > file; file will be truncated immediately, and all downloaded content
           will be written there.

           For this reason, -N (for timestamp-checking) is not supported in combination with -O: since file is always newly created, it will always have
           a very new timestamp. A warning will be issued if this combination is used.

           Similarly, using -r or -p with -O may not work as you expect: Wget won't just download the first file to file and then download the rest to
           their normal names: all downloaded content will be placed in file. This was disabled in version 1.11, but has been reinstated (with a
           warning) in 1.11.2, as there are some cases where this behavior can actually have some use.

           A combination with -nc is only accepted if the given output file does not exist.

           Note that a combination with -k is only permitted when downloading a single document, as in that case it will just convert all relative URIs
           to external ones; -k makes no sense for multiple URIs when they're all being downloaded to a single file; -k can be used only when the output
           is a regular file.
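
            For example, to pipe a document through another program rather than saving it (example.com is a placeholder):

                    wget -q -O - http://example.com/index.html | grep -i title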

       -nc
       --no-clobber
           If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc.  In certain cases, the
           local file will be clobbered, or overwritten, upon repeated download.  In other cases it will be preserved.

           When running Wget without -N, -nc, -r, or -p, downloading the same file in the same directory will result in the original copy of file being
           preserved and the second copy being named file.1.  If that file is downloaded yet again, the third copy will be named file.2, and so on.
           (This is also the behavior with -nd, even if -r or -p are in effect.)  When -nc is specified, this behavior is suppressed, and Wget will
            refuse to download newer copies of file.  Therefore, "no-clobber" is actually a misnomer in this mode---it's not clobbering that's
           prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.

           When running Wget with -r or -p, but without -N, -nd, or -nc, re-downloading a file will result in the new copy simply overwriting the old.
           Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.

           When running Wget with -N, with or without -r or -p, the decision as to whether or not to download a newer copy of a file depends on the
           local and remote timestamp and size of the file.  -nc may not be specified at the same time as -N.

           A combination with -O/--output-document is only accepted if the given output file does not exist.

           Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been
           retrieved from the Web.

       --backups=backups
           Before (over)writing a file, back up an existing file by adding a .1 suffix (_1 on VMS) to the file name.  Such backup files are rotated to
           .2, .3, and so on, up to backups (and lost beyond that).

       --no-netrc
           Do not try to obtain credentials from .netrc file. By default .netrc file is searched for credentials in case none have been passed on
           command line and authentication is required.

       -c
       --continue
           Continue getting a partially-downloaded file.  This is useful when you want to finish up a download started by a previous instance of Wget,
           or by another program.  For instance:

                   wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z

           If there is a file named ls-lR.Z in the current directory, Wget will assume that it is the first portion of the remote file, and will ask the
           server to continue the retrieval from an offset equal to the length of the local file.

           Note that you don't need to specify this option if you just want the current invocation of Wget to retry downloading a file should the
           connection be lost midway through.  This is the default behavior.  -c only affects resumption of downloads started prior to this invocation
           of Wget, and whose local files are still sitting around.

           Without -c, the previous example would just download the remote file to ls-lR.Z.1, leaving the truncated ls-lR.Z file alone.

           If you use -c on a non-empty file, and the server does not support continued downloading, Wget will restart the download from scratch and
           overwrite the existing file entirely.

           Beginning with Wget 1.7, if you use -c on a file which is of equal size as the one on the server, Wget will refuse to download the file and
           print an explanatory message.  The same happens when the file is smaller on the server than locally (presumably because it was changed on the
           server since your last download attempt)---because "continuing" is not meaningful, no download occurs.

           On the other side of the coin, while using -c, any file that's bigger on the server than locally will be considered an incomplete download
           and only "(length(remote) - length(local))" bytes will be downloaded and tacked onto the end of the local file.  This behavior can be
           desirable in certain cases---for instance, you can use wget -c to download just the new portion that's been appended to a data collection or
           log file.

           However, if the file is bigger on the server because it's been changed, as opposed to just appended to, you'll end up with a garbled file.
           Wget has no way of verifying that the local file is really a valid prefix of the remote file.  You need to be especially careful of this when
           using -c in conjunction with -r, since every file will be considered as an "incomplete download" candidate.

           Another instance where you'll get a garbled file if you try to use -c is if you have a lame HTTP proxy that inserts a "transfer interrupted"
           string into the local file.  In the future a "rollback" option may be added to deal with this case.

           Note that -c only works with FTP servers and with HTTP servers that support the "Range" header.

       --start-pos=OFFSET
           Start downloading at zero-based position OFFSET.  Offset may be expressed in bytes, kilobytes with the `k' suffix, or megabytes with the `m'
           suffix, etc.

            --start-pos takes precedence over --continue.  When --start-pos and --continue are both specified, wget will emit a warning, then proceed
            as if --continue were absent.

           Server support for continued download is required, otherwise --start-pos cannot help.  See -c for details.
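
            For instance, to fetch a hypothetical file starting at the one-megabyte mark (example.com is a placeholder):

                    wget --start-pos=1m http://example.com/big.iso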

       --progress=type
           Select the type of the progress indicator you wish to use.  Legal indicators are "dot" and "bar".

           The "bar" indicator is used by default.  It draws an ASCII progress bar graphics (a.k.a "thermometer" display) indicating the status of
           retrieval.  If the output is not a TTY, the "dot" bar will be used by default.

           Use --progress=dot to switch to the "dot" display.  It traces the retrieval by printing dots on the screen, each dot representing a fixed
           amount of downloaded data.

           The progress type can also take one or more parameters.  The parameters vary based on the type selected.  Parameters to type are passed by
            appending them to the type separated by a colon (:) like this: --progress=type:parameter1:parameter2.

           When using the dotted retrieval, you may set the style by specifying the type as dot:style.  Different styles assign different meaning to one
           dot.  With the "default" style each dot represents 1K, there are ten dots in a cluster and 50 dots in a line.  The "binary" style has a more
           "computer"-like orientation---8K dots, 16-dots clusters and 48 dots per line (which makes for 384K lines).  The "mega" style is suitable for
           downloading large files---each dot represents 64K retrieved, there are eight dots in a cluster, and 48 dots on each line (so each line
           contains 3M).  If "mega" is not enough then you can use the "giga" style---each dot represents 1M retrieved, there are eight dots in a
           cluster, and 32 dots on each line (so each line contains 32M).
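
            For example, when retrieving a large file (the URL is a placeholder), the "mega" style could be selected with:

                    wget --progress=dot:mega http://example.com/big.iso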

           With --progress=bar, there are currently two possible parameters, force and noscroll.

           When the output is not a TTY, the progress bar always falls back to "dot", even if --progress=bar was passed to Wget during invocation. This
           behaviour can be overridden and the "bar" output forced by using the "force" parameter as --progress=bar:force.

            By default, the bar-style progress bar scrolls the name of the file from left to right for the file being downloaded if the filename
            exceeds
           the maximum length allotted for its display.  In certain cases, such as with --progress=bar:force, one may not want the scrolling filename in
           the progress bar.  By passing the "noscroll" parameter, Wget can be forced to display as much of the filename as possible without scrolling
           through it.

           Note that you can set the default style using the "progress" command in .wgetrc.  That setting may be overridden from the command line.  For
           example, to force the bar output without scrolling, use --progress=bar:force:noscroll.

       --show-progress
           Force wget to display the progress bar in any verbosity.

           By default, wget only displays the progress bar in verbose mode.  One may however, want wget to display the progress bar on screen in
            conjunction with any other verbosity modes like --no-verbose or --quiet.  This is often a desired property when invoking wget to download
           several small/large files.  In such a case, wget could simply be invoked with this parameter to get a much cleaner output on the screen.

           This option will also force the progress bar to be printed to stderr when used alongside the --output-file option.
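
            For example, to suppress all output except the progress bar (example.com is a placeholder):

                    wget -q --show-progress http://example.com/file.tar.gz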

       -N
       --timestamping
           Turn on time-stamping.

       --no-if-modified-since
            Do not send If-Modified-Since header in -N mode.  Send a preliminary HEAD request instead.  This option has an effect only in -N mode.

       --no-use-server-timestamps
           Don't set the local file's timestamp by the one on the server.

           By default, when a file is downloaded, its timestamps are set to match those from the remote file. This allows the use of --timestamping on
           subsequent invocations of wget. However, it is sometimes useful to base the local file's timestamp on when it was actually downloaded; for
           that purpose, the --no-use-server-timestamps option has been provided.

       -S
       --server-response
           Print the headers sent by HTTP servers and responses sent by FTP servers.

       --spider
           When invoked with this option, Wget will behave as a Web spider, which means that it will not download the pages, just check that they are
           there.  For example, you can use Wget to check your bookmarks:

                   wget --spider --force-html -i bookmarks.html

           This feature needs much more work for Wget to get close to the functionality of real web spiders.

       -T seconds
       --timeout=seconds
           Set the network timeout to seconds seconds.  This is equivalent to specifying --dns-timeout, --connect-timeout, and --read-timeout, all at
           the same time.

           When interacting with the network, Wget can check for timeout and abort the operation if it takes too long.  This prevents anomalies like
           hanging reads and infinite connects.  The only timeout enabled by default is a 900-second read timeout.  Setting a timeout to 0 disables it
           altogether.  Unless you know what you are doing, it is best not to change the default timeout settings.

           All timeout-related options accept decimal values, as well as subsecond values.  For example, 0.1 seconds is a legal (though unwise) choice
           of timeout.  Subsecond timeouts are useful for checking server response times or for testing network latency.
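
            For example, a sketch that sets all three timeouts (DNS, connect, and read) to five seconds at once, against a placeholder URL:

                    wget -T 5 http://example.com/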

       --dns-timeout=seconds
           Set the DNS lookup timeout to seconds seconds.  DNS lookups that don't complete within the specified time will fail.  By default, there is no
           timeout on DNS lookups, other than that implemented by system libraries.

       --connect-timeout=seconds
           Set the connect timeout to seconds seconds.  TCP connections that take longer to establish will be aborted.  By default, there is no connect
           timeout, other than that implemented by system libraries.

       --read-timeout=seconds
           Set the read (and write) timeout to seconds seconds.  The "time" of this timeout refers to idle time: if, at any point in the download, no
           data is received for more than the specified number of seconds, reading fails and the download is restarted.  This option does not directly
           affect the duration of the entire download.

           Of course, the remote server may choose to terminate the connection sooner than this option requires.  The default read timeout is 900
           seconds.

       --limit-rate=amount
           Limit the download speed to amount bytes per second.  Amount may be expressed in bytes, kilobytes with the k suffix, or megabytes with the m
           suffix.  For example, --limit-rate=20k will limit the retrieval rate to 20KB/s.  This is useful when, for whatever reason, you don't want
           Wget to consume the entire available bandwidth.

           This option allows the use of decimal numbers, usually in conjunction with power suffixes; for example, --limit-rate=2.5k is a legal value.

           Note that Wget implements the limiting by sleeping the appropriate amount of time after a network read that took less time than specified by
           the rate.  Eventually this strategy causes the TCP transfer to slow down to approximately the specified rate.  However, it may take some time
           for this balance to be achieved, so don't be surprised if limiting the rate doesn't work well with very small files.
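
            For example, to cap the transfer rate at 2.5 KB/s (example.com is a placeholder):

                    wget --limit-rate=2.5k http://example.com/file.zip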

       -w seconds
       --wait=seconds
           Wait the specified number of seconds between the retrievals.  Use of this option is recommended, as it lightens the server load by making the
           requests less frequent.  Instead of in seconds, the time can be specified in minutes using the "m" suffix, in hours using "h" suffix, or in
           days using "d" suffix.

           Specifying a large value for this option is useful if the network or the destination host is down, so that Wget can wait long enough to
           reasonably expect the network error to be fixed before the retry.  The waiting interval specified by this function is influenced by
           "--random-wait", which see.

       --waitretry=seconds
           If you don't want Wget to wait between every retrieval, but only between retries of failed downloads, you can use this option.  Wget will use
           linear backoff, waiting 1 second after the first failure on a given file, then waiting 2 seconds after the second failure on that file, up to
           the maximum number of seconds you specify.

           By default, Wget will assume a value of 10 seconds.

       --random-wait
           Some web sites may perform log analysis to identify retrieval programs such as Wget by looking for statistically significant similarities in
           the time between requests. This option causes the time between requests to vary between 0.5 and 1.5 * wait seconds, where wait was specified
           using the --wait option, in order to mask Wget's presence from such analysis.

           A 2001 article in a publication devoted to development on a popular consumer platform provided code to perform this analysis on the fly.  Its
           author suggested blocking at the class C address level to ensure automated retrieval programs were blocked despite changing DHCP-supplied
           addresses.

           The --random-wait option was inspired by this ill-advised recommendation to block many unrelated users from a web site due to the actions of
           one.
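
            For example, a recursive retrieval of a placeholder site that waits a randomized 1 to 3 seconds (0.5 to 1.5 times the 2-second --wait
            value) between requests:

                    wget -r -w 2 --random-wait http://example.com/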

       --no-proxy
           Don't use proxies, even if the appropriate *_proxy environment variable is defined.

       -Q quota
       --quota=quota
           Specify download quota for automatic retrievals.  The value can be specified in bytes (default), kilobytes (with k suffix), or megabytes
           (with m suffix).

           Note that quota will never affect downloading a single file.  So if you specify wget -Q10k https://example.com/ls-lR.gz, all of the ls-lR.gz
           will be downloaded.  The same goes even when several URLs are specified on the command-line.  The quota is checked only at the end of each
           downloaded file, so it will never result in a partially downloaded file. Thus you may safely type wget -Q2m -i sites---download will be
           aborted after the file that exhausts the quota is completely downloaded.

           Setting quota to 0 or to inf unlimits the download quota.

       --no-dns-cache
           Turn off caching of DNS lookups.  Normally, Wget remembers the IP addresses it looked up from DNS so it doesn't have to repeatedly contact
           the DNS server for the same (typically small) set of hosts it retrieves from.  This cache exists in memory only; a new Wget run will contact
           DNS again.

           However, it has been reported that in some situations it is not desirable to cache host names, even for the duration of a short-running
           application like Wget.  With this option Wget issues a new DNS lookup (more precisely, a new call to "gethostbyname" or "getaddrinfo") each
           time it makes a new connection.  Please note that this option will not affect caching that might be performed by the resolving library or by
           an external caching layer, such as NSCD.

           If you don't understand exactly what this option does, you probably won't need it.

       --restrict-file-names=modes
           Change which characters found in remote URLs must be escaped during generation of local filenames.  Characters that are restricted by this
           option are escaped, i.e. replaced with %HH, where HH is the hexadecimal number that corresponds to the restricted character. This option may
           also be used to force all alphabetical cases to be either lower- or uppercase.

           By default, Wget escapes the characters that are not valid or safe as part of file names on your operating system, as well as control
           characters that are typically unprintable.  This option is useful for changing these defaults, perhaps because you are downloading to a non-
           native partition, or because you want to disable escaping of the control characters, or you want to further restrict characters to only those
           in the ASCII range of values.

           The modes are a comma-separated set of text values. The acceptable values are unix, windows, nocontrol, ascii, lowercase, and uppercase. The
           values unix and windows are mutually exclusive (one will override the other), as are lowercase and uppercase. Those last are special cases,
           as they do not change the set of characters that would be escaped, but rather force local file paths to be converted either to lower- or
           uppercase.

           When "unix" is specified, Wget escapes the character / and the control characters in the ranges 0--31 and 128--159.  This is the default on
           Unix-like operating systems.

           When "windows" is given, Wget escapes the characters \, |, /, :, ?, ", *, <, >, and the control characters in the ranges 0--31 and 128--159.
           In addition to this, Wget in Windows mode uses + instead of : to separate host and port in local file names, and uses @ instead of ? to
           separate the query portion of the file name from the rest.  Therefore, a URL that would be saved as www.xemacs.org:4300/search.pl?input=blah
           in Unix mode would be saved as www.xemacs.org+4300/search.pl@input=blah in Windows mode.  This mode is the default on Windows.

           If you specify nocontrol, then the escaping of the control characters is also switched off. This option may make sense when you are
           downloading URLs whose names contain UTF-8 characters, on a system which can save and display filenames in UTF-8 (some possible byte values
           used in UTF-8 byte sequences fall in the range of values designated by Wget as "controls").

           The ascii mode is used to specify that any bytes whose values are outside the range of ASCII characters (that is, greater than 127) shall be
           escaped. This can be useful when saving filenames whose encoding does not match the one used locally.

       -4
       --inet4-only
       -6
       --inet6-only
           Force connecting to IPv4 or IPv6 addresses.  With --inet4-only or -4, Wget will only connect to IPv4 hosts, ignoring AAAA records in DNS, and
           refusing to connect to IPv6 addresses specified in URLs.  Conversely, with --inet6-only or -6, Wget will only connect to IPv6 hosts and
           ignore A records and IPv4 addresses.

           Neither option should normally be needed.  By default, an IPv6-aware Wget will use the address family specified by the host's DNS record.
           If the DNS responds with both IPv4 and IPv6 addresses, Wget will try them in sequence until it finds one it can connect to.  (Also see the
           "--prefer-family" option described below.)

           These options can be used to deliberately force the use of IPv4 or IPv6 address families on dual family systems, usually to aid debugging or
           to deal with broken network configuration.  Only one of --inet6-only and --inet4-only may be specified at the same time.  Neither option is
           available in Wget compiled without IPv6 support.
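           For instance, forcing IPv4 when a host publishes a broken AAAA record could look like this (placeholder URL):

```shell
# Ignore AAAA records and connect over IPv4 only.
wget -4 https://example.com/file.tar.gz
```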

       --prefer-family=none/IPv4/IPv6
           When given a choice of several addresses, connect to the addresses with specified address family first.  The address order returned by DNS is
           used without change by default.

           This avoids spurious errors and connect attempts when accessing hosts that resolve to both IPv6 and IPv4 addresses from IPv4 networks.  For
           example, www.kame.net resolves to 2001:200:0:8002:203:47ff:fea5:3085 and to 203.178.141.194.  When the preferred family is "IPv4", the IPv4
           address is used first; when the preferred family is "IPv6", the IPv6 address is used first; if the specified value is "none", the address
           order returned by DNS is used without change.

           Unlike -4 and -6, this option doesn't inhibit access to any address family; it only changes the order in which the addresses are accessed.
           Also note that the reordering performed by this option is stable---it doesn't affect the order of addresses of the same family.  That is,
           the relative order of all IPv4 addresses and of all IPv6 addresses remains intact in all cases.
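           Using the www.kame.net example above, preferring IPv6 while still allowing IPv4 fallback is simply:

```shell
# Try IPv6 addresses first; fall back to IPv4 if none connect.
wget --prefer-family=IPv6 http://www.kame.net/
```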

       --retry-connrefused
           Consider "connection refused" a transient error and try again.  Normally Wget gives up on a URL when it is unable to connect to the site
           because failure to connect is taken as a sign that the server is not running at all and that retries would not help.  This option is for
           mirroring unreliable sites whose servers tend to disappear for short periods of time.
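           When mirroring such a site, this option is typically paired with a retry limit and wait interval (placeholder URL):

```shell
# Treat "connection refused" as transient; retry up to 10 times,
# waiting 30 seconds between attempts, while mirroring.
wget --retry-connrefused --tries=10 --waitretry=30 -m http://example.com/
```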

       --user=user
       --password=password
           Specify the username user and password password for both FTP and HTTP file retrieval.  These parameters can be overridden using the
           --ftp-user and --ftp-password options for FTP connections and the --http-user and --http-password options for HTTP connections.

       --ask-password
           Prompt for a password for each connection established. Cannot be specified when --password is being used, because they are mutually
           exclusive.

       --use-askpass=command
           Prompt for a user and password using the specified command.  If no command is specified then the command in the environment variable
           WGET_ASKPASS is used.  If WGET_ASKPASS is not set then the command in the environment variable SSH_ASKPASS is used.

           You can set the default command for use-askpass in the .wgetrc.  That setting may be overridden from the command line.
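           A sketch of both forms (the helper program and URL are assumptions; any askpass-style prompter will do):

```shell
# Explicit helper command:
wget --use-askpass=ssh-askpass http://example.com/protected/file

# No command given: Wget falls back to WGET_ASKPASS, then SSH_ASKPASS.
WGET_ASKPASS=ssh-askpass wget --use-askpass http://example.com/protected/file
```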

       --no-iri
           Turn off internationalized URI (IRI) support. Use --iri to turn it on. IRI support is activated by default.

           You can set the default state of IRI support using the "iri" command in .wgetrc. That setting may be overridden from the command line.

       --local-encoding=encoding
           Force Wget to use encoding as the default system encoding. That affects how Wget converts URLs specified as arguments from locale to UTF-8
           for IRI support.

           Wget uses the function "nl_langinfo()" and then the "CHARSET" environment variable to get the locale.  If that fails, ASCII is used.

           You can set the default local encoding using the "local_encoding" command in .wgetrc. That setting may be overridden from the command line.

       --remote-encoding=encoding
           Force Wget to use encoding as the default remote server encoding.  That affects how Wget converts URIs found in files from remote encoding to
           UTF-8 during a recursive fetch. This option is only useful for IRI support, for the interpretation of non-ASCII characters.

           For HTTP, the remote encoding can be found in the HTTP "Content-Type" header and in the HTML "Content-Type" http-equiv meta tag.

           You can set the default encoding using the "remoteencoding" command in .wgetrc. That setting may be overridden from the command line.

       --unlink
           Force Wget to unlink the file instead of clobbering the existing file. This option is useful for downloading to a directory that contains
           hardlinks.

Directory Options

       -nd
       --no-directories
           Do not create a hierarchy of directories when retrieving recursively.  With this option turned on, all files will get saved to the current
           directory, without clobbering (if a name shows up more than once, the filenames will get extensions .n).

       -x
       --force-directories
           The opposite of -nd---create a hierarchy of directories, even if one would not have been created otherwise.  E.g. wget -x
           http://fly.srk.fer.hr/robots.txt will save the downloaded file to fly.srk.fer.hr/robots.txt.

       -nH
       --no-host-directories
           Disable generation of host-prefixed directories.  By default, invoking Wget with -r http://fly.srk.fer.hr/ will create a structure of
           directories beginning with fly.srk.fer.hr/.  This option disables such behavior.

       --protocol-directories
           Use the protocol name as a directory component of local file names.  For example, with this option, wget -r http://host will save to
           http/host/... rather than just to host/....

       --cut-dirs=number
           Ignore number directory components.  This is useful for getting a fine-grained control over the directory where recursive retrieval will be
           saved.

           Take, for example, the directory at ftp://ftp.xemacs.org/pub/xemacs/.  If you retrieve it with -r, it will be saved locally under
           ftp.xemacs.org/pub/xemacs/.  While the -nH option can remove the ftp.xemacs.org/ part, you are still stuck with pub/xemacs.  This is where
           --cut-dirs comes in handy; it makes Wget not "see" number remote directory components.  Here are several examples of how --cut-dirs option
           works.

                   No options        -> ftp.xemacs.org/pub/xemacs/
                   -nH               -> pub/xemacs/
                   -nH --cut-dirs=1  -> xemacs/
                   -nH --cut-dirs=2  -> .

                   --cut-dirs=1      -> ftp.xemacs.org/xemacs/

           If you just want to get rid of the directory structure, this option is similar to a combination of -nd and -P.  However, unlike -nd,
           --cut-dirs does not lose the subdirectory structure---for instance, with -nH --cut-dirs=1, a beta/ subdirectory will be placed in
           xemacs/beta, as one would expect.

       -P prefix
       --directory-prefix=prefix
           Set directory prefix to prefix.  The directory prefix is the directory where all other files and subdirectories will be saved to, i.e. the
           top of the retrieval tree.  The default is . (the current directory).
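           Combining -P with -nd, as mentioned under --cut-dirs, flattens a retrieval into a single chosen directory (placeholder URL):

```shell
# Recursively fetch, discard the remote hierarchy, and save everything
# into ./downloads/ in the current directory.
wget -r -nd -P downloads/ http://example.com/docs/
```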

HTTP Options

       --default-page=name
           Use name as the default file name when it isn't known (i.e., for URLs that end in a slash), instead of index.html.

       -E
       --adjust-extension
           If a file of type application/xhtml+xml or text/html is downloaded and the URL does not end with the regexp \.[Hh][Tt][Mm][Ll]?, this option
           will cause the suffix .html to be appended to the local filename.  This is useful, for instance, when you're mirroring a remote site that
           uses .asp pages, but you want the mirrored pages to be viewable on your stock Apache server.  Another good use for this is when you're
           downloading CGI-generated materials.  A URL like http://site.com/article.cgi?25 will be saved as article.cgi?25.html.

           Note that filenames changed in this way will be re-downloaded every time you re-mirror a site, because Wget can't tell that the local X.html
           file corresponds to remote URL X (since it doesn't yet know that the URL produces output of type text/html or application/xhtml+xml).

           As of version 1.12, Wget will also ensure that any downloaded files of type text/css end in the suffix .css, and the option was renamed from
           --html-extension, to better reflect its new behavior. The old option name is still acceptable, but should now be considered deprecated.

           As of version 1.19.2, Wget will also ensure that any downloaded files with a "Content-Encoding" of br, compress, deflate or gzip end in the
           suffix .br, .Z, .zlib and .gz respectively.

           At some point in the future, this option may well be expanded to include suffixes for other types of content, including content types that
           are not parsed by Wget.
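           For the mirroring scenario described above, a minimal invocation might be (placeholder URL):

```shell
# Mirror a site whose pages end in .asp, appending .html locally so a
# stock web server serves them with the right content type.
wget -E -r http://example.com/
```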

       --http-user=user
       --http-password=password
           Specify the username user and password password on an HTTP server.  According to the type of the challenge, Wget will encode them using
           either the "basic" (insecure), the "digest", or the Windows "NTLM" authentication scheme.

           Another way to specify username and password is in the URL itself.  Either method reveals your password to anyone who bothers to run "ps".
           To prevent the passwords from being seen, use the --use-askpass or store them in .wgetrc or .netrc, and make sure to protect those files from
           other users with "chmod".  If the passwords are really important, do not leave them lying in those files either---edit the files and delete
           them after Wget has started the download.
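           As the paragraph above suggests, credentials are safer in a protected file than on the command line.  A minimal ~/.netrc sketch
           (hostname and credentials are placeholders):

```shell
# Store credentials in ~/.netrc and lock it down; Wget reads it
# automatically for matching hosts.
cat >> ~/.netrc <<'EOF'
machine example.com login myuser password mypass
EOF
chmod 600 ~/.netrc
wget http://example.com/private/report.pdf
```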

       --no-http-keep-alive
           Turn off the "keep-alive" feature for HTTP downloads.  Normally, Wget asks the server to keep the connection open so that, when you download
           more than one document from the same server, they get transferred over the same TCP connection.  This saves time and at the same time reduces
           the load on the server.

           This option is useful when, for some reason, persistent (keep-alive) connections don't work for you, for example due to a server bug or due
           to the inability of server-side scripts to cope with the connections.

       --no-cache
           Disable server-side cache.  In this case, Wget will send the remote server appropriate directives (Cache-Control: no-cache and Pragma: no-
           cache) to get the file from the remote service, rather than returning the cached version. This is especially useful for retrieving and
           flushing out-of-date documents on proxy servers.

           Caching is allowed by default.
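           For example, forcing a fresh copy of a document that an intermediate proxy may have cached (placeholder URL):

```shell
# Send Cache-Control: no-cache and Pragma: no-cache so proxies bypass
# their cache and fetch from the origin server.
wget --no-cache http://example.com/latest-status.html
```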

       --no-cookies
           Disable the use of cookies.  Cookies are a mechanism for maintaining server-side state.  The server sends the client a cookie using the
           "Set-Cookie" header, and the client responds with the same cookie upon further requests.  Since cookies allow the server owners to keep track
           of visitors and for sites to exchange this information, some consider them a breach of privacy.  The default is to use cookies; however,
           storing cookies is not on by default.

       --load-cookies file
           Load cookies from file before the first HTTP retrieval.  file is a textual file in the format originally used by Netscape's cookies.txt file.

           You will typically use this option when mirroring sites that require that you be logged in to access some or all of their content.  The login
           process typically works by the web server issuing an HTTP cookie upon receiving and verifying your credentials.  The cookie is then resent by
           the browser when accessing that part of the site, and so proves your identity.

           Mirroring such a site requires Wget to send the same cookies your browser sends when communicating with the site.  This is achieved by
           --load-cookies---simply point Wget to the location of the cookies.txt file, and it will send the same cookies your browser would send in the
           same situation.  Different browsers keep textual cookie files in different locations:

           "Netscape 4.x."
               The cookies are in ~/.netscape/cookies.txt.

           "Mozilla and Netscape 6.x."
               Mozilla's cookie file is also named cookies.txt, located somewhere under ~/.mozilla, in the directory of your profile.  The full path
               usually ends up looking somewhat like ~/.mozilla/default/some-weird-string/cookies.txt.

           "Internet Explorer."
               You can produce a cookie file Wget can use by using the File menu, Import and Export, Export Cookies.  This has been tested with Internet
               Explorer 5; it is not guaranteed to work with earlier versions.

           "Other browsers."
               If you are using a different browser to create your cookies, --load-cookies will only work if you can locate or produce a cookie file in
               the Netscape format that Wget expects.

           If you cannot use --load-cookies, there might still be an alternative.  If your browser supports a "cookie manager", you can use it to view
           the cookies used when accessing the site you're mirroring.  Write down the name and value of the cookie, and manually instruct Wget to send
           those cookies, bypassing the "official" cookie support:

                   wget --no-cookies --header "Cookie: <name>=<value>"

       --save-cookies file
           Save cookies to file before exiting.  This will not save cookies that have expired or that have no expiry time (so-called "session cookies"),
           but also see --keep-session-cookies.

       --keep-session-cookies
           When specified, causes --save-cookies to also save session cookies.  Session cookies are normally not saved because they are meant to be kept
           in memory and forgotten when you exit the browser.  Saving them is useful on sites that require you to log in or to visit the home page
           before you can access some pages.  With this option, multiple Wget runs are considered a single browser session as far as the site is
           concerned.

           Since the cookie file format does not normally carry session cookies, Wget marks them with an expiry timestamp of 0.  Wget's --load-cookies
           recognizes those as session cookies, but it might confuse other browsers.  Also note that cookies so loaded will be treated as other session
           cookies, which means that if you want --save-cookies to preserve them again, you must use --keep-session-cookies again.

       --ignore-length
           Unfortunately, some HTTP servers (CGI programs, to be more precise) send out bogus "Content-Length" headers, which makes Wget go wild, as it
           thinks not all the document was retrieved.  You can spot this syndrome if Wget retries getting the same document again and again, each time
           claiming that the (otherwise normal) connection has closed on the very same byte.

           With this option, Wget will ignore the "Content-Length" header---as if it never existed.

       --header=header-line
           Send header-line along with the rest of the headers in each HTTP request.  The supplied header is sent as-is, which means it must contain
           name and value separated by colon, and must not contain newlines.

           You may define more than one additional header by specifying --header more than once.

                   wget --header='Accept-Charset: iso-8859-2' \
                        --header='Accept-Language: hr'        \
                          http://fly.srk.fer.hr/

           Specification of an empty string as the header value will clear all previous user-defined headers.

           As of Wget 1.10, this option can be used to override headers otherwise generated automatically.  This example instructs Wget to connect to
           localhost, but to specify foo.bar in the "Host" header:

                   wget --header="Host: foo.bar" http://localhost/

           In versions of Wget prior to 1.10 such use of --header caused sending of duplicate headers.

       --compression=type
           Choose the type of compression to be used.  Legal values are auto, gzip and none.

           If auto or gzip are specified, Wget asks the server to compress the file using the gzip compression format. If the server compresses the file
           and responds with the "Content-Encoding" header field set appropriately, the file will be decompressed automatically.

           If none is specified, wget will not ask the server to compress the file and will not decompress any server responses. This is the default.

           Compression support is currently experimental. In case it is turned on, please report any bugs to "bug-wget@gnu.org".

       --max-redirect=number
           Specifies the maximum number of redirections to follow for a resource.  The default is 20, which is usually far more than necessary. However,
           on those occasions where you want to allow more (or fewer), this is the option to use.
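           For example, to tolerate only a short redirect chain (placeholder URL):

```shell
# Follow at most 5 redirections; abort with an error beyond that.
wget --max-redirect=5 http://example.com/short-link
```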

       --proxy-user=user
       --proxy-password=password
           Specify the username user and password password for authentication on a proxy server.  Wget will encode them using the "basic" authentication
           scheme.

           Security considerations similar to those with --http-password pertain here as well.

       --referer=url
           Include `Referer: url' header in HTTP request.  Useful for retrieving documents with server-side processing that assume they are always being
           retrieved by interactive web browsers and only come out properly when Referer is set to one of the pages that point to them.

       --save-headers
           Save the headers sent by the HTTP server to the file, preceding the actual contents, with an empty line as the separator.

       -U agent-string
       --user-agent=agent-string
           Identify as agent-string to the HTTP server.

           The HTTP protocol allows the clients to identify themselves using a "User-Agent" header field.  This enables distinguishing the WWW software,
           usually for statistical purposes or for tracing of protocol violations.  Wget normally identifies as Wget/version, version being the current
           version number of Wget.

           However, some sites have been known to impose the policy of tailoring the output according to the "User-Agent"-supplied information.  While
           this is not such a bad idea in theory, it has been abused by servers denying information to clients other than (historically) Netscape or,
           more frequently, Microsoft Internet Explorer.  This option allows you to change the "User-Agent" line issued by Wget.  Use of this option is
           discouraged, unless you really know what you are doing.

           Specifying empty user agent with --user-agent="" instructs Wget not to send the "User-Agent" header in HTTP requests.
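           Both uses can be sketched as follows (the agent string is illustrative, not a recommendation, and the URL is a placeholder):

```shell
# Identify as a generic browser string:
wget --user-agent="Mozilla/5.0 (X11; Linux x86_64)" http://example.com/

# Send no User-Agent header at all:
wget --user-agent="" http://example.com/
```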

       --post-data=string
       --post-file=file
           Use POST as the method for all HTTP requests and send the specified data in the request body.  --post-data sends string as data, whereas
           --post-file sends the contents of file.  Other than that, they work in exactly the same way. In particular, they both expect content of the
           form "key1=value1&key2=value2", with percent-encoding for special characters; the only difference is that one expects its content as a
           command-line parameter and the other accepts its content from a file. In particular, --post-file is not for transmitting files as form
           attachments: those must appear as "key=value" data (with appropriate percent-coding) just like everything else. Wget does not currently
           support "multipart/form-data" for transmitting POST data; only "application/x-www-form-urlencoded". Only one of --post-data and --post-file
           should be specified.

           Please note that wget does not require the content to be of the form "key1=value1&key2=value2", and neither does it test for it. Wget will
           simply transmit whatever data is provided to it. Most servers however expect the POST data to be in the above format when processing HTML
           Forms.

           When sending a POST request using the --post-file option, Wget treats the file as a binary file and will send every character in the POST
           request without stripping trailing newline or formfeed characters. Any other control characters in the text will also be sent as-is in the
           POST request.

           Please be aware that Wget needs to know the size of the POST data in advance.  Therefore the argument to "--post-file" must be a regular
           file; specifying a FIFO or something like /dev/stdin won't work.  It's not quite clear how to work around this limitation inherent in
           HTTP/1.0.  Although HTTP/1.1 introduces chunked transfer that doesn't require knowing the request length in advance, a client can't use
           chunked unless it knows it's talking to an HTTP/1.1 server.  And it can't know that until it receives a response, which in turn requires the
           request to have been completed -- a chicken-and-egg problem.

           Note: As of version 1.15 if Wget is redirected after the POST request is completed, its behaviour will depend on the response code returned
           by the server.  In case of a 301 Moved Permanently, 302 Moved Temporarily or 307 Temporary Redirect, Wget will, in accordance with RFC2616,
           continue to send a POST request.  In case a server wants the client to change the Request method upon redirection, it should send a 303 See
           Other response code.

           This example shows how to log in to a server using POST and then proceed to download the desired pages, presumably only accessible to
           authorized users:

                   # Log in to the server.  This can be done only once.
                   wget --save-cookies cookies.txt \
                        --post-data 'user=foo&password=bar' \
                        http://example.com/auth.php

                   # Now grab the page or pages we care about.
                   wget --load-cookies cookies.txt \
                        -p http://example.com/interesting/article.php

           If the server is using session cookies to track user authentication, the above will not work because --save-cookies will not save them (and
           neither will browsers) and the cookies.txt file will be empty.  In that case use --keep-session-cookies along with --save-cookies to force
           saving of session cookies.

       --method=HTTP-Method
           For the purpose of RESTful scripting, Wget allows sending of other HTTP Methods without the need to explicitly set them using
           --header=Header-Line.  Wget will use whatever string is passed to it after --method as the HTTP Method to the server.

       --body-data=Data-String
       --body-file=Data-File
           Must be set when additional data needs to be sent to the server along with the Method specified using --method.  --body-data sends string as
           data, whereas --body-file sends the contents of file.  Other than that, they work in exactly the same way.

           Currently, --body-file is not for transmitting files as a whole.  Wget does not currently support "multipart/form-data" for transmitting
           data; only "application/x-www-form-urlencoded". In the future, this may be changed so that wget sends the --body-file as a complete file
           instead of sending its contents to the server. Please be aware that Wget needs to know the contents of BODY Data in advance, and hence the
           argument to --body-file should be a regular file. See --post-file for a more detailed explanation.  Only one of --body-data and --body-file
           should be specified.

           If Wget is redirected after the request is completed, Wget will suspend the current method and send a GET request until the redirection is
           completed.  This is true for all redirection response codes except 307 Temporary Redirect which is used to explicitly specify that the
           request method should not change.  Another exception is when the method is set to "POST", in which case the redirection rules specified under
           --post-data are followed.
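           A RESTful sketch combining --method with --body-data (the endpoint and JSON payload are hypothetical):

```shell
# Send a PUT request with a JSON body; the Content-Type header must be
# supplied explicitly, since Wget does not set it for you.
wget --method=PUT \
     --header='Content-Type: application/json' \
     --body-data='{"status": "done"}' \
     http://example.com/api/tasks/42
```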

       --content-disposition
           If this is set to on, experimental (not fully-functional) support for "Content-Disposition" headers is enabled. This can currently result in
           extra round-trips to the server for a "HEAD" request, and is known to suffer from a few bugs, which is why it is not currently enabled by
           default.

           This option is useful for some file-downloading CGI programs that use "Content-Disposition" headers to describe what the name of a downloaded
           file should be.

           When combined with --metalink-over-http and --trust-server-names, a Content-Type: application/metalink4+xml file is named using the
           "Content-Disposition" filename field, if available.

       --content-on-error
           If this is set to on, wget will not skip the content when the server responds with an HTTP status code that indicates an error.

       --trust-server-names
           If this is set, on a redirect, the local file name will be based on the redirection URL.  By default the local file name is based on the
           original URL.  When doing recursive retrieving this can be helpful because in many web sites redirected URLs correspond to an underlying file
           structure, while link URLs do not.

       --auth-no-challenge
           If this option is given, Wget will send Basic HTTP authentication information (plaintext username and password) for all requests, just like
           Wget 1.10.2 and prior did by default.

           Use of this option is not recommended, and is intended only to support some few obscure servers, which never send HTTP authentication
           challenges, but accept unsolicited auth info, say, in addition to form-based authentication.

       --retry-on-host-error
           Consider host errors, such as "Temporary failure in name resolution", as non-fatal, transient errors.

       --retry-on-http-error=code[,code,...]
           Consider given HTTP response codes as non-fatal, transient errors.  Supply a comma-separated list of 3-digit HTTP response codes as argument.
           Useful to work around special circumstances where retries are required, but the server responds with an error code normally not retried by
           Wget. Such errors might be 503 (Service Unavailable) and 429 (Too Many Requests). Retries enabled by this option are performed subject to the
           normal retry timing and retry count limitations of Wget.

           Using this option is intended to support special use cases only and is generally not recommended, as it can force retries even in cases where
           the server is actually trying to decrease its load. Please use wisely and only if you know what you are doing.
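           Combining the option with the usual retry controls, the 503/429 case mentioned above might look like (placeholder URL):

```shell
# Retry on rate limiting (429) and transient unavailability (503),
# subject to Wget's normal retry count and timing limits.
wget --retry-on-http-error=429,503 --tries=5 --waitretry=10 \
     http://example.com/busy-resource
```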

HTTPS (SSL/TLS) Options

       To support encrypted HTTP (HTTPS) downloads, Wget must be compiled with an external SSL library. The current default is GnuTLS.  In addition,
       Wget also supports HSTS (HTTP Strict Transport Security).  If Wget is compiled without SSL support, none of these options are available.

       --secure-protocol=protocol
           Choose the secure protocol to be used.  Legal values are auto, SSLv2, SSLv3, TLSv1, TLSv1_1, TLSv1_2, TLSv1_3 and PFS.  If auto is used, the
           SSL library is given the liberty of choosing the appropriate protocol automatically, which is achieved by sending a TLSv1 greeting. This is
           the default.

           Specifying SSLv2, SSLv3, TLSv1, TLSv1_1, TLSv1_2 or TLSv1_3 forces the use of the corresponding protocol.  This is useful when talking to old
           and buggy SSL server implementations that make it hard for the underlying SSL library to choose the correct protocol version.  Fortunately,
           such servers are quite rare.

           Specifying PFS enforces the use of so-called Perfect Forward Secrecy cipher suites. In short, PFS adds security by creating a one-time
           key for each SSL connection. It has a bit more CPU impact on client and server.  We use known-to-be-secure ciphers (e.g. no MD4) and the
           TLS protocol. This mode also explicitly excludes non-PFS key exchange methods, such as RSA.

       --https-only
           When in recursive mode, only HTTPS links are followed.

       --ciphers
           Set the cipher list string. Typically this string sets the cipher suites and other SSL/TLS options that the user wishes to be used, in a
           set order of preference (GnuTLS calls it a 'priority string'). This string will be fed verbatim to the SSL/TLS engine (OpenSSL or GnuTLS)
           and hence its format and syntax depend on that engine. Wget will not process or manipulate it in any way. Refer to the OpenSSL or GnuTLS
           documentation for more information.

       --no-check-certificate
           Don't check the server certificate against the available certificate authorities.  Also don't require the URL host name to match the common
           name presented by the certificate.

           As of Wget 1.10, the default is to verify the server's certificate against the recognized certificate authorities, breaking the SSL handshake
           and aborting the download if the verification fails.  Although this provides more secure downloads, it does break interoperability with some
           sites that worked with previous Wget versions, particularly those using self-signed, expired, or otherwise invalid certificates.  This option
           forces an "insecure" mode of operation that turns the certificate verification errors into warnings and allows you to proceed.

           If you encounter "certificate verification" errors or ones saying that "common name doesn't match requested host name", you can use this
           option to bypass the verification and proceed with the download.  Only use this option if you are otherwise convinced of the site's
           authenticity, or if you really don't care about the validity of its certificate.  It is almost always a bad idea not to check the
           certificates when transmitting confidential or important data.  For self-signed/internal certificates, you should download the certificate
           and verify against that instead of forcing this insecure mode.  If you are certain you do not want any certificate verification, you can
           specify --check-certificate=quiet to tell Wget not to print any warning about invalid certificates, although in most cases this is the wrong
           thing to do.
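
           For the self-signed case, the idea is to obtain the certificate once and then point --ca-certificate at it.  A local sketch of the
           verification step (all names are placeholders; in practice you would fetch the certificate from the server, e.g. with openssl s_client,
           rather than generating one):

```shell
# Stand-in for the server's self-signed certificate.
openssl req -x509 -newkey rsa:2048 -nodes -keyout internal.key \
        -out internal.pem -subj /CN=internal.example.com -days 1 2>/dev/null

# Verifying against the saved certificate succeeds...
openssl verify -CAfile internal.pem internal.pem
# → internal.pem: OK

# ...so Wget can check it too, without --no-check-certificate:
#   wget --ca-certificate=internal.pem https://internal.example.com/file
```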

       --certificate=file
           Use the client certificate stored in file.  This is needed for servers that are configured to require certificates from the clients that
           connect to them.  Normally a certificate is not required and this switch is optional.

       --certificate-type=type
           Specify the type of the client certificate.  Legal values are PEM (assumed by default) and DER, also known as ASN1.

       --private-key=file
           Read the private key from file.  This allows you to provide the private key in a file separate from the certificate.

       --private-key-type=type
           Specify the type of the private key.  Accepted values are PEM (the default) and DER.

       --ca-certificate=file
           Use file as the file with the bundle of certificate authorities ("CA") to verify the peers.  The certificates must be in PEM format.

           Without this option Wget looks for CA certificates at the system-specified locations, chosen at OpenSSL installation time.

       --ca-directory=directory
           Specifies directory containing CA certificates in PEM format.  Each file contains one CA certificate, and the file name is based on a hash
           value derived from the certificate.  This is achieved by processing a certificate directory with the "c_rehash" utility supplied with
           OpenSSL.  Using --ca-directory is more efficient than --ca-certificate when many certificates are installed because it allows Wget to fetch
           certificates on demand.

           Without this option Wget looks for CA certificates at the system-specified locations, chosen at OpenSSL installation time.

       --crl-file=file
           Specifies a CRL file in file.  This is needed for certificates that have been revoked by the CAs.

       --pinnedpubkey=file/hashes
           Tells wget to use the specified public key file (or hashes) to verify the peer.  This can be a path to a file which contains a single public
           key in PEM or DER format, or any number of base64 encoded sha256 hashes preceded by "sha256//" and separated by ";".

           When negotiating a TLS or SSL connection, the server sends a certificate indicating its identity. A public key is extracted from this
           certificate and if it does not exactly match the public key(s) provided to this option, wget will abort the connection before sending or
           receiving any data.
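
           As a sketch, one way to derive a "sha256//" value is to extract the public key with OpenSSL and hash it.  Here a throwaway self-signed
           certificate stands in for the server's; in practice you would use the server's real certificate:

```shell
# Generate a throwaway self-signed certificate to stand in for the server's.
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out server.crt \
        -subj /CN=example.com -days 1 2>/dev/null

# Extract the public key, DER-encode it, hash it, and base64 the result.
pin=$(openssl x509 -in server.crt -pubkey -noout \
      | openssl pkey -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -binary \
      | openssl base64)
echo "sha256//$pin"

# Then, e.g.:  wget --pinnedpubkey="sha256//$pin" https://example.com/
```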

       --random-file=file
           [OpenSSL and LibreSSL only] Use file as the source of random data for seeding the pseudo-random number generator on systems without
           /dev/urandom.

           On such systems the SSL library needs an external source of randomness to initialize.  Randomness may be provided by EGD (see --egd-file
           below) or read from an external source specified by the user.  If this option is not specified, Wget looks for random data in $RANDFILE or,
           if that is unset, in $HOME/.rnd.

           If you're getting the "Could not seed OpenSSL PRNG; disabling SSL."  error, you should provide random data using some of the methods
           described above.

       --egd-file=file
           [OpenSSL only] Use file as the EGD socket.  EGD stands for Entropy Gathering Daemon, a user-space program that collects data from various
           unpredictable system sources and makes it available to other programs that might need it.  Encryption software, such as the SSL library,
           needs sources of non-repeating randomness to seed the random number generator used to produce cryptographically strong keys.

           OpenSSL allows the user to specify their own source of entropy using the "RAND_FILE" environment variable.  If this variable is unset, or if
           the specified file does not produce enough randomness, OpenSSL will read random data from EGD socket specified using this option.

           If this option is not specified (and the equivalent startup command is not used), EGD is never contacted.  EGD is not needed on modern Unix
           systems that support /dev/urandom.

       --no-hsts
           Wget supports HSTS (HTTP Strict Transport Security, RFC 6797) by default.  Use --no-hsts to make Wget act as a non-HSTS-compliant UA. As a
           consequence, Wget would ignore all the "Strict-Transport-Security" headers, and would not enforce any existing HSTS policy.

       --hsts-file=file
           By default, Wget stores its HSTS database in ~/.wget-hsts.  You can use --hsts-file to override this. Wget will use the supplied file as the
           HSTS database. The file must conform to the HSTS database format used by Wget. If Wget cannot parse the provided file, the behaviour is
           unspecified.

           Wget's HSTS database is a plain text file. Each line contains an HSTS entry (i.e. a site that has issued a "Strict-Transport-Security"
           header and that therefore has specified a concrete HSTS policy to be applied). Lines starting with a hash ("#") are ignored by Wget. Please
           note that in spite of this convenient human readability, hand-editing the HSTS database is generally not a good idea.

           An HSTS entry line consists of several fields separated by one or more whitespace:

           "<hostname> SP [<port>] SP <include subdomains> SP <created> SP <max-age>"

           The hostname and port fields indicate the hostname and port to which the given HSTS policy applies. The port field may be zero, and it will
           be in most cases. A zero port means that the port number will not be taken into account when deciding whether the HSTS policy should be
           applied to a given request (only the hostname will be evaluated). When port is different from zero, both the target hostname and the port
           will be evaluated and the HSTS policy will only be applied if both of them match. This feature has been included for testing/development
           purposes only.  The Wget testsuite (in testenv/) creates HSTS databases with explicit ports to ensure Wget's correct behaviour.  Applying
           HSTS policies to ports other than the default ones is discouraged by RFC 6797 (see Appendix B "Differences between HSTS Policy and
           Same-Origin Policy"). Thus, this functionality should not be used in production environments and port will typically be zero. The last
           three fields do what they are expected to. The field include_subdomains can either be 1 or 0 and signals whether the subdomains of the
           target domain should be part of the given HSTS policy as well. The created and max-age fields hold the timestamp of when the entry was
           created (first seen by Wget) and the HSTS-defined value 'max-age', which states how long that HSTS policy should remain active, measured in
           seconds elapsed since the timestamp stored in created. Once that time has passed, the HSTS policy will no longer be valid and will
           eventually be removed from the database.
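
           As an illustration, the following sketch writes a hypothetical entry (host example.com, any port, subdomains included, created at Unix time
           1395266778, max-age of one year) and splits it back into the fields described above:

```shell
# Write a hypothetical HSTS entry; fields are separated by whitespace.
printf '%s\t%s\t%s\t%s\t%s\n' example.com 0 1 1395266778 31536000 > hsts-sample

# Read it back, naming each field as in the format above.
awk '{ printf "host=%s port=%s subdomains=%s created=%s max-age=%s\n", $1, $2, $3, $4, $5 }' hsts-sample
# → host=example.com port=0 subdomains=1 created=1395266778 max-age=31536000
```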

           If you supply your own HSTS database via --hsts-file, be aware that Wget may modify the provided file if any change occurs between the HSTS
           policies requested by the remote servers and those in the file. When Wget exits, it effectively updates the HSTS database by rewriting the
           database file with the new entries.

           If the supplied file does not exist, Wget will create one. This file will contain the new HSTS entries. If no HSTS entries were generated (no
           "Strict-Transport-Security" headers were sent by any of the servers) then no file will be created, not even an empty one. This behaviour
           applies to the default database file (~/.wget-hsts) as well: it will not be created until some server enforces an HSTS policy.

           Care is taken not to override possible changes made by other Wget processes at the same time over the HSTS database. Before dumping the
           updated HSTS entries on the file, Wget will re-read it and merge the changes.

           Using a custom HSTS database and/or modifying an existing one is discouraged.  For more information about the potential security threats
           arising from such practice, see section 14 "Security Considerations" of RFC 6797, especially section 14.9 "Creative Manipulation of HSTS
           Policy Store".

       --warc-file=file
           Use file as the destination WARC file.

       --warc-header=string
           Use string as the warcinfo record.

       --warc-max-size=size
           Set the maximum size of the WARC files to size.

       --warc-cdx
           Write CDX index files.

       --warc-dedup=file
           Do not store records listed in this CDX file.

       --no-warc-compression
           Do not compress WARC files with GZIP.

       --no-warc-digests
           Do not calculate SHA1 digests.

       --no-warc-keep-log
           Do not store the log file in a WARC record.

       --warc-tempdir=dir
           Specify the location for temporary files created by the WARC writer.

FTP Options

       --ftp-user=user
       --ftp-password=password
           Specify the username user and password password on an FTP server.  Without this, or the corresponding startup option, the password defaults
           to -wget@, normally used for anonymous FTP.

           Another way to specify username and password is in the URL itself.  Either method reveals your password to anyone who bothers to run "ps".
           To prevent the passwords from being seen, store them in .wgetrc or .netrc, and make sure to protect those files from other users with
           "chmod".  If the passwords are really important, do not leave them lying in those files either---edit the files and delete them after Wget
           has started the download.

       --no-remove-listing
           Don't remove the temporary .listing files generated by FTP retrievals.  Normally, these files contain the raw directory listings received
           from FTP servers.  Not removing them can be useful for debugging purposes, or when you want to be able to easily check on the contents of
           remote server directories (e.g. to verify that a mirror you're running is complete).

           Note that even though Wget writes to a known filename for this file, this is not a security hole in the scenario of a user making .listing a
           symbolic link to /etc/passwd or something and asking "root" to run Wget in his or her directory.  Depending on the options used, either Wget
           will refuse to write to .listing, making the globbing/recursion/time-stamping operation fail, or the symbolic link will be deleted and
           replaced with the actual .listing file, or the listing will be written to a .listing.number file.

           Even though this situation isn't a problem, though, "root" should never run Wget in a non-trusted user's directory.  A user could do
           something as simple as linking index.html to /etc/passwd and asking "root" to run Wget with -N or -r so the file will be overwritten.

       --no-glob
           Turn off FTP globbing.  Globbing refers to the use of shell-like special characters (wildcards), like *, ?, [ and ] to retrieve more than one
           file from the same directory at once, like:

                   wget ftp://gnjilux.srk.fer.hr/*.msg

           By default, globbing will be turned on if the URL contains a globbing character.  This option may be used to turn globbing on or off
           permanently.

           You may have to quote the URL to protect it from being expanded by your shell.  Globbing makes Wget look for a directory listing, which is
           system-specific.  This is why it currently works only with Unix FTP servers (and the ones emulating Unix "ls" output).

       --no-passive-ftp
           Disable the use of the passive FTP transfer mode.  Passive FTP mandates that the client connect to the server to establish the data
           connection rather than the other way around.

           If the machine is connected to the Internet directly, both passive and active FTP should work equally well.  Behind most firewall and NAT
           configurations passive FTP has a better chance of working.  However, in some rare firewall configurations, active FTP actually works when
           passive FTP doesn't.  If you suspect this to be the case, use this option, or set "passive_ftp=off" in your init file.

       --preserve-permissions
           Preserve remote file permissions instead of permissions set by umask.

       --retr-symlinks
           By default, when retrieving FTP directories recursively and a symbolic link is encountered, the symbolic link is traversed and the pointed-to
           files are retrieved.  Currently, Wget does not traverse symbolic links to directories to download them recursively, though this feature may
           be added in the future.

           When --retr-symlinks=no is specified, the linked-to file is not downloaded.  Instead, a matching symbolic link is created on the local
           filesystem.  The pointed-to file will not be retrieved unless this recursive retrieval would have encountered it separately and downloaded it
           anyway.  This option poses a security risk where a malicious FTP Server may cause Wget to write to files outside of the intended directories
           through a specially crafted .LISTING file.

           Note that when retrieving a file (not a directory) because it was specified on the command-line, rather than because it was recursed to, this
           option has no effect.  Symbolic links are always traversed in this case.

FTPS Options

       --ftps-implicit
           This option tells Wget to use FTPS implicitly. Implicit FTPS consists of initializing SSL/TLS from the very beginning of the control
           connection. This option does not send an "AUTH TLS" command: it assumes the server speaks FTPS and directly starts an SSL/TLS connection. If
           the attempt is successful, the session continues just like regular FTPS ("PBSZ" and "PROT" are sent, etc.).  Implicit FTPS is no longer a
           requirement for FTPS implementations, and thus many servers may not support it. If --ftps-implicit is passed and no explicit port number
           specified, the default port for implicit FTPS, 990, will be used, instead of the default port for the "normal" (explicit) FTPS which is the
           same as that of FTP, 21.

       --no-ftps-resume-ssl
           Do not resume the SSL/TLS session in the data channel. When starting a data connection, Wget tries to resume the SSL/TLS session previously
           started in the control connection.  SSL/TLS session resumption avoids performing an entirely new handshake by reusing the SSL/TLS parameters
           of a previous session. Typically, the FTPS servers want it that way, so Wget does this by default. Under rare circumstances however, one
           might want to start an entirely new SSL/TLS session in every data connection.  This is what --no-ftps-resume-ssl is for.

       --ftps-clear-data-connection
           All the data connections will be in plain text. Only the control connection will be under SSL/TLS. Wget will send a "PROT C" command to
           achieve this, which must be approved by the server.

       --ftps-fallback-to-ftp
           Fall back to FTP if FTPS is not supported by the target server. For security reasons, this option is not asserted by default. The default
           behaviour is to exit with an error.  If a server does not successfully reply to the initial "AUTH TLS" command, or in the case of implicit
           FTPS, if the initial SSL/TLS connection attempt is rejected, the server is considered not to support FTPS.

Recursive Retrieval Options

       -r
       --recursive
           Turn on recursive retrieving.  The default maximum depth is 5.

       -l depth
       --level=depth
           Set the maximum number of subdirectories that Wget will recurse into to depth.  In order to prevent one from accidentally downloading very
           large websites when using recursion this is limited to a depth of 5 by default, i.e., it will traverse at most 5 directories deep starting
           from the provided URL.  Set -l 0 or -l inf for infinite recursion depth.

                   wget -r -l 0 http://<site>/1.html

           Ideally, one would expect this to download just 1.html, but unfortunately this is not the case, because -l 0 is equivalent to -l inf---that
           is, infinite recursion.  To download a single HTML page (or a handful of them), specify them all on the command line and leave off -r and
           -l. To download the essential items to view a single HTML page, see page requisites.

       --delete-after
           This option tells Wget to delete every single file it downloads, after having done so.  It is useful for pre-fetching popular pages through a
           proxy, e.g.:

                   wget -r -nd --delete-after http://whatever.com/~popular/page/

           The -r option is to retrieve recursively, and -nd to not create directories.

           Note that --delete-after deletes files on the local machine.  It does not issue the DELE command to remote FTP sites, for instance.  Also
           note that when --delete-after is specified, --convert-links is ignored, so .orig files are simply not created in the first place.

       -k
       --convert-links
           After the download is complete, convert the links in the document to make them suitable for local viewing.  This affects not only the visible
           hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-
           HTML content, etc.

           Each link will be changed in one of the two ways:

           •   The links to files that have been downloaded by Wget will be changed to refer to the file they point to as a relative link.

               Example: if the downloaded file /foo/doc.html links to /bar/img.gif, also downloaded, then the link in doc.html will be modified to point
               to ../bar/img.gif.  This kind of transformation works reliably for arbitrary combinations of directories.

           •   The links to files that have not been downloaded by Wget will be changed to include host name and absolute path of the location they
               point to.

               Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to ../bar/img.gif), then the link in doc.html will be modified to
               point to http://hostname/bar/img.gif.

           Because of this, local browsing works reliably: if a linked file was downloaded, the link will refer to its local name; if it was not
           downloaded, the link will refer to its full Internet address rather than presenting a broken link.  The fact that the former links are
           converted to relative links ensures that you can move the downloaded hierarchy to another directory.

           Note that only at the end of the download can Wget know which links have been downloaded.  Because of that, the work done by -k will be
           performed at the end of all the downloads.

       --convert-file-only
           This option converts only the filename part of the URLs, leaving the rest of the URLs untouched. This filename part is sometimes referred to
           as the "basename", although we avoid that term here in order not to cause confusion.

           It works particularly well in conjunction with --adjust-extension, although this coupling is not enforced. It proves useful to populate
           Internet caches with files downloaded from different hosts.

           Example: if some link points to //foo.com/bar.cgi?xyz with --adjust-extension asserted and its local destination is intended to be
           ./foo.com/bar.cgi?xyz.css, then the link would be converted to //foo.com/bar.cgi?xyz.css. Note that only the filename part has been modified.
           The rest of the URL has been left untouched, including the net path ("//") which would otherwise be processed by Wget and converted to the
           effective scheme (ie. "http://").

       -K
       --backup-converted
           When converting a file, back up the original version with a .orig suffix.  Affects the behavior of -N.

       -m
       --mirror
           Turn on options suitable for mirroring.  This option turns on recursion and time-stamping, sets infinite recursion depth and keeps FTP
           directory listings.  It is currently equivalent to -r -N -l inf --no-remove-listing.

       -p
       --page-requisites
           This option causes Wget to download all the files that are necessary to properly display a given HTML page.  This includes such things as
           inlined images, sounds, and referenced stylesheets.

           Ordinarily, when downloading a single HTML page, any requisite documents that may be needed to display it properly are not downloaded.  Using
           -r together with -l can help, but since Wget does not ordinarily distinguish between external and inlined documents, one is generally left
           with "leaf documents" that are missing their requisites.

           For instance, say document 1.html contains an "<IMG>" tag referencing 1.gif and an "<A>" tag pointing to external document 2.html.  Say that
           2.html is similar but that its image is 2.gif and it links to 3.html.  Say this continues up to some arbitrarily high number.

           If one executes the command:

                   wget -r -l 2 http://<site>/1.html

           then 1.html, 1.gif, 2.html, 2.gif, and 3.html will be downloaded.  As you can see, 3.html is without its requisite 3.gif because Wget is
           simply counting the number of hops (up to 2) away from 1.html in order to determine where to stop the recursion.  However, with this command:

                   wget -r -l 2 -p http://<site>/1.html

           all the above files and 3.html's requisite 3.gif will be downloaded.  Similarly,

                   wget -r -l 1 -p http://<site>/1.html

           will cause 1.html, 1.gif, 2.html, and 2.gif to be downloaded.  One might think that:

                   wget -r -l 0 -p http://<site>/1.html

           would download just 1.html and 1.gif, but unfortunately this is not the case, because -l 0 is equivalent to -l inf---that is, infinite
           recursion.  To download a single HTML page (or a handful of them, all specified on the command-line or in a -i URL input file) and its (or
           their) requisites, simply leave off -r and -l:

                   wget -p http://<site>/1.html

           Note that Wget will behave as if -r had been specified, but only that single page and its requisites will be downloaded.  Links from that
           page to external documents will not be followed.  Actually, to download a single page and all its requisites (even if they exist on separate
           websites), and make sure the lot displays properly locally, this author likes to use a few options in addition to -p:

                    wget -E -H -k -K -p http://<site>/<document>

           To finish off this topic, it's worth knowing that Wget's idea of an external document link is any URL specified in an "<A>" tag, an "<AREA>"
           tag, or a "<LINK>" tag other than "<LINK REL="stylesheet">".

       --strict-comments
           Turn on strict parsing of HTML comments.  The default is to terminate comments at the first occurrence of -->.

           According to specifications, HTML comments are expressed as SGML declarations.  Declaration is special markup that begins with <! and ends
           with >, such as <!DOCTYPE ...>, that may contain comments between a pair of -- delimiters.  HTML comments are "empty declarations", SGML
           declarations without any non-comment text.  Therefore, <!--foo--> is a valid comment, and so is <!--one-- --two-->, but <!--1--2--> is not.

           On the other hand, most HTML writers don't perceive comments as anything other than text delimited with <!-- and -->, which is not quite the
           same.  For example, something like <!------------> works as a valid comment as long as the number of dashes is a multiple of four (!).  If
           not, the comment technically lasts until the next --, which may be at the other end of the document.  Because of this, many popular browsers
           completely ignore the specification and implement what users have come to expect: comments delimited with <!-- and -->.

           Until version 1.9, Wget interpreted comments strictly, which resulted in missing links in many web pages that displayed fine in browsers, but
           had the misfortune of containing non-compliant comments.  Beginning with version 1.9, Wget has joined the ranks of clients that implement
           "naive" comments, terminating each comment at the first occurrence of -->.

           If, for whatever reason, you want strict comment parsing, use this option to turn it on.

Recursive Accept/Reject Options

       -A acclist --accept acclist
       -R rejlist --reject rejlist
           Specify comma-separated lists of file name suffixes or patterns to accept or reject. Note that if any of the wildcard characters, *, ?, [
           or ], appear in an element of acclist or rejlist, it will be treated as a pattern, rather than a suffix.  In this case, you have to enclose
           the pattern into quotes to prevent your shell from expanding it, like in -A "*.mp3" or -A '*.mp3'.
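
           The quoting matters because the shell may expand an unquoted pattern against files in the current directory before Wget ever sees it.  A
           quick local demonstration (directory and file names are made up):

```shell
# Create a scratch directory with a couple of matching files.
mkdir -p acclist-demo && cd acclist-demo && touch a.mp3 b.mp3

echo -A *.mp3      # unquoted: the shell expands the pattern first
# → -A a.mp3 b.mp3
echo -A '*.mp3'    # quoted: Wget would receive the literal pattern
# → -A *.mp3
```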

       --accept-regex urlregex
       --reject-regex urlregex
           Specify a regular expression to accept or reject the complete URL.

       --regex-type regextype
           Specify the regular expression type.  Possible types are posix or pcre.  Note that to be able to use pcre type, wget has to be compiled with
           libpcre support.

       -D domain-list
       --domains=domain-list
           Set domains to be followed.  domain-list is a comma-separated list of domains.  Note that it does not turn on -H.

       --exclude-domains domain-list
           Specify the domains that are not to be followed.

       --follow-ftp
           Follow FTP links from HTML documents.  Without this option, Wget will ignore all the FTP links.

       --follow-tags=list
           Wget has an internal table of HTML tag / attribute pairs that it considers when looking for linked documents during a recursive retrieval.
           If a user wants only a subset of those tags to be considered, however, they should specify such tags in a comma-separated list with this
           option.

       --ignore-tags=list
           This is the opposite of the --follow-tags option.  To skip certain HTML tags when recursively looking for documents to download, specify them
           in a comma-separated list.

           In the past, this option was the best bet for downloading a single page and its requisites, using a command-line like:

                    wget --ignore-tags=a,area -H -k -K -r http://<site>/<document>

           However, the author of this option came across a page with tags like "<LINK REL="home" HREF="/">" and came to the realization that specifying
           tags to ignore was not enough.  One can't just tell Wget to ignore "<LINK>", because then stylesheets will not be downloaded.  Now the best
           bet for downloading a single page and its requisites is the dedicated --page-requisites option.

       --ignore-case
           Ignore case when matching files and directories.  This influences the behavior of -R, -A, -I, and -X options, as well as globbing implemented
           when downloading from FTP sites.  For example, with this option, -A "*.txt" will match file1.txt, but also file2.TXT, file3.TxT, and so on.
           The quotes in the example are to prevent the shell from expanding the pattern.

       -H
       --span-hosts
           Enable spanning across hosts when doing recursive retrieving.

       -L
       --relative
           Follow relative links only.  Useful for retrieving a specific home page without any distractions, not even those from the same hosts.

       -I list
       --include-directories=list
           Specify a comma-separated list of directories you wish to follow when downloading.  Elements of list may contain wildcards.

       -X list
       --exclude-directories=list
           Specify a comma-separated list of directories you wish to exclude from download.  Elements of list may contain wildcards.

       -np
       --no-parent
           Do not ever ascend to the parent directory when retrieving recursively.  This is a useful option, since it guarantees that only the files
           below a certain hierarchy will be downloaded.

ENVIRONMENT

       Wget supports proxies for both HTTP and FTP retrievals.  The standard way to specify proxy location, which Wget recognizes, is using the
       following environment variables:

       http_proxy
       https_proxy
           If set, the http_proxy and https_proxy variables should contain the URLs of the proxies for HTTP and HTTPS connections respectively.

       ftp_proxy
           This variable should contain the URL of the proxy for FTP connections.  It is quite common that http_proxy and ftp_proxy are set to the same
           URL.

       no_proxy
           This variable should contain a comma-separated list of domain extensions for which the proxy should not be used.  For instance, if the
           value of no_proxy is .mit.edu, the proxy will not be used to retrieve documents from MIT.
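
           A typical setup might look like this (proxy host and port are placeholders):

```shell
# Route HTTP, HTTPS and FTP retrievals through one proxy,
# but bypass it for MIT hosts.
export http_proxy=http://proxy.example.com:3128/
export https_proxy=$http_proxy
export ftp_proxy=$http_proxy
export no_proxy=.mit.edu
```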

EXIT STATUS

Wget may return one of several error codes if it encounters problems.

0 No problems occurred.

1 Generic error code.

2 Parse error---for instance, when parsing command-line options, the .wgetrc or .netrc...

3 File I/O error.

4 Network failure.

5 SSL verification failure.

6 Username/password authentication failure.

7 Protocol errors.

8 Server issued an error response.

       With the exceptions of 0 and 1, the lower-numbered exit codes take precedence over higher-numbered ones, when multiple types of errors are
       encountered.

       In versions of Wget prior to 1.12, Wget's exit status tended to be unhelpful and inconsistent. Recursive downloads would virtually always return
       0 (success), regardless of any issues encountered, and non-recursive fetches only returned the status corresponding to the most recently-
       attempted download.
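
       In a script, the table above can be used to report what went wrong after a download attempt.  A small sketch (the function name is made up):

```shell
# Map Wget's numeric exit status to a short message, following the table above.
explain_wget_status() {
    case "$1" in
        0) echo "no problems occurred" ;;
        1) echo "generic error" ;;
        2) echo "parse error (command line, .wgetrc or .netrc)" ;;
        3) echo "file I/O error" ;;
        4) echo "network failure" ;;
        5) echo "SSL verification failure" ;;
        6) echo "username/password authentication failure" ;;
        7) echo "protocol error" ;;
        8) echo "server issued an error response" ;;
        *) echo "unknown exit status: $1" ;;
    esac
}

# Typical use:  wget -q "$url"; explain_wget_status $?
explain_wget_status 4
# → network failure
```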

FILES

       /etc/wgetrc
           Default location of the global startup file.

       .wgetrc
           User startup file.

BUGS

       You are welcome to submit bug reports via the GNU Wget bug tracker (see <https://savannah.gnu.org/bugs/?func=additem&group=wget>) or to our
       mailing list <bug-wget@gnu.org>.

       Visit <https://lists.gnu.org/mailman/listinfo/bug-wget> to get more info (how to subscribe, list archives, ...).

       Before actually submitting a bug report, please try to follow a few simple guidelines.

       1.  Please try to ascertain that the behavior you see really is a bug.  If Wget crashes, it's a bug.  If Wget does not behave as documented, it's
           a bug.  If things work strange, but you are not sure about the way they are supposed to work, it might well be a bug, but you might want to
           double-check the documentation and the mailing lists.

       2.  Try to repeat the bug in as simple circumstances as possible.  E.g. if Wget crashes while downloading wget -rl0 -kKE -t5 --no-proxy
           http://example.com -o /tmp/log, you should try to see if the crash is repeatable, and if it will occur with a simpler set of options.  You might
           even try to start the download at the page where the crash occurred to see if that page somehow triggered the crash.

           Also, while I will probably be interested to know the contents of your .wgetrc file, just dumping it into the debug message is probably a bad
           idea.  Instead, you should first try to see if the bug repeats with .wgetrc moved out of the way.  Only if it turns out that .wgetrc settings
           affect the bug, mail me the relevant parts of the file.

       3.  Please start Wget with -d option and send us the resulting output (or relevant parts thereof).  If Wget was compiled without debug support,
           recompile it---it is much easier to trace bugs with debug support on.

           Note: please make sure to remove any potentially sensitive information from the debug log before sending it to the bug address.  The "-d"
           won't go out of its way to collect sensitive information, but the log will contain a fairly complete transcript of Wget's communication with
           the server, which may include passwords and pieces of downloaded data.  Since the bug address is publicly archived, you may assume that all
           bug reports are visible to the public.

       4.  If Wget has crashed, try to run it in a debugger, e.g. "gdb `which wget` core" and type "where" to get the backtrace.  This may not work if
           the system administrator has disabled core files, but it is safe to try.

SEE ALSO

       This is not the complete manual for GNU Wget.  For more complete information, including more detailed explanations of some of the options, and a
       number of commands available for use with .wgetrc files and the -e option, see the GNU Info entry for wget.

       Also see wget2(1), the updated version of GNU Wget with even better support for recursive downloading and modern protocols like HTTP/2.

AUTHOR

       Originally written by Hrvoje Nikšić <hniksic@xemacs.org>.  Currently maintained by Darshit Shah <darnir@gnu.org> and Tim Rühsen
       <tim.ruehsen@gmx.de>.

COPYRIGHT

Copyright (c) 1996-2011, 2015, 2018-2020 Free Software Foundation, Inc.

       Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any
       later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts.  A
       copy of the license is included in the section entitled "GNU Free Documentation License".

GNU Wget 1.21                                                          2021-11-23                                                                WGET(1)

manpage tail

TAIL(1) User Commands TAIL(1)

NAME

tail – output the last part of files

SYNOPSIS

tail [OPTION]… [FILE]…

DESCRIPTION

Print the last 10 lines of each FILE to standard output. With more than one FILE, precede each with a header giving the file name.

With no FILE, or when FILE is -, read standard input.

Mandatory arguments to long options are mandatory for short options too.

       -c, --bytes=[+]NUM
              output the last NUM bytes; or use -c +NUM to output starting with byte NUM of each file

       -f, --follow[={name|descriptor}]
              output appended data as the file grows;
              an absent option argument means 'descriptor'

       -F     same as --follow=name --retry

       -n, --lines=[+]NUM
              output the last NUM lines, instead of the last 10; or use -n +NUM to output starting with line NUM

       --max-unchanged-stats=N
              with --follow=name, reopen a FILE which has not changed size after N (default 5) iterations to see if it has been unlinked or renamed
              (this is the usual case of rotated log files); with inotify, this option is rarely useful

       --pid=PID
              with -f, terminate after process ID, PID dies

       -q, --quiet, --silent
              never output headers giving file names

       --retry
              keep trying to open a file if it is inaccessible

       -s, --sleep-interval=N
              with -f, sleep for approximately N seconds (default 1.0) between iterations; with inotify and --pid=P, check process P at least once every
              N seconds

       -v, --verbose
              always output headers giving file names

       -z, --zero-terminated
              line delimiter is NUL, not newline

       --help display this help and exit

       --version
              output version information and exit

       NUM  may have a multiplier suffix: b 512, kB 1000, K 1024, MB 1000*1000, M 1024*1024, GB 1000*1000*1000, G 1024*1024*1024, and so on for T, P, E,
       Z, Y.  Binary prefixes can be used, too: KiB=K, MiB=M, and so on.
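
       For instance, assuming GNU tail, the decimal and binary suffixes can be compared on a scratch file:

```shell
# Create a 3000-byte scratch file (3000 zero characters, no newline).
printf '%03000d' 0 > /tmp/tail-demo.bin

tail -c 1K    /tmp/tail-demo.bin | wc -c   # last 1024 bytes (K = 1024)
tail -c 1kB   /tmp/tail-demo.bin | wc -c   # last 1000 bytes (kB = 1000)
tail -c +2001 /tmp/tail-demo.bin | wc -c   # from byte 2001 on: 1000 bytes
```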

       With --follow (-f), tail defaults to following the file descriptor, which means that even if a tail'ed file is renamed,  tail  will  continue  to
       track  its end.  This default behavior is not desirable when you really want to track the actual name of the file, not the file descriptor (e.g.,
       log rotation).  Use --follow=name in that case.  That causes tail to track the named file in a way that accommodates renaming, removal and
       creation.
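
       A sketch of name-based following across a rotation (the file paths are illustrative; --pid and -s are used here only to make the demo terminate):

```shell
: > /tmp/follow-demo.log
sleep 3 & watcher=$!

# Simulate a log writer that rotates the file mid-stream.
{
    sleep 1
    echo "before rotate" >> /tmp/follow-demo.log
    sleep 1
    mv /tmp/follow-demo.log /tmp/follow-demo.log.1   # rotation
    echo "after rotate" > /tmp/follow-demo.log       # new file, same name
} &

# --follow=name --retry (i.e. -F) reopens the new file after the rename;
# --pid makes tail exit once the watcher process dies.
tail --follow=name --retry --pid=$watcher -s 0.2 /tmp/follow-demo.log \
    > /tmp/follow-demo.out 2>/dev/null
wait
cat /tmp/follow-demo.out    # both lines appear despite the rotation
```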

AUTHOR

Written by Paul Rubin, David MacKenzie, Ian Lance Taylor, and Jim Meyering.

REPORTING BUGS

       GNU coreutils online help: <https://www.gnu.org/software/coreutils/>
       Report any translation bugs to <https://translationproject.org/team/>

COPYRIGHT

       Copyright © 2020 Free Software Foundation, Inc.  License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
       This is free software: you are free to change and redistribute it.  There is NO WARRANTY, to the extent permitted by law.

SEE ALSO

head(1)

       Full documentation <https://www.gnu.org/software/coreutils/tail>
       or available locally via: info '(coreutils) tail invocation'

GNU coreutils 8.32                                                   September 2020                                                              TAIL(1)

manpage sudo

SUDO(8) BSD System Manager’s Manual SUDO(8)

NAME

sudo, sudoedit — execute a command as another user

SYNOPSIS

     sudo -h | -K | -k | -V
     sudo -v [-ABknS] [-g group] [-h host] [-p prompt] [-u user]
     sudo -l [-ABknS] [-g group] [-h host] [-p prompt] [-U user] [-u user] [command]
     sudo [-ABbEHnPS] [-C num] [-D directory] [-g group] [-h host] [-p prompt] [-R directory] [-r role] [-t type] [-T timeout] [-u user] [VAR=value]
          [-i | -s] [command]
     sudoedit [-ABknS] [-C num] [-D directory] [-g group] [-h host] [-p prompt] [-R directory] [-r role] [-t type] [-T timeout] [-u user] file ...

DESCRIPTION

     sudo allows a permitted user to execute a command as the superuser or another user, as specified by the security policy.  The invoking user's real
     (not effective) user-ID is used to determine the user name with which to query the security policy.

     sudo supports a plugin architecture for security policies and input/output logging.  Third parties can develop and distribute their own policy and
     I/O logging plugins to work seamlessly with the sudo front end.  The default security policy is sudoers, which is configured via the file
     /etc/sudoers, or via LDAP.  See the Plugins section for more information.

     The security policy determines what privileges, if any, a user has to run sudo.  The policy may require that users authenticate themselves with a
     password or another authentication mechanism.  If authentication is required, sudo will exit if the user's password is not entered within a
     configurable time limit.  This limit is policy-specific; the default password prompt timeout for the sudoers security policy is 5 minutes.

     Security policies may support credential caching to allow the user to run sudo again for a period of time without requiring authentication.  By
     default, the sudoers policy caches credentials on a per-terminal basis for 15 minutes.  See the timestamp_type and timestamp_timeout options in
     sudoers(5) for more information.  By running sudo with the -v option, a user can update the cached credentials without running a command.

     On systems where sudo is the primary method of gaining superuser privileges, it is imperative to avoid syntax errors in the security policy
     configuration files.  For the default security policy, sudoers(5), changes to the configuration files should be made using the visudo(8) utility which
     will ensure that no syntax errors are introduced.

     When invoked as sudoedit, the -e option (described below), is implied.

     Security policies may log successful and failed attempts to use sudo.  If an I/O plugin is configured, the running command's input and output may
     be logged as well.

     The options are as follows:

     -A, --askpass
                 Normally, if sudo requires a password, it will read it from the user's terminal.  If the -A (askpass) option is specified, a (possibly
                 graphical) helper program is executed to read the user's password and output the password to the standard output.  If the SUDO_ASKPASS
                 environment variable is set, it specifies the path to the helper program.  Otherwise, if sudo.conf(5) contains a line specifying the
                 askpass program, that value will be used.  For example:

                     # Path to askpass helper program
                     Path askpass /usr/X11R6/bin/ssh-askpass

                 If no askpass program is available, sudo will exit with an error.

     -B, --bell  Ring the bell as part of the password prompt when a terminal is present.  This option has no effect if an askpass program is used.

     -b, --background
                 Run the given command in the background.  Note that it is not possible to use shell job control to manipulate background processes
                 started by sudo.  Most interactive commands will fail to work properly in background mode.

     -C num, --close-from=num
                 Close all file descriptors greater than or equal to num before executing a command.  Values less than three are not permitted.  By
                 default, sudo will close all open file descriptors other than standard input, standard output and standard error when executing a
                 command.  The security policy may restrict the user's ability to use this option.  The sudoers policy only permits use of the -C option
                 when the administrator has enabled the closefrom_override option.

     -D directory, --chdir=directory
                 Run the command in the specified directory instead of the current working directory.  The security policy may return an error if the
                 user does not have permission to specify the working directory.

     -E, --preserve-env
                 Indicates to the security policy that the user wishes to preserve their existing environment variables.  The security policy may return
                 an error if the user does not have permission to preserve the environment.

     --preserve-env=list
                 Indicates to the security policy that the user wishes to add the comma-separated list of environment variables to those preserved from
                 the user's environment.  The security policy may return an error if the user does not have permission to preserve the environment.
                 This option may be specified multiple times.

     -e, --edit  Edit one or more files instead of running a command.  In lieu of a path name, the string "sudoedit" is used when consulting the
                 security policy.  If the user is authorized by the policy, the following steps are taken:

                 1.   Temporary copies are made of the files to be edited with the owner set to the invoking user.

                 2.   The editor specified by the policy is run to edit the temporary files.  The sudoers policy uses the SUDO_EDITOR, VISUAL and EDITOR
                      environment variables (in that order).  If none of SUDO_EDITOR, VISUAL or EDITOR are set, the first program listed in the editor
                      sudoers(5) option is used.

                 3.   If they have been modified, the temporary files are copied back to their original location and the temporary versions are removed.

                 To help prevent the editing of unauthorized files, the following restrictions are enforced unless explicitly allowed by the security
                 policy:

                 •  Symbolic links may not be edited (version 1.8.15 and higher).

                 •  Symbolic links along the path to be edited are not followed when the parent directory is writable by the invoking user unless that
                    user is root (version 1.8.16 and higher).

                 •  Files located in a directory that is writable by the invoking user may not be edited unless that user is root (version 1.8.16 and
                    higher).

                 Users are never allowed to edit device special files.

                 If the specified file does not exist, it will be created.  Note that unlike most commands run by sudo, the editor is run with the
                 invoking user's environment unmodified.  If the temporary file becomes empty after editing, the user will be prompted before it is
                 installed.  If, for some reason, sudo is unable to update a file with its edited version, the user will receive a warning and the
                 edited copy will remain in a temporary file.
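
                 A typical invocation follows; the file path is illustrative, and the sudoedit line is shown commented out because it requires sudo privileges and an interactive terminal:

```shell
# sudoedit consults SUDO_EDITOR, VISUAL, then EDITOR (sudoers policy).
export SUDO_EDITOR="vi"

# Edits a temporary copy owned by you; the privileged copy-back happens
# only after the editor exits. (Commented out: requires sudo rights.)
# sudoedit /etc/hosts.allow
```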

     -g group, --group=group
                 Run the command with the primary group set to group instead of the primary group specified by the target user's password database
                 entry.  The group may be either a group name or a numeric group-ID (GID) prefixed with the ‘#’ character (e.g., #0 for GID 0).  When
                 running a command as a GID, many shells require that the ‘#’ be escaped with a backslash (‘\’).  If no -u option is specified, the
                 command will be run as the invoking user.  In either case, the primary group will be set to group.  The sudoers policy permits any of
                 the target user's groups to be specified via the -g option as long as the -P option is not in use.

     -H, --set-home
                 Request that the security policy set the HOME environment variable to the home directory specified by the target user's password
                 database entry.  Depending on the policy, this may be the default behavior.

     -h, --help  Display a short help message to the standard output and exit.

     -h host, --host=host
                 Run the command on the specified host if the security policy plugin supports remote commands.  Note that the sudoers plugin does not
                 currently support running remote commands.  This may also be used in conjunction with the -l option to list a user's privileges for the
                 remote host.

     -i, --login
                 Run the shell specified by the target user's password database entry as a login shell.  This means that login-specific resource files
                 such as .profile, .bash_profile or .login will be read by the shell.  If a command is specified, it is passed to the shell for
                 execution via the shell's -c option.  If no command is specified, an interactive shell is executed.  sudo attempts to change to that user's
                 home directory before running the shell.  The command is run with an environment similar to the one a user would receive at log in.
                 Note that most shells behave differently when a command is specified as compared to an interactive session; consult the shell's manual
                 for details.  The Command environment section in the sudoers(5) manual documents how the -i option affects the environment in which a
                 command is run when the sudoers policy is in use.

     -K, --remove-timestamp
                 Similar to the -k option, except that it removes the user's cached credentials entirely and may not be used in conjunction with a
                 command or other option.  This option does not require a password.  Not all security policies support credential caching.

     -k, --reset-timestamp
                 When used without a command, invalidates the user's cached credentials.  In other words, the next time sudo is run a password will be
                 required.  This option does not require a password and was added to allow a user to revoke sudo permissions from a .logout file.

                 When used in conjunction with a command or an option that may require a password, this option will cause sudo to ignore the user's
                 cached credentials.  As a result, sudo will prompt for a password (if one is required by the security policy) and will not update the
                 user's cached credentials.

                 Not all security policies support credential caching.

     -l, --list  If no command is specified, list the allowed (and forbidden) commands for the invoking user (or the user specified by the -U option) on
                 the current host.  A longer list format is used if this option is specified multiple times and the security policy supports a verbose
                 output format.

                 If a command is specified and is permitted by the security policy, the fully-qualified path to the command is displayed along with any
                 command line arguments.  If a command is specified but not allowed by the policy, sudo will exit with a status value of 1.

     -n, --non-interactive
                 Avoid prompting the user for input of any kind.  If a password is required for the command to run, sudo will display an error message
                 and exit.

     -P, --preserve-groups
                 Preserve the invoking user's group vector unaltered.  By default, the sudoers policy will initialize the group vector to the list of
                 groups the target user is a member of.  The real and effective group-IDs, however, are still set to match the target user.

     -p prompt, --prompt=prompt
                 Use a custom password prompt with optional escape sequences.  The following percent (‘%’) escape sequences are supported by the sudoers
                 policy:

                 %H  expanded to the host name including the domain name (only if the machine's host name is fully qualified or the fqdn option is set in
                     sudoers(5))

                 %h  expanded to the local host name without the domain name

                 %p  expanded to the name of the user whose password is being requested (respects the rootpw, targetpw, and runaspw flags in sudoers(5))

                 %U  expanded to the login name of the user the command will be run as (defaults to root unless the -u option is also specified)

                 %u  expanded to the invoking user's login name

                 %%  two consecutive ‘%’ characters are collapsed into a single ‘%’ character

                 The custom prompt will override the default prompt specified by either the security policy or the SUDO_PROMPT environment variable.  On
                 systems that use PAM, the custom prompt will also override the prompt specified by a PAM module unless the passprompt_override flag is
                 disabled in sudoers.

     -R directory, --chroot=directory
                 Change to the specified root directory (see chroot(8)) before running the command.  The security policy may return an error if the user
                 does not have permission to specify the root directory.

     -r role, --role=role
                 Run the command with an SELinux security context that includes the specified role.

     -S, --stdin
                 Write the prompt to the standard error and read the password from the standard input instead of using the terminal device.

     -s, --shell
                 Run the shell specified by the SHELL environment variable if it is set or the shell specified by the invoking user's password database
                 entry.  If a command is specified, it is passed to the shell for execution via the shell's -c option.  If no command is specified, an
                 interactive shell is executed.  Note that most shells behave differently when a command is specified as compared to an interactive
                 session; consult the shell's manual for details.

     -t type, --type=type
                 Run the command with an SELinux security context that includes the specified type.  If no type is specified, the default type is
                 derived from the role.

     -U user, --other-user=user
                 Used in conjunction with the -l option to list the privileges for user instead of for the invoking user.  The security policy may
                 restrict listing other users' privileges.  The sudoers policy only allows root or a user with the ALL privilege on the current host to
                 use this option.

     -T timeout, --command-timeout=timeout
                 Used to set a timeout for the command.  If the timeout expires before the command has exited, the command will be terminated.  The
                 security policy may restrict the ability to set command timeouts.  The sudoers policy requires that user-specified timeouts be explicitly
                 enabled.

     -u user, --user=user
                 Run the command as a user other than the default target user (usually root).  The user may be either a user name or a numeric user-ID
                 (UID) prefixed with the ‘#’ character (e.g., #0 for UID 0).  When running commands as a UID, many shells require that the ‘#’ be
                 escaped with a backslash (‘\’).  Some security policies may restrict UIDs to those listed in the password database.  The sudoers policy
                 allows UIDs that are not in the password database as long as the targetpw option is not set.  Other security policies may not support
                 this.
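
                 Because ‘#’ starts a comment in most shells, the numeric form has to be quoted or escaped; a sketch (UID 1000 is illustrative, and the sudo lines are commented out because they require sudo privileges):

```shell
# Quote the '#' so it reaches sudo instead of starting a shell comment.
uid_arg='#1000'
echo "$uid_arg"         # the '#' survives quoting

# Equivalent invocations (commented out: require sudo privileges):
# sudo -u '#1000' id -u
# sudo -u \#1000 id -u
```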

     -V, --version
                 Print the sudo version string as well as the version string of the security policy plugin and any I/O plugins.  If the invoking user is
                 already root the -V option will display the arguments passed to configure when sudo was built and plugins may display more verbose
                 information such as default options.

     -v, --validate
                 Update the user's cached credentials, authenticating the user if necessary.  For the sudoers plugin, this extends the sudo timeout for
                 another 15 minutes by default, but does not run a command.  Not all security policies support cached credentials.

     --          The -- option indicates that sudo should stop processing command line arguments.

     Options that take a value may only be specified once unless otherwise indicated in the description.  This is to help guard against problems caused
     by poorly written scripts that invoke sudo with user-controlled input.

     Environment variables to be set for the command may also be passed on the command line in the form of VAR=value, e.g.,
     LD_LIBRARY_PATH=/usr/local/pkg/lib.  Variables passed on the command line are subject to restrictions imposed by the security policy plugin.  The
     sudoers policy subjects variables passed on the command line to the same restrictions as normal environment variables with one important exception.
     If the setenv option is set in sudoers, the command to be run has the SETENV tag set or the command matched is ALL, the user may set variables that
     would otherwise be forbidden.  See sudoers(5) for more information.
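
     A sketch of the VAR=value form (the library path and program are illustrative; the sudo line is commented out because it requires sudo privileges and an appropriate SETENV grant):

```shell
# The VAR=value form is ordinary environment prefixing; with plain
# commands (no sudo) the shell itself honors it:
LD_LIBRARY_PATH=/usr/local/pkg/lib sh -c 'echo "$LD_LIBRARY_PATH"'

# With sudo the same syntax applies, but the security policy must
# allow it (setenv option or SETENV tag in sudoers):
# sudo LD_LIBRARY_PATH=/usr/local/pkg/lib /usr/local/pkg/bin/app
```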

COMMAND EXECUTION

     When sudo executes a command, the security policy specifies the execution environment for the command.  Typically, the real and effective user and
      group IDs are set to match those of the target user, as specified in the password database, and the group vector is initialized based on the
     group database (unless the -P option was specified).

     The following parameters may be specified by security policy:

      •  real and effective user-ID

      •  real and effective group-ID

      •  supplementary group-IDs

      •  the environment list

      •  current working directory

      •  file creation mode mask (umask)

      •  SELinux role and type

      •  scheduling priority (aka nice value)

Process model
There are two distinct ways sudo can run a command.

      If an I/O logging plugin is configured or if the security policy explicitly requests it, a new pseudo-terminal (“pty”) is allocated and fork(2) is
     used to create a second sudo process, referred to as the monitor.  The monitor creates a new terminal session with itself as the leader and the pty
     as its controlling terminal, calls fork(2), sets up the execution environment as described above, and then uses the execve(2) system call to run
     the command in the child process.  The monitor exists to relay job control signals between the user's existing terminal and the pty the command is
     being run in.  This makes it possible to suspend and resume the command.  Without the monitor, the command would be in what POSIX terms an
      “orphaned process group” and it would not receive any job control signals from the kernel.  When the command exits or is terminated by a signal,
     the monitor passes the command's exit status to the main sudo process and exits.  After receiving the command's exit status, the main sudo passes
     the command's exit status to the security policy's close function and exits.

     If no pty is used, sudo calls fork(2), sets up the execution environment as described above, and uses the execve(2) system call to run the command
     in the child process.  The main sudo process waits until the command has completed, then passes the command's exit status to the security policy's
     close function and exits.  As a special case, if the policy plugin does not define a close function, sudo will execute the command directly instead
     of calling fork(2) first.  The sudoers policy plugin will only define a close function when I/O logging is enabled, a pty is required, or the
     pam_session or pam_setcred options are enabled.  Note that pam_session and pam_setcred are enabled by default on systems using PAM.

      On systems that use PAM, the security policy's close function is responsible for closing the PAM session.  It may also log the command's exit
      status.

Signal handling

     When the command is run as a child of the sudo process, sudo will relay signals it receives to the command.  The SIGINT and SIGQUIT signals are
     only relayed when the command is being run in a new pty or when the signal was sent by a user process, not the kernel.  This prevents the command
     from receiving SIGINT twice each time the user enters control-C.  Some signals, such as SIGSTOP and SIGKILL, cannot be caught and thus will not be
     relayed to the command.  As a general rule, SIGTSTP should be used instead of SIGSTOP when you wish to suspend a command being run by sudo.

     As a special case, sudo will not relay signals that were sent by the command it is running.  This prevents the command from accidentally killing
      itself.  On some systems, the reboot(8) command sends SIGTERM to all non-system processes other than itself before rebooting the system.  This
      prevents sudo from relaying the SIGTERM signal it received back to reboot(8), which might then exit before the system was actually rebooted,
      leaving it in a half-dead state similar to single user mode.  Note, however, that this check only applies to the command run by sudo and not any
      other processes that the command may create.  As a result, running a script that calls reboot(8) or shutdown(8) via sudo may cause the system to end up in
     this undefined state unless the reboot(8) or shutdown(8) are run using the exec() family of functions instead of system() (which interposes a shell
     between the command and the calling process).

      If no I/O logging plugins are loaded and the policy plugin has not defined a close() function, set a command timeout, or required that the command
     be run in a new pty, sudo may execute the command directly instead of running it as a child process.

Plugins

     Plugins may be specified via Plugin directives in the sudo.conf(5) file.  They may be loaded as dynamic shared objects (on systems that support
     them), or compiled directly into the sudo binary.  If no sudo.conf(5) file is present, or if it doesn't contain any Plugin lines, sudo will use
     sudoers(5) for the policy, auditing and I/O logging plugins.  See the sudo.conf(5) manual for details of the /etc/sudo.conf file and the
     sudo_plugin(5) manual for more information about the sudo plugin architecture.

EXIT VALUE

      Upon successful execution of a command, the exit status from sudo will be the exit status of the program that was executed.  If the command
      terminated due to receipt of a signal, sudo will send itself the same signal that terminated the command.

      If the -l option was specified without a command, sudo will exit with a value of 0 if the user is allowed to run sudo and they authenticated
      successfully (as required by the security policy).  If a command is specified with the -l option, the exit value will only be 0 if the command is
      permitted by the security policy, otherwise it will be 1.

     If there is an authentication failure, a configuration/permission problem or if the given command cannot be executed, sudo exits with a value of 1.
     In the latter case, the error string is printed to the standard error.  If sudo cannot stat(2) one or more entries in the user's PATH, an error is
     printed to the standard error.  (If the directory does not exist or if it is not really a directory, the entry is ignored and no error is printed.)
      This should not happen under normal circumstances.  The most common reason for stat(2) to return “permission denied” is if you are running an
      automounter and one of the directories in your PATH is on a machine that is currently unreachable.

SECURITY NOTES

sudo tries to be safe when executing external commands.

     To prevent command spoofing, sudo checks "." and "" (both denoting current directory) last when searching for a command in the user's PATH (if one
     or both are in the PATH).  Note, however, that the actual PATH environment variable is not modified and is passed unchanged to the program that
     sudo executes.

     Users should never be granted sudo privileges to execute files that are writable by the user or that reside in a directory that is writable by the
     user.  If the user can modify or replace the command there is no way to limit what additional commands they can run.

     Please note that sudo will normally only log the command it explicitly runs.  If a user runs a command such as sudo su or sudo sh, subsequent
     commands run from that shell are not subject to sudo's security policy.  The same is true for commands that offer shell escapes (including most
     editors).  If I/O logging is enabled, subsequent commands will have their input and/or output logged, but there will not be traditional logs for
     those commands.  Because of this, care must be taken when giving users access to commands via sudo to verify that the command does not
     inadvertently give the user an effective root shell.  For more information, please see the Preventing shell escapes section in sudoers(5).

     To prevent the disclosure of potentially sensitive information, sudo disables core dumps by default while it is executing (they are re-enabled for
     the command that is run).  This historical practice dates from a time when most operating systems allowed set-user-ID processes to dump core by
     default.  To aid in debugging sudo crashes, you may wish to re-enable core dumps by setting "disable_coredump" to false in the sudo.conf(5) file
     as follows:

           Set disable_coredump false

     See the sudo.conf(5) manual for more information.

ENVIRONMENT

     sudo utilizes the following environment variables.  The security policy has control over the actual content of the command's environment.

     EDITOR           Default editor to use in -e (sudoedit) mode if neither SUDO_EDITOR nor VISUAL is set.

     MAIL             Set to the mail spool of the target user when the -i option is specified or when env_reset is enabled in sudoers (unless MAIL is
                      present in the env_keep list).

     HOME             Set to the home directory of the target user when the -i or -H options are specified, when the -s option is specified and set_home
                      is set in sudoers, when always_set_home is enabled in sudoers, or when env_reset is enabled in sudoers and HOME is not present in
                      the env_keep list.

     LOGNAME          Set to the login name of the target user when the -i option is specified, when the set_logname option is enabled in sudoers or
                      when the env_reset option is enabled in sudoers (unless LOGNAME is present in the env_keep list).

     PATH             May be overridden by the security policy.

     SHELL            Used to determine shell to run with -s option.

     SUDO_ASKPASS     Specifies the path to a helper program used to read the password if no terminal is available or if the -A option is specified.

     SUDO_COMMAND     Set to the command run by sudo, including command line arguments.  The command line arguments are truncated at 4096 characters to
                      prevent a potential execution error.

     SUDO_EDITOR      Default editor to use in -e (sudoedit) mode.

     SUDO_GID         Set to the group-ID of the user who invoked sudo.

     SUDO_PROMPT      Used as the default password prompt unless the -p option was specified.

     SUDO_PS1         If set, PS1 will be set to its value for the program being run.

     SUDO_UID         Set to the user-ID of the user who invoked sudo.

     SUDO_USER        Set to the login name of the user who invoked sudo.

     USER             Set to the same value as LOGNAME, described above.

     VISUAL           Default editor to use in -e (sudoedit) mode if SUDO_EDITOR is not set.
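
     As an illustrative sketch (assuming a policy that permits the command), the SUDO_* variables let the command identify who invoked sudo:

```shell
# SUDO_USER/SUDO_UID name the invoking user; USER/LOGNAME name the
# target user; SUDO_COMMAND records the command line sudo ran.
sudo sh -c 'echo "$SUDO_USER (uid $SUDO_UID) became $USER to run: $SUDO_COMMAND"'
```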

FILES

     /etc/sudo.conf   sudo front end configuration

EXAMPLES

Note: the following examples assume a properly configured security policy.

To get a file listing of an unreadable directory:

$ sudo ls /usr/local/protected

To list the home directory of user yaz on a machine where the file system holding ~yaz is not exported as root:

$ sudo -u yaz ls ~yaz

To edit the index.html file as user www:

$ sudoedit -u www ~www/htdocs/index.html

To view system logs only accessible to root and users in the adm group:

$ sudo -g adm more /var/log/syslog

To run an editor as jim with a different primary group:

$ sudoedit -u jim -g audio ~jim/sound.txt

To shut down a machine:

$ sudo shutdown -r +15 "quick reboot"

     To make a usage listing of the directories in the /home partition.  Note that this runs the commands in a sub-shell to make the cd and file
     redirection work.

           $ sudo sh -c "cd /home ; du -s * | sort -rn > USAGE"

DIAGNOSTICS

Error messages produced by sudo include:

     editing files in a writable directory is not permitted
           By default, sudoedit does not permit editing a file when any of the parent directories are writable by the invoking user.  This avoids a race
           condition that could allow the user to overwrite an arbitrary file.  See the sudoedit_checkdir option in sudoers(5) for more information.

     editing symbolic links is not permitted
           By default, sudoedit does not follow symbolic links when opening files.  See the sudoedit_follow option in sudoers(5) for more information.

     effective uid is not 0, is sudo installed setuid root?
           sudo was not run with root privileges.  The sudo binary must be owned by the root user and have the set-user-ID bit set.  Also, it must not
           be located on a file system mounted with the 'nosuid' option or on an NFS file system that maps uid 0 to an unprivileged uid.

     effective uid is not 0, is sudo on a file system with the 'nosuid' option set or an NFS file system without root privileges?
           sudo was not run with root privileges.  The sudo binary has the proper owner and permissions but it still did not run with root privileges.
           The most common reason for this is that the file system the sudo binary is located on is mounted with the 'nosuid' option or it is an NFS
           file system that maps uid 0 to an unprivileged uid.

     fatal error, unable to load plugins
           An error occurred while loading or initializing the plugins specified in sudo.conf(5).

     invalid environment variable name
           One or more environment variable names specified via the -E option contained an equal sign ('=').  The arguments to the -E option should be
           environment variable names without an associated value.

     no password was provided
           When sudo tried to read the password, it did not receive any characters.  This may happen if no terminal is available (or the -S option is
           specified) and the standard input has been redirected from /dev/null.

     a terminal is required to read the password
           sudo needs to read the password but there is no mechanism available for it to do so.  A terminal is not present to read the password from,
           sudo has not been configured to read from the standard input, the -S option was not used, and no askpass helper has been specified either via
           the sudo.conf(5) file or the SUDO_ASKPASS environment variable.

     no writable temporary directory found
           sudoedit was unable to find a usable temporary directory in which to store its intermediate files.

     sudo must be owned by uid 0 and have the setuid bit set
           sudo was not run with root privileges.  The sudo binary does not have the correct owner or permissions.  It must be owned by the root user
           and have the set-user-ID bit set.

     sudoedit is not supported on this platform
           It is only possible to run sudoedit on systems that support setting the effective user-ID.

     timed out reading password
           The user did not enter a password before the password timeout (5 minutes by default) expired.

     you do not exist in the passwd database
           Your user-ID does not appear in the system passwd database.

     you may not specify environment variables in edit mode
           It is only possible to specify environment variables when running a command.  When editing a file, the editor is run with the user's
           environment unmodified.

SEE ALSO

su(1), stat(2), login_cap(3), passwd(5), sudo.conf(5), sudo_plugin(5), sudoers(5), sudoers_timestamp(5), sudoreplay(8), visudo(8)

HISTORY

See the HISTORY file in the sudo distribution (https://www.sudo.ws/history.html) for a brief history of sudo.

AUTHORS

Many people have worked on sudo over the years; this version consists of code written primarily by:

Todd C. Miller

     See the CONTRIBUTORS file in the sudo distribution (https://www.sudo.ws/contributors.html) for an exhaustive list of people who have contributed to
     sudo.

CAVEATS

     There is no easy way to prevent a user from gaining a root shell if that user is allowed to run arbitrary commands via sudo.  Also, many programs
     (such as editors) allow the user to run commands via shell escapes, thus avoiding sudo's checks.  However, on most systems it is possible to
     prevent shell escapes with the sudoers(5) plugin's noexec functionality.

     It is not meaningful to run the cd command directly via sudo, e.g.,

           $ sudo cd /usr/local/protected

     since when the command exits the parent process (your shell) will still be the same.  Please see the EXAMPLES section for more information.

     Running shell scripts via sudo can expose the same kernel bugs that make set-user-ID shell scripts unsafe on some operating systems (if your OS has
     a /dev/fd/ directory, set-user-ID shell scripts are generally safe).

BUGS

If you feel you have found a bug in sudo, please submit a bug report at https://bugzilla.sudo.ws/

SUPPORT

     Limited free support is available via the sudo-users mailing list, see https://www.sudo.ws/mailman/listinfo/sudo-users to subscribe or search the
     archives.

DISCLAIMER

     sudo is provided "AS IS" and any express or implied warranties, including, but not limited to, the implied warranties of merchantability and
     fitness for a particular purpose are disclaimed.  See the LICENSE file distributed with sudo or https://www.sudo.ws/license.html for complete
     details.

Sudo 1.9.5p2                                                        September 1, 2020                                                       Sudo 1.9.5p2

manpage strings

STRINGS(1) GNU Development Tools STRINGS(1)

NAME

strings – print the sequences of printable characters in files

SYNOPSIS

       strings [-afovV] [-min-len]
               [-n min-len] [--bytes=min-len]
               [-t radix] [--radix=radix]
               [-e encoding] [--encoding=encoding]
               [-] [--all] [--print-file-name]
               [-T bfdname] [--target=bfdname]
               [-w] [--include-all-whitespace]
               [-s] [--output-separator sep_string]
               [--help] [--version] file...

DESCRIPTION

       For each file given, GNU strings prints the printable character sequences that are at least 4 characters long (or the number given with the
       options below) and are followed by an unprintable character.

       Depending upon how the strings program was configured it will default to either displaying all the printable sequences that it can find in each
       file, or only those sequences that are in loadable, initialized data sections.  If the file type is unrecognizable, or if strings is reading from
       stdin then it will always display all of the printable sequences that it can find.

       For backwards compatibility any file that occurs after a command-line option of just - will also be scanned in full, regardless of the presence
       of any -d option.

       strings is mainly useful for determining the contents of non-text files.
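
       For example, a minimal, illustrative demonstration on a synthetic binary file (the /tmp path is arbitrary):

```shell
# Embed two printable runs, each terminated by unprintable bytes.
printf 'hello\0\1\2world!!\0' > /tmp/demo.bin
# With the default minimum length of 4, both runs are reported.
strings /tmp/demo.bin
# hello
# world!!
```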

OPTIONS

       -a
       --all
       -   Scan the whole file, regardless of what sections it contains or whether those sections are loaded or initialized.  Normally this is the
           default behaviour, but strings can be configured so that the -d is the default instead.

           The - option is position dependent and forces strings to perform full scans of any file that is mentioned after the - on the command line,
           even if the -d option has been specified.

       -d
       --data
           Only print strings from initialized, loaded data sections in the file.  This may reduce the amount of garbage in the output, but it also
           exposes the strings program to any security flaws that may be present in the BFD library used to scan and load sections.  Strings can be
           configured so that this option is the default behaviour.  In such cases the -a option can be used to avoid using the BFD library and instead
           just print all of the strings found in the file.

       -f
       --print-file-name
           Print the name of the file before each string.

       --help
           Print a summary of the program usage on the standard output and exit.

       -min-len
       -n min-len
       --bytes=min-len
           Print sequences of characters that are at least min-len characters long, instead of the default 4.

       -o  Like -t o.  Some other versions of strings have -o act like -t d instead.  Since we can not be compatible with both ways, we simply chose
           one.

       -t radix
       --radix=radix
            Print the offset within the file before each string.  The single character argument specifies the radix of the offset: o for octal, x for
            hexadecimal, or d for decimal.

       -e encoding
       --encoding=encoding
           Select the character encoding of the strings that are to be found.  Possible values for encoding are: s = single-7-bit-byte characters
           (ASCII, ISO 8859, etc., default), S = single-8-bit-byte characters, b = 16-bit bigendian, l = 16-bit littleendian, B = 32-bit bigendian, L =
           32-bit littleendian.  Useful for finding wide character strings. (l and b apply to, for example, Unicode UTF-16/UCS-2 encodings).
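
            For instance, a UTF-16LE string is invisible to the default single-byte scan but is found with -e l (a sketch; the file name is arbitrary):

```shell
# "hiya" encoded as 16-bit little-endian characters (ASCII byte + NUL).
printf 'h\0i\0y\0a\0\0\0' > /tmp/wide.bin
strings /tmp/wide.bin      # prints nothing: no 4+ character single-byte run
strings -e l /tmp/wide.bin
# hiya
```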

       -T bfdname
       --target=bfdname
           Specify an object code format other than your system's default format.

       -v
       -V
       --version
           Print the program version number on the standard output and exit.

       -w
       --include-all-whitespace
            By default tab and space characters are included in the strings that are displayed, but other whitespace characters, such as newlines and
            carriage returns, are not.  The -w option changes this so that all whitespace characters are considered to be part of a string.

       -s
       --output-separator
           By default, output strings are delimited by a new-line. This option allows you to supply any string to be used as the output record
           separator.  Useful with --include-all-whitespace where strings may contain new-lines internally.

       @file
           Read command-line options from file.  The options read are inserted in place of the original @file option.  If file does not exist, or cannot
           be read, then the option will be treated literally, and not removed.

           Options in file are separated by whitespace.  A whitespace character may be included in an option by surrounding the entire option in either
           single or double quotes.  Any character (including a backslash) may be included by prefixing the character to be included with a backslash.
           The file may itself contain additional @file options; any such options will be processed recursively.
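
            As a sketch (file names are arbitrary):

```shell
# The response file's contents are read as if typed on the command line.
printf 'hello\0\1\2world!!\0' > /tmp/demo.bin
printf -- '-n 6\n' > /tmp/opts
strings @/tmp/opts /tmp/demo.bin   # minimum length 6 drops the shorter run
# world!!
```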

SEE ALSO

ar(1), nm(1), objdump(1), ranlib(1), readelf(1) and the Info entries for binutils.

COPYRIGHT

Copyright (c) 1991-2020 Free Software Foundation, Inc.

       Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any
       later version published by the Free Software Foundation; with no Invariant Sections, with no Front-Cover Texts, and with no Back-Cover Texts.  A
       copy of the license is included in the section entitled "GNU Free Documentation License".

binutils-2.35.2                                                        2021-02-20                                                             STRINGS(1)

manpage ssh

SSH(1) BSD General Commands Manual SSH(1)

NAME

ssh – OpenSSH remote login client

SYNOPSIS

     ssh [-46AaCfGgKkMNnqsTtVvXxYy] [-B bind_interface] [-b bind_address] [-c cipher_spec] [-D [bind_address:]port] [-E log_file] [-e escape_char]
         [-F configfile] [-I pkcs11] [-i identity_file] [-J destination] [-L address] [-l login_name] [-m mac_spec] [-O ctl_cmd] [-o option] [-p port]
         [-Q query_option] [-R address] [-S ctl_path] [-W host:port] [-w local_tun[:remote_tun]] destination [command]

DESCRIPTION

     ssh (SSH client) is a program for logging into a remote machine and for executing commands on a remote machine.  It is intended to provide secure
     encrypted communications between two untrusted hosts over an insecure network.  X11 connections, arbitrary TCP ports and UNIX-domain sockets can
     also be forwarded over the secure channel.

     ssh connects and logs into the specified destination, which may be specified as either [user@]hostname or a URI of the form
     ssh://[user@]hostname[:port].  The user must prove his/her identity to the remote machine using one of several methods (see below).

     If a command is specified, it is executed on the remote host instead of a login shell.
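
     For example, to run a single command on a remote host and exit (the user and host names here are illustrative):

```shell
# Run uname -a as "alice" on the remote host; its output arrives on the
# local standard output, then the connection closes.
ssh alice@server.example.com uname -a

# The same destination in URI form, with an explicit port:
ssh ssh://alice@server.example.com:2222 uname -a
```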

     The options are as follows:

     -4      Forces ssh to use IPv4 addresses only.

     -6      Forces ssh to use IPv6 addresses only.

     -A      Enables forwarding of connections from an authentication agent such as ssh-agent(1).  This can also be specified on a per-host basis in a
             configuration file.

             Agent forwarding should be enabled with caution.  Users with the ability to bypass file permissions on the remote host (for the agent's
             UNIX-domain socket) can access the local agent through the forwarded connection.  An attacker cannot obtain key material from the agent,
              however they can perform operations on the keys that enable them to authenticate using the identities loaded into the agent.  A safer
              alternative may be to use a jump host (see -J).

     -a      Disables forwarding of the authentication agent connection.

     -B bind_interface
             Bind to the address of bind_interface before attempting to connect to the destination host.  This is only useful on systems with more than
             one address.

     -b bind_address
             Use bind_address on the local machine as the source address of the connection.  Only useful on systems with more than one address.

     -C      Requests compression of all data (including stdin, stdout, stderr, and data for forwarded X11, TCP and UNIX-domain connections).  The
             compression algorithm is the same used by gzip(1).  Compression is desirable on modem lines and other slow connections, but will only slow
             down things on fast networks.  The default value can be set on a host-by-host basis in the configuration files; see the Compression option.

     -c cipher_spec
              Selects the cipher specification for encrypting the session.  cipher_spec is a comma-separated list of ciphers listed in order of
              preference.  See the Ciphers keyword in ssh_config(5) for more information.

     -D [bind_address:]port
              Specifies a local "dynamic" application-level port forwarding.  This works by allocating a socket to listen to port on the local side,
              optionally bound to the specified bind_address.  Whenever a connection is made to this port, the connection is forwarded over the secure
              channel, and the application protocol is then used to determine where to connect to from the remote machine.  Currently the SOCKS4 and
              SOCKS5 protocols are supported, and ssh will act as a SOCKS server.  Only root can forward privileged ports.  Dynamic port forwardings
              can also be specified in the configuration file.

              IPv6 addresses can be specified by enclosing the address in square brackets.  Only the superuser can forward privileged ports.  By
              default, the local port is bound in accordance with the GatewayPorts setting.  However, an explicit bind_address may be used to bind the
              connection to a specific address.  The bind_address of "localhost" indicates that the listening port be bound for local use only, while
              an empty address or '*' indicates that the port should be available from all interfaces.
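
              A typical use is a quick SOCKS proxy through a gateway (a sketch; the host names and port are illustrative):

```shell
# Listen on local port 1080 and act as a SOCKS server, relaying each
# connection through the gateway; -N skips running a remote command.
ssh -N -D 1080 alice@gateway.example.com &

# Any SOCKS-aware client can then use the proxy, e.g.:
curl --socks5-hostname localhost:1080 http://intranet.example.com/
```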

     -E log_file
             Append debug logs to log_file instead of standard error.

     -e escape_char
              Sets the escape character for sessions with a pty (default: '~').  The escape character is only recognized at the beginning of a line.
              The escape character followed by a dot ('.') closes the connection; followed by control-Z suspends the connection; and followed by itself
              sends the escape character once.  Setting the character to "none" disables any escapes and makes the session fully transparent.

     -F configfile
              Specifies an alternative per-user configuration file.  If a configuration file is given on the command line, the system-wide
              configuration file (/etc/ssh/ssh_config) will be ignored.  The default for the per-user configuration file is ~/.ssh/config.  If set to
              "none", no configuration files will be read.

     -f      Requests ssh to go to background just before command execution.  This is useful if ssh is going to ask for passwords or passphrases, but
             the user wants it in the background.  This implies -n.  The recommended way to start X11 programs at a remote site is with something like
             ssh -f host xterm.

              If the ExitOnForwardFailure configuration option is set to "yes", then a client started with -f will wait for all remote port forwards to
             be successfully established before placing itself in the background.

     -G      Causes ssh to print its configuration after evaluating Host and Match blocks and exit.

     -g      Allows remote hosts to connect to local forwarded ports.  If used on a multiplexed connection, then this option must be specified on the
             master process.

     -I pkcs11
             Specify the PKCS#11 shared library ssh should use to communicate with a PKCS#11 token providing keys for user authentication.

     -i identity_file
             Selects a file from which the identity (private key) for public key authentication is read.  The default is ~/.ssh/id_dsa, ~/.ssh/id_ecdsa,
             ~/.ssh/id_ecdsa_sk, ~/.ssh/id_ed25519, ~/.ssh/id_ed25519_sk and ~/.ssh/id_rsa.  Identity files may also be specified on a per-host basis in
              the configuration file.  It is possible to have multiple -i options (and multiple identities specified in configuration files).  If no
              certificates have been explicitly specified by the CertificateFile directive, ssh will also try to load certificate information from the
              filename obtained by appending -cert.pub to identity filenames.

     -J destination
              Connect to the target host by first making an ssh connection to the jump host described by destination and then establishing a TCP
              forwarding to the ultimate destination from there.  Multiple jump hops may be specified separated by comma characters.  This is a
              shortcut to specify a ProxyJump configuration directive.  Note that configuration directives supplied on the command-line generally apply
              to the destination host and not any specified jump hosts.  Use ~/.ssh/config to specify configuration for jump hosts.
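
              For example, to reach an internal host only accessible via a bastion (the host names are illustrative):

```shell
# Hop through "bastion" to reach "internal"; each hop authenticates
# separately using local keys or an agent.
ssh -J alice@bastion.example.com alice@internal.example.net

# Multiple hops, separated by commas:
ssh -J bastion1.example.com,bastion2.example.com internal.example.net
```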

     -K      Enables GSSAPI-based authentication and forwarding (delegation) of GSSAPI credentials to the server.

     -k      Disables forwarding (delegation) of GSSAPI credentials to the server.

     -L [bind_address:]port:host:hostport
     -L [bind_address:]port:remote_socket
     -L local_socket:host:hostport
     -L local_socket:remote_socket
             Specifies that connections to the given TCP port or Unix socket on the local (client) host are to be forwarded to the given host and port,
             or Unix socket, on the remote side.  This works by allocating a socket to listen to either a TCP port on the local side, optionally bound
             to the specified bind_address, or to a Unix socket.  Whenever a connection is made to the local port or socket, the connection is forwarded
             over the secure channel, and a connection is made to either host port hostport, or the Unix socket remote_socket, from the remote machine.

             Port forwardings can also be specified in the configuration file.  Only the superuser can forward privileged ports.  IPv6 addresses can be
             specified by enclosing the address in square brackets.

             By default, the local port is bound in accordance with the GatewayPorts setting.  However, an explicit bind_address may be used to bind the
              connection to a specific address.  The bind_address of "localhost" indicates that the listening port be bound for local use only, while
              an empty address or '*' indicates that the port should be available from all interfaces.
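
              As a sketch (the host names and ports are illustrative):

```shell
# Connections to localhost:8080 are carried over the secure channel to
# "gateway", which then connects to port 80 on "intranet".
ssh -N -L 8080:intranet.example.com:80 alice@gateway.example.com
```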

     -l login_name
             Specifies the user to log in as on the remote machine.  This also may be specified on a per-host basis in the configuration file.

     -M      Places the ssh client into "master" mode for connection sharing.  Multiple -M options place ssh into "master" mode but with confirmation
             required using ssh-askpass(1) before each operation that changes the multiplexing state (e.g. opening a new session).  Refer to the
             description of ControlMaster in ssh_config(5) for details.

     -m mac_spec
             A comma-separated list of MAC (message authentication code) algorithms, specified in order of preference.  See the MACs keyword for more
             information.

     -N      Do not execute a remote command.  This is useful for just forwarding ports.

     -n      Redirects stdin from /dev/null (actually, prevents reading from stdin).  This must be used when ssh is run in the background.  A common
             trick is to use this to run X11 programs on a remote machine.  For example, ssh -n shadows.cs.hut.fi emacs & will start an emacs on
             shadows.cs.hut.fi, and the X11 connection will be automatically forwarded over an encrypted channel.  The ssh program will be put in the
             background.  (This does not work if ssh needs to ask for a password or passphrase; see also the -f option.)

     -O ctl_cmd
              Control an active connection multiplexing master process.  When the -O option is specified, the ctl_cmd argument is interpreted and
              passed to the master process.  Valid commands are: "check" (check that the master process is running), "forward" (request forwardings
              without command execution), "cancel" (cancel forwardings), "exit" (request the master to exit), and "stop" (request the master to stop
              accepting further multiplexing requests).
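
              A brief multiplexed session as a sketch (the control socket path and host are illustrative):

```shell
# Start a background master connection with an explicit control socket.
ssh -M -S /tmp/ctl-%r@%h:%p -fN alice@server.example.com

# Later invocations reuse the master instead of reconnecting.
ssh -S /tmp/ctl-%r@%h:%p -O check alice@server.example.com
ssh -S /tmp/ctl-%r@%h:%p -O exit alice@server.example.com
```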

     -o option
              Can be used to give options in the format used in the configuration file.  This is useful for specifying options for which there is no
              separate command-line flag.  For full details of the options listed below, and their possible values, see ssh_config(5).

                   AddKeysToAgent
                   AddressFamily
                   BatchMode
                   BindAddress
                   CanonicalDomains
                   CanonicalizeFallbackLocal
                   CanonicalizeHostname
                   CanonicalizeMaxDots
                   CanonicalizePermittedCNAMEs
                   CASignatureAlgorithms
                   CertificateFile
                   ChallengeResponseAuthentication
                   CheckHostIP
                   Ciphers
                   ClearAllForwardings
                   Compression
                   ConnectionAttempts
                   ConnectTimeout
                   ControlMaster
                   ControlPath
                   ControlPersist
                   DynamicForward
                   EscapeChar
                   ExitOnForwardFailure
                   FingerprintHash
                   ForwardAgent
                   ForwardX11
                   ForwardX11Timeout
                   ForwardX11Trusted
                   GatewayPorts
                   GlobalKnownHostsFile
                   GSSAPIAuthentication
                   GSSAPIKeyExchange
                   GSSAPIClientIdentity
                   GSSAPIDelegateCredentials
                   GSSAPIKexAlgorithms
                   GSSAPIRenewalForcesRekey
                   GSSAPIServerIdentity
                   GSSAPITrustDns
                   HashKnownHosts
                   Host
                   HostbasedAuthentication
                   HostbasedKeyTypes
                   HostKeyAlgorithms
                   HostKeyAlias
                   Hostname
                   IdentitiesOnly
                   IdentityAgent
                   IdentityFile
                   IPQoS
                   KbdInteractiveAuthentication
                   KbdInteractiveDevices
                   KexAlgorithms
                   LocalCommand
                   LocalForward
                   LogLevel
                   MACs
                   Match
                   NoHostAuthenticationForLocalhost
                   NumberOfPasswordPrompts
                   PasswordAuthentication
                   PermitLocalCommand
                   PKCS11Provider
                   Port
                   PreferredAuthentications
                   ProxyCommand
                   ProxyJump
                   ProxyUseFdpass
                   PubkeyAcceptedKeyTypes
                   PubkeyAuthentication
                   RekeyLimit
                   RemoteCommand
                   RemoteForward
                   RequestTTY
                   SendEnv
                   ServerAliveInterval
                   ServerAliveCountMax
                   SetEnv
                   StreamLocalBindMask
                   StreamLocalBindUnlink
                   StrictHostKeyChecking
                   TCPKeepAlive
                   Tunnel
                   TunnelDevice
                   UpdateHostKeys
                   User
                   UserKnownHostsFile
                   VerifyHostKeyDNS
                   VisualHostKey
                   XAuthLocation
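
              For example, keywords that have no dedicated flag can be supplied directly (the values are illustrative):

```shell
# Equivalent to setting the same keywords in ~/.ssh/config for this run.
ssh -o ConnectTimeout=5 \
    -o ServerAliveInterval=30 \
    -o StrictHostKeyChecking=accept-new \
    alice@server.example.com
```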

     -p port
             Port to connect to on the remote host.  This can be specified on a per-host basis in the configuration file.

     -Q query_option
              Queries ssh for the algorithms supported by one of the following features: cipher (supported symmetric ciphers), cipher-auth (supported
              symmetric ciphers that support authenticated encryption), help (supported query terms for use with the -Q flag), mac (supported message
              integrity codes), kex (key exchange algorithms), kex-gss (GSSAPI key exchange algorithms), key (key types), key-cert (certificate key
              types), key-plain (non-certificate key types), key-sig (all key types and signature algorithms), protocol-version (supported SSH protocol
              versions), and sig (supported signature algorithms).  Alternatively, any keyword from ssh_config(5) or sshd_config(5) that takes an
              algorithm list may be used as an alias for the corresponding query_option.
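
              For instance, the locally supported ciphers and key types can be listed directly; the exact output depends on the installed OpenSSH
              version and build options:

```shell
# List the symmetric ciphers this ssh build supports.
ssh -Q cipher
# List the supported key types.
ssh -Q key
```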

     -q      Quiet mode.  Causes most warning and diagnostic messages to be suppressed.

     -R [bind_address:]port:host:hostport
     -R [bind_address:]port:local_socket
     -R remote_socket:host:hostport
     -R remote_socket:local_socket
     -R [bind_address:]port
             Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the local side.

             This works by allocating a socket to listen to either a TCP port or to a Unix socket on the remote side.  Whenever a connection is made to
             this port or Unix socket, the connection is forwarded over the secure channel, and a connection is made from the local machine to either an
             explicit destination specified by host port hostport, or local_socket, or, if no explicit destination was specified, ssh will act as a
             SOCKS 4/5 proxy and forward connections to the destinations requested by the remote SOCKS client.

             Port forwardings can also be specified in the configuration file.  Privileged ports can be forwarded only when logging in as root on the
             remote machine.  IPv6 addresses can be specified by enclosing the address in square brackets.

              By default, TCP listening sockets on the server will be bound to the loopback interface only.  This may be overridden by specifying a
              bind_address.  An empty bind_address, or the address '*', indicates that the remote socket should listen on all interfaces.  Specifying a
              remote bind_address will only succeed if the server's GatewayPorts option is enabled (see sshd_config(5)).

              If the port argument is '0', the listen port will be dynamically allocated on the server and reported to the client at run time.  When
              used together with -O forward the allocated port will be printed to the standard output.
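
              The first (TCP) form can also be expressed as a configuration entry; the host alias and port numbers below are placeholders:

```
Host example-server
    # Equivalent to: ssh -R 8080:localhost:80 example-server
    RemoteForward 8080 localhost:80
```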

     -S ctl_path
              Specifies the location of a control socket for connection sharing, or the string "none" to disable connection sharing.  Refer to the
              description of ControlPath and ControlMaster in ssh_config(5) for details.

     -s      May be used to request invocation of a subsystem on the remote system.  Subsystems facilitate the use of SSH as a secure transport for
             other applications (e.g. sftp(1)).  The subsystem is specified as the remote command.

     -T      Disable pseudo-terminal allocation.

     -t      Force pseudo-terminal allocation.  This can be used to execute arbitrary screen-based programs on a remote machine, which can be very
             useful, e.g. when implementing menu services.  Multiple -t options force tty allocation, even if ssh has no local tty.

     -V      Display the version number and exit.

     -v      Verbose mode.  Causes ssh to print debugging messages about its progress.  This is helpful in debugging connection, authentication, and
             configuration problems.  Multiple -v options increase the verbosity.  The maximum is 3.

     -W host:port
             Requests that standard input and output on the client be forwarded to host on port over the secure channel.  Implies -N, -T,
             ExitOnForwardFailure and ClearAllForwardings, though these can be overridden in the configuration file or using -o command line options.

     -w local_tun[:remote_tun]
             Requests tunnel device forwarding with the specified tun(4) devices between the client (local_tun) and the server (remote_tun).

              The devices may be specified by numerical ID or the keyword "any", which uses the next available tunnel device.  If remote_tun is not
              specified, it defaults to "any".  See also the Tunnel and TunnelDevice directives in ssh_config(5).

              If the Tunnel directive is unset, it will be set to the default tunnel mode, which is "point-to-point".  If a different Tunnel forwarding
              mode is desired, then it should be specified before -w.

     -X      Enables X11 forwarding.  This can also be specified on a per-host basis in a configuration file.

              X11 forwarding should be enabled with caution.  Users with the ability to bypass file permissions on the remote host (for the user's X
              authorization database) can access the local X11 display through the forwarded connection.  An attacker may then be able to perform
              activities such as keystroke monitoring.

             For this reason, X11 forwarding is subjected to X11 SECURITY extension restrictions by default.  Please refer to the ssh -Y option and the
             ForwardX11Trusted directive in ssh_config(5) for more information.

              (Debian-specific: X11 forwarding is not subjected to X11 SECURITY extension restrictions by default, because too many programs currently
              crash in this mode.  Set the ForwardX11Trusted option to "no" to restore the upstream behaviour.  This may change in future depending on
              client-side improvements.)

     -x      Disables X11 forwarding.

     -Y      Enables trusted X11 forwarding.  Trusted X11 forwardings are not subjected to the X11 SECURITY extension controls.

              (Debian-specific: In the default configuration, this option is equivalent to -X, since ForwardX11Trusted defaults to "yes" as described
              above.  Set the ForwardX11Trusted option to "no" to restore the upstream behaviour.  This may change in future depending on client-side
              improvements.)

     -y      Send log information using the syslog(3) system module.  By default this information is sent to stderr.

     ssh may additionally obtain configuration data from a per-user configuration file and a system-wide configuration file.  The file format and
     configuration options are described in ssh_config(5).

AUTHENTICATION

The OpenSSH SSH client supports SSH protocol 2.

     The methods available for authentication are: GSSAPI-based authentication, host-based authentication, public key authentication, challenge-response
     authentication, and password authentication.  Authentication methods are tried in the order specified above, though PreferredAuthentications can be
     used to change the default order.
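
     As a sketch, the default order can be overridden per host in ssh_config(5); the host alias below is hypothetical:

```
Host example-server
    # Try public key first, then fall back to password; skip the other methods.
    PreferredAuthentications publickey,password
```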

     Host-based authentication works as follows: If the machine the user logs in from is listed in /etc/hosts.equiv or /etc/ssh/shosts.equiv on the
     remote machine, the user is non-root and the user names are the same on both sides, or if the files ~/.rhosts or ~/.shosts exist in the user's home
     directory on the remote machine and contain a line containing the name of the client machine and the name of the user on that machine, the user is
     considered for login.  Additionally, the server must be able to verify the client's host key (see the description of /etc/ssh/ssh_known_hosts and
     ~/.ssh/known_hosts, below) for login to be permitted.  This authentication method closes security holes due to IP spoofing, DNS spoofing, and
     routing spoofing.  [Note to the administrator: /etc/hosts.equiv, ~/.rhosts, and the rlogin/rsh protocol in general, are inherently insecure and
     should be disabled if security is desired.]

     Public key authentication works as follows: The scheme is based on public-key cryptography, using cryptosystems where encryption and decryption are
     done using separate keys, and it is unfeasible to derive the decryption key from the encryption key.  The idea is that each user creates a
     public/private key pair for authentication purposes.  The server knows the public key, and only the user knows the private key.  ssh implements the
     public key authentication protocol automatically, using one of the DSA, ECDSA, Ed25519 or RSA algorithms.  The HISTORY section of ssl(8) (on
     non-OpenBSD systems, see http://www.openbsd.org/cgi-bin/man.cgi?query=ssl&sektion=8#HISTORY) contains a brief discussion of the DSA and RSA
     algorithms.

     The file ~/.ssh/authorized_keys lists the public keys that are permitted for logging in.  When the user logs in, the ssh program tells the server
     which key pair it would like to use for authentication.  The client proves that it has access to the private key and the server checks that the
     corresponding public key is authorized to accept the account.

     The server may inform the client of errors that prevented public key authentication from succeeding after authentication completes using a
     different method.  These may be viewed by increasing the LogLevel to DEBUG or higher (e.g. by using the -v flag).

     The user creates his/her key pair by running ssh-keygen(1).  This stores the private key in ~/.ssh/id_dsa (DSA), ~/.ssh/id_ecdsa (ECDSA),
     ~/.ssh/id_ecdsa_sk (authenticator-hosted ECDSA), ~/.ssh/id_ed25519 (Ed25519), ~/.ssh/id_ed25519_sk (authenticator-hosted Ed25519), or ~/.ssh/id_rsa
     (RSA) and stores the public key in ~/.ssh/id_dsa.pub (DSA), ~/.ssh/id_ecdsa.pub (ECDSA), ~/.ssh/id_ecdsa_sk.pub (authenticator-hosted ECDSA),
     ~/.ssh/id_ed25519.pub (Ed25519), ~/.ssh/id_ed25519_sk.pub (authenticator-hosted Ed25519), or ~/.ssh/id_rsa.pub (RSA) in the user's home directory.
     The user should then copy the public key to ~/.ssh/authorized_keys in his/her home directory on the remote machine.  The authorized_keys file
     corresponds to the conventional ~/.rhosts file, and has one key per line, though the lines can be very long.  After this, the user can log in
     without giving the password.
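
     The workflow above can be sketched as follows.  The key is generated into a temporary directory here rather than ~/.ssh, and the public key is
     appended to a local stand-in file, since no remote machine is assumed; on a real system ssh-copy-id(1) automates the copy step.

```shell
# Generate an Ed25519 key pair with an empty passphrase (illustration only;
# a real key should normally be protected by a passphrase).
dir=$(mktemp -d)
ssh-keygen -q -t ed25519 -N '' -f "$dir/id_ed25519"

# On a real system the public key would be appended to
# ~/.ssh/authorized_keys on the remote machine; a stand-in file is used
# here to show the one-key-per-line format.
cat "$dir/id_ed25519.pub" >> "$dir/authorized_keys"
grep ssh-ed25519 "$dir/authorized_keys"
```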

     A variation on public key authentication is available in the form of certificate authentication: instead of a set of public/private keys, signed
     certificates are used.  This has the advantage that a single trusted certification authority can be used in place of many public/private keys.  See
     the CERTIFICATES section of ssh-keygen(1) for more information.

     The most convenient way to use public key or certificate authentication may be with an authentication agent.  See ssh-agent(1) and (optionally) the
     AddKeysToAgent directive in ssh_config(5) for more information.

     Challenge-response authentication works as follows: The server sends an arbitrary "challenge" text, and prompts for a response.  Examples of
     challenge-response authentication include BSD Authentication (see login.conf(5)) and PAM (some non-OpenBSD systems).

     Finally, if other authentication methods fail, ssh prompts the user for a password.  The password is sent to the remote host for checking; however,
     since all communications are encrypted, the password cannot be seen by someone listening on the network.

     ssh automatically maintains and checks a database containing identification for all hosts it has ever been used with.  Host keys are stored in
     ~/.ssh/known_hosts in the user's home directory.  Additionally, the file /etc/ssh/ssh_known_hosts is automatically checked for known hosts.  Any
     new hosts are automatically added to the user's file.  If a host's identification ever changes, ssh warns about this and disables password
     authentication to prevent server spoofing or man-in-the-middle attacks, which could otherwise be used to circumvent the encryption.  The
     StrictHostKeyChecking option can be used to control logins to machines whose host key is not known or has changed.
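
     A common middle ground, sketched here as an ssh_config(5) fragment, is the accept-new setting, which stores keys of previously unseen hosts but
     still refuses hosts whose recorded key has changed:

```
Host *
    # Accept keys from hosts not yet in known_hosts, but abort the
    # connection if a recorded host key has changed.
    StrictHostKeyChecking accept-new
```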

     When the user's identity has been accepted by the server, the server either executes the given command in a non-interactive session or, if no
     command has been specified, logs into the machine and gives the user a normal shell as an interactive session.  All communication with the remote
     command or shell will be automatically encrypted.

     If an interactive session is requested, ssh by default will only request a pseudo-terminal (pty) for it when the client has one.  The flags -T and
     -t can be used to override this behaviour.

     If a pseudo-terminal has been allocated the user may use the escape characters noted below.

     If no pseudo-terminal has been allocated, the session is transparent and can be used to reliably transfer binary data.  On most systems, setting
     the escape character to "none" will also make the session transparent even if a tty is used.

     The session terminates when the command or shell on the remote machine exits and all X11 and TCP connections have been closed.

ESCAPE CHARACTERS

When a pseudo-terminal has been requested, ssh supports a number of functions through the use of an escape character.

     A single tilde character can be sent as ~~ or by following the tilde by a character other than those described below.  The escape character must
     always follow a newline to be interpreted as special.  The escape character can be changed in configuration files using the EscapeChar
     configuration directive or on the command line by the -e option.

     The supported escapes (assuming the default '~') are:

     ~.      Disconnect.

     ~^Z     Background ssh.

     ~#      List forwarded connections.

     ~&      Background ssh at logout when waiting for forwarded connection / X11 sessions to terminate.

     ~?      Display a list of escape characters.

     ~B      Send a BREAK to the remote system (only useful if the peer supports it).

     ~C      Open command line.  Currently this allows the addition of port forwardings using the -L, -R and -D options (see above).  It also allows the
             cancellation of existing port-forwardings with -KL[bind_address:]port for local, -KR[bind_address:]port for remote and
             -KD[bind_address:]port for dynamic port-forwardings.  !command allows the user to execute a local command if the PermitLocalCommand option
             is enabled in ssh_config(5).  Basic help is available, using the -h option.

     ~R      Request rekeying of the connection (only useful if the peer supports it).

     ~V      Decrease the verbosity (LogLevel) when errors are being written to stderr.

     ~v      Increase the verbosity (LogLevel) when errors are being written to stderr.

TCP FORWARDING

     Forwarding of arbitrary TCP connections over a secure channel can be specified either on the command line or in a configuration file.  One possible
     application of TCP forwarding is a secure connection to a mail server; another is going through firewalls.

     In the example below, we look at encrypting communication for an IRC client, even though the IRC server it connects to does not directly support
     encrypted communication.  This works as follows: the user connects to the remote host using ssh, specifying the ports to be used to forward the
     connection.  After that it is possible to start the program locally, and ssh will encrypt and forward the connection to the remote server.

     The following example tunnels an IRC session from the client to an IRC server at "server.example.com", joining channel "#users", nickname "pinky",
     using the standard IRC port, 6667:

         $ ssh -f -L 6667:localhost:6667 server.example.com sleep 10
         $ irc -c '#users' pinky IRC/127.0.0.1

     The -f option backgrounds ssh and the remote command "sleep 10" is specified to allow an amount of time (10 seconds, in the example) to start the
     program which is going to use the tunnel.  If no connections are made within the time specified, ssh will exit.

X11 FORWARDING

     If the ForwardX11 variable is set to "yes" (or see the description of the -X, -x, and -Y options above) and the user is using X11 (the DISPLAY
     environment variable is set), the connection to the X11 display is automatically forwarded to the remote side in such a way that any X11 programs
     started from the shell (or command) will go through the encrypted channel, and the connection to the real X server will be made from the local
     machine.  The user should not manually set DISPLAY.  Forwarding of X11 connections can be configured on the command line or in configuration files.

     The DISPLAY value set by ssh will point to the server machine, but with a display number greater than zero.  This is normal, and happens because
     ssh creates a "proxy" X server on the server machine for forwarding the connections over the encrypted channel.

     ssh will also automatically set up Xauthority data on the server machine.  For this purpose, it will generate a random authorization cookie, store
     it in Xauthority on the server, and verify that any forwarded connections carry this cookie and replace it by the real cookie when the connection
     is opened.  The real authentication cookie is never sent to the server machine (and no cookies are sent in the plain).

     If the ForwardAgent variable is set to "yes" (or see the description of the -A and -a options above) and the user is using an authentication agent,
     the connection to the agent is automatically forwarded to the remote side.

VERIFYING HOST KEYS

     When connecting to a server for the first time, a fingerprint of the server's public key is presented to the user (unless the option
     StrictHostKeyChecking has been disabled).  Fingerprints can be determined using ssh-keygen(1):

           $ ssh-keygen -l -f /etc/ssh/ssh_host_rsa_key

     If the fingerprint is already known, it can be matched and the key can be accepted or rejected.  If only legacy (MD5) fingerprints for the server
     are available, the ssh-keygen(1) -E option may be used to downgrade the fingerprint algorithm to match.

     Because of the difficulty of comparing host keys just by looking at fingerprint strings, there is also support to compare host keys visually, using
     random art.  By setting the VisualHostKey option to "yes", a small ASCII graphic gets displayed on every login to a server, no matter if the
     session itself is interactive or not.  By learning the pattern a known server produces, a user can easily find out that the host key has changed
     when a completely different pattern is displayed.  Because these patterns are not unambiguous however, a pattern that looks similar to the pattern
     remembered only gives a good probability that the host key is the same, not guaranteed proof.

     To get a listing of the fingerprints along with their random art for all known hosts, the following command line can be used:

           $ ssh-keygen -lv -f ~/.ssh/known_hosts

     If the fingerprint is unknown, an alternative method of verification is available: SSH fingerprints verified by DNS.  An additional resource record
     (RR), SSHFP, is added to a zonefile and the connecting client is able to match the fingerprint with that of the key presented.

     In this example, we are connecting a client to a server, "host.example.com".  The SSHFP resource records should first be added to the zonefile for
     host.example.com:

           $ ssh-keygen -r host.example.com.

     The output lines will have to be added to the zonefile.  To check that the zone is answering fingerprint queries:

           $ dig -t SSHFP host.example.com

     Finally the client connects:

           $ ssh -o "VerifyHostKeyDNS ask" host.example.com
           [...]
           Matching host key fingerprint found in DNS.
           Are you sure you want to continue connecting (yes/no)?

     See the VerifyHostKeyDNS option in ssh_config(5) for more information.

SSH-BASED VIRTUAL PRIVATE NETWORKS

     ssh contains support for Virtual Private Network (VPN) tunnelling using the tun(4) network pseudo-device, allowing two networks to be joined
     securely.  The sshd_config(5) configuration option PermitTunnel controls whether the server supports this, and at what level (layer 2 or 3
     traffic).

     The following example would connect client network 10.0.50.0/24 with remote network 10.0.99.0/24 using a point-to-point connection from 10.1.1.1 to
     10.1.1.2, provided that the SSH server running on the gateway to the remote network, at 192.168.1.15, allows it.

     On the client:

           # ssh -f -w 0:1 192.168.1.15 true
           # ifconfig tun0 10.1.1.1 10.1.1.2 netmask 255.255.255.252
           # route add 10.0.99.0/24 10.1.1.2

     On the server:

           # ifconfig tun1 10.1.1.2 10.1.1.1 netmask 255.255.255.252
           # route add 10.0.50.0/24 10.1.1.1

     Client access may be more finely tuned via the /root/.ssh/authorized_keys file (see below) and the PermitRootLogin server option.  The following
     entry would permit connections on tun(4) device 1 from user "jane" and on tun device 2 from user "john", if PermitRootLogin is set to
     "forced-commands-only":

       tunnel="1",command="sh /etc/netstart tun1" ssh-rsa ... jane
       tunnel="2",command="sh /etc/netstart tun2" ssh-rsa ... john

     Since an SSH-based setup entails a fair amount of overhead, it may be more suited to temporary setups, such as for wireless VPNs.  More permanent
     VPNs are better provided by tools such as ipsecctl(8) and isakmpd(8).

ENVIRONMENT

ssh will normally set the following environment variables:

     DISPLAY               The DISPLAY variable indicates the location of the X11 server.  It is automatically set by ssh to point to a value of the
                           form "hostname:n", where "hostname" indicates the host where the shell runs, and 'n' is an integer ≥ 1.  ssh uses this
                           special value to forward X11 connections over the secure channel.  The user should normally not set DISPLAY explicitly, as
                           that will render the X11 connection insecure (and will require the user to manually copy any required authorization cookies).

     HOME                  Set to the path of the user's home directory.

     LOGNAME               Synonym for USER; set for compatibility with systems that use this variable.

     MAIL                  Set to the path of the user's mailbox.

     PATH                  Set to the default PATH, as specified when compiling ssh.

     SSH_ASKPASS           If ssh needs a passphrase, it will read the passphrase from the current terminal if it was run from a terminal.  If ssh does
                           not have a terminal associated with it but DISPLAY and SSH_ASKPASS are set, it will execute the program specified by
                           SSH_ASKPASS and open an X11 window to read the passphrase.  This is particularly useful when calling ssh from a .xsession or
                           related script.  (Note that on some machines it may be necessary to redirect the input from /dev/null to make this work.)

     SSH_ASKPASS_REQUIRE   Allows further control over the use of an askpass program.  If this variable is set to "never" then ssh will never attempt to
                           use one.  If it is set to "prefer", then ssh will prefer to use the askpass program instead of the TTY when requesting
                           passwords.  Finally, if the variable is set to "force", then the askpass program will be used for all passphrase input
                           regardless of whether DISPLAY is set.

     SSH_AUTH_SOCK         Identifies the path of a UNIX-domain socket used to communicate with the agent.

     SSH_CONNECTION        Identifies the client and server ends of the connection.  The variable contains four space-separated values: client IP
                           address, client port number, server IP address, and server port number.
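
     A shell snippet can split the four fields with ordinary word splitting; since this sketch is not run inside a real session, a stand-in value is
     assigned first:

```shell
# SSH_CONNECTION holds: client-ip client-port server-ip server-port
SSH_CONNECTION="192.0.2.10 54321 192.0.2.1 22"   # stand-in value

# Word-splitting the unquoted expansion yields the four fields as $1..$4.
set -- $SSH_CONNECTION
echo "client $1:$2 -> server $3:$4"
# prints: client 192.0.2.10:54321 -> server 192.0.2.1:22
```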

     SSH_ORIGINAL_COMMAND  This variable contains the original command line if a forced command is executed.  It can be used to extract the original
                           arguments.

     SSH_TTY               This is set to the name of the tty (path to the device) associated with the current shell or command.  If the current session
                           has no tty, this variable is not set.

     SSH_TUNNEL            Optionally set by sshd(8) to contain the interface names assigned if tunnel forwarding was requested by the client.

     SSH_USER_AUTH         Optionally set by sshd(8), this variable may contain a pathname to a file that lists the authentication methods successfully
                           used when the session was established, including any public keys that were used.

     TZ                    This variable is set to indicate the present time zone if it was set when the daemon was started (i.e. the daemon passes the
                           value on to new connections).

     USER                  Set to the name of the user logging in.

     Additionally, ssh reads ~/.ssh/environment, and adds lines of the format "VARNAME=value" to the environment if the file exists and users are
     allowed to change their environment.  For more information, see the PermitUserEnvironment option in sshd_config(5).

FILES

     ~/.rhosts
             This file is used for host-based authentication (see above).  On some machines this file may need to be world-readable if the user's home
             directory is on an NFS partition, because sshd(8) reads it as root.  Additionally, this file must be owned by the user, and must not have
             write permissions for anyone else.  The recommended permission for most machines is read/write for the user, and not accessible by others.

     ~/.shosts
             This file is used in exactly the same way as .rhosts, but allows host-based authentication without permitting login with rlogin/rsh.

     ~/.ssh/
             This directory is the default location for all user-specific configuration and authentication information.  There is no general requirement
             to keep the entire contents of this directory secret, but the recommended permissions are read/write/execute for the user, and not
             accessible by others.

     ~/.ssh/authorized_keys
             Lists the public keys (DSA, ECDSA, Ed25519, RSA) that can be used for logging in as this user.  The format of this file is described in the
             sshd(8) manual page.  This file is not highly sensitive, but the recommended permissions are read/write for the user, and not accessible by
             others.

     ~/.ssh/config
             This is the per-user configuration file.  The file format and configuration options are described in ssh_config(5).  Because of the
             potential for abuse, this file must have strict permissions: read/write for the user, and not writable by others.  It may be group-writable
             provided that the group in question contains only the user.

     ~/.ssh/environment
             Contains additional definitions for environment variables; see ENVIRONMENT, above.

     ~/.ssh/id_dsa
     ~/.ssh/id_ecdsa
     ~/.ssh/id_ecdsa_sk
     ~/.ssh/id_ed25519
     ~/.ssh/id_ed25519_sk
     ~/.ssh/id_rsa
             Contains the private key for authentication.  These files contain sensitive data and should be readable by the user but not accessible by
             others (read/write/execute).  ssh will simply ignore a private key file if it is accessible by others.  It is possible to specify a
             passphrase when generating the key which will be used to encrypt the sensitive part of this file using AES-128.

     ~/.ssh/id_dsa.pub
     ~/.ssh/id_ecdsa.pub
     ~/.ssh/id_ecdsa_sk.pub
     ~/.ssh/id_ed25519.pub
     ~/.ssh/id_ed25519_sk.pub
     ~/.ssh/id_rsa.pub
             Contains the public key for authentication.  These files are not sensitive and can (but need not) be readable by anyone.

     ~/.ssh/known_hosts
             Contains a list of host keys for all hosts the user has logged into that are not already in the systemwide list of known host keys.  See
             sshd(8) for further details of the format of this file.

     ~/.ssh/rc
             Commands in this file are executed by ssh when the user logs in, just before the user's shell (or command) is started.  See the sshd(8)
             manual page for more information.

     /etc/hosts.equiv
             This file is for host-based authentication (see above).  It should only be writable by root.

     /etc/ssh/shosts.equiv
             This file is used in exactly the same way as hosts.equiv, but allows host-based authentication without permitting login with rlogin/rsh.

     /etc/ssh/ssh_config
             Systemwide configuration file.  The file format and configuration options are described in ssh_config(5).

     /etc/ssh/ssh_host_key
     /etc/ssh/ssh_host_dsa_key
     /etc/ssh/ssh_host_ecdsa_key
     /etc/ssh/ssh_host_ed25519_key
     /etc/ssh/ssh_host_rsa_key
             These files contain the private parts of the host keys and are used for host-based authentication.

     /etc/ssh/ssh_known_hosts
             Systemwide list of known host keys.  This file should be prepared by the system administrator to contain the public host keys of all
             machines in the organization.  It should be world-readable.  See sshd(8) for further details of the format of this file.

     /etc/ssh/sshrc
             Commands in this file are executed by ssh when the user logs in, just before the user's shell (or command) is started.  See the sshd(8)
             manual page for more information.

EXIT STATUS

ssh exits with the exit status of the remote command or with 255 if an error occurred.
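
This makes ssh usable in scripts that act on the remote command's result.  The snippet below shows the pattern with a local stand-in for the remote command, since no server is assumed:

```shell
# With a real server this would be: ssh example-server 'exit 7'
# The stand-in mimics a remote command that exits with status 7; an ssh
# error (e.g. connection failure) would instead yield 255.
sh -c 'exit 7'
status=$?
echo "remote command exited with $status"   # prints 7
```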

SEE ALSO

scp(1), sftp(1), ssh-add(1), ssh-agent(1), ssh-argv0(1), ssh-keygen(1), ssh-keyscan(1), tun(4), ssh_config(5), ssh-keysign(8), sshd(8)

STANDARDS

S. Lehtinen and C. Lonvick, The Secure Shell (SSH) Protocol Assigned Numbers, RFC 4250, January 2006.

T. Ylonen and C. Lonvick, The Secure Shell (SSH) Protocol Architecture, RFC 4251, January 2006.

T. Ylonen and C. Lonvick, The Secure Shell (SSH) Authentication Protocol, RFC 4252, January 2006.

T. Ylonen and C. Lonvick, The Secure Shell (SSH) Transport Layer Protocol, RFC 4253, January 2006.

T. Ylonen and C. Lonvick, The Secure Shell (SSH) Connection Protocol, RFC 4254, January 2006.

J. Schlyter and W. Griffin, Using DNS to Securely Publish Secure Shell (SSH) Key Fingerprints, RFC 4255, January 2006.

F. Cusack and M. Forssen, Generic Message Exchange Authentication for the Secure Shell Protocol (SSH), RFC 4256, January 2006.

J. Galbraith and P. Remaker, The Secure Shell (SSH) Session Channel Break Extension, RFC 4335, January 2006.

M. Bellare, T. Kohno, and C. Namprempre, The Secure Shell (SSH) Transport Layer Encryption Modes, RFC 4344, January 2006.

B. Harris, Improved Arcfour Modes for the Secure Shell (SSH) Transport Layer Protocol, RFC 4345, January 2006.

M. Friedl, N. Provos, and W. Simpson, Diffie-Hellman Group Exchange for the Secure Shell (SSH) Transport Layer Protocol, RFC 4419, March 2006.

J. Galbraith and R. Thayer, The Secure Shell (SSH) Public Key File Format, RFC 4716, November 2006.

D. Stebila and J. Green, Elliptic Curve Algorithm Integration in the Secure Shell Transport Layer, RFC 5656, December 2009.

A. Perrig and D. Song, Hash Visualization: a New Technique to improve Real-World Security, International Workshop on Cryptographic Techniques and E-Commerce (CrypTEC '99), 1999.

AUTHORS

     OpenSSH is a derivative of the original and free ssh 1.2.12 release by Tatu Ylonen.  Aaron Campbell, Bob Beck, Markus Friedl, Niels Provos, Theo de
     Raadt and Dug Song removed many bugs, re-added newer features and created OpenSSH.  Markus Friedl contributed the support for SSH protocol versions
     1.5 and 2.0.

BSD                                                                   July 15, 2020                                                                  BSD

manpage sort

SORT(1) User Commands SORT(1)

NAME

sort – sort lines of text files

SYNOPSIS

       sort [OPTION]... [FILE]...
       sort [OPTION]... --files0-from=F

DESCRIPTION

Write sorted concatenation of all FILE(s) to standard output.

With no FILE, or when FILE is -, read standard input.

Mandatory arguments to long options are mandatory for short options too. Ordering options:

       -b, --ignore-leading-blanks
              ignore leading blanks

       -d, --dictionary-order
              consider only blanks and alphanumeric characters

       -f, --ignore-case
              fold lower case to upper case characters

       -g, --general-numeric-sort
              compare according to general numerical value

       -i, --ignore-nonprinting
              consider only printable characters

       -M, --month-sort
              compare (unknown) < 'JAN' < ... < 'DEC'

       -h, --human-numeric-sort
              compare human readable numbers (e.g., 2K 1G)

       -n, --numeric-sort
              compare according to string numerical value

       -R, --random-sort
              shuffle, but group identical keys.  See shuf(1)

       --random-source=FILE
              get random bytes from FILE

       -r, --reverse
              reverse the result of comparisons

       --sort=WORD
              sort according to WORD: general-numeric -g, human-numeric -h, month -M, numeric -n, random -R, version -V

       -V, --version-sort
              natural sort of (version) numbers within text

       Other options:

       --batch-size=NMERGE
              merge at most NMERGE inputs at once; for more use temp files

       -c, --check, --check=diagnose-first
              check for sorted input; do not sort

       -C, --check=quiet, --check=silent
              like -c, but do not report first bad line

       --compress-program=PROG
              compress temporaries with PROG; decompress them with PROG -d

       --debug
              annotate the part of the line used to sort, and warn about questionable usage to stderr

       --files0-from=F
              read input from the files specified by NUL-terminated names in file F; if F is - then read names from standard input

       -k, --key=KEYDEF
              sort via a key; KEYDEF gives location and type

       -m, --merge
              merge already sorted files; do not sort

       -o, --output=FILE
              write result to FILE instead of standard output

       -s, --stable
              stabilize sort by disabling last-resort comparison

       -S, --buffer-size=SIZE
              use SIZE for main memory buffer

       -t, --field-separator=SEP
              use SEP instead of non-blank to blank transition

       -T, --temporary-directory=DIR
              use DIR for temporaries, not $TMPDIR or /tmp; multiple options specify multiple directories

       --parallel=N
              change the number of sorts run concurrently to N

       -u, --unique
              with -c, check for strict ordering; without -c, output only the first of an equal run

       -z, --zero-terminated
              line delimiter is NUL, not newline

       --help display this help and exit

       --version
              output version information and exit
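
       The --files0-from option pairs naturally with find -print0.  A minimal sketch (the file names are illustrative); note that it sorts the combined contents of the named files, not the names themselves:

       ```shell
       # Sort the combined *contents* of every .txt file that find locates;
       # NUL-delimited names survive spaces and newlines in file names.
       find . -name '*.txt' -print0 | sort --files0-from=-
       ```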

       KEYDEF is F[.C][OPTS][,F[.C][OPTS]] for start and stop position, where F is a field number and C a character position in the field; both are origin 1, and the stop position defaults to the line's end.  If neither -t nor -b is in effect, characters in a field are counted from the beginning of the preceding whitespace.  OPTS is one or more single-letter ordering options [bdfgiMhnRrV], which override global ordering options for that key.  If no key is given, use the entire line as the key.  Use --debug to diagnose incorrect key usage.
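
       For instance, to sort a tab-separated table on its second field only, numerically and in descending order (the sample data is hypothetical):

       ```shell
       TAB=$(printf '\t')
       printf 'alice\t42\nbob\t7\ncarol\t19\n' |
         sort -t "$TAB" -k2,2nr    # key = field 2 only, numeric, reversed
       ```

       Without the ,2 stop position the key would run from field 2 to the end of the line.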

       SIZE may be followed by the following multiplicative suffixes: % 1% of memory, b 1, K 1024 (default), and so on for M, G, T, P, E, Z, Y.

       *** WARNING *** The locale specified by the environment affects sort order.  Set LC_ALL=C to get the traditional sort order that uses native byte values.
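
       The difference is easy to observe: in the C locale every upper-case letter sorts before any lower-case letter, because comparison is by native byte value:

       ```shell
       # byte-value order: A(65) B(66) a(97) b(98)
       printf 'b\nA\na\nB\n' | LC_ALL=C sort
       ```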

AUTHOR

Written by Mike Haertel and Paul Eggert.

REPORTING BUGS

       GNU coreutils online help: <https://www.gnu.org/software/coreutils/>
       Report any translation bugs to <https://translationproject.org/team/>

COPYRIGHT

       Copyright © 2020 Free Software Foundation, Inc.  License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
       This is free software: you are free to change and redistribute it.  There is NO WARRANTY, to the extent permitted by law.

SEE ALSO

shuf(1), uniq(1)

       Full documentation <https://www.gnu.org/software/coreutils/sort>
       or available locally via: info '(coreutils) sort invocation'

GNU coreutils 8.32                                                                                                                                  September 2020                                                                                                                                            SORT(1)

manpage sed

 

SED(1) User Commands SED(1)

NAME

sed – stream editor for filtering and transforming text

SYNOPSIS

sed [OPTION]… {script-only-if-no-other-script} [input-file]…

DESCRIPTION

       Sed is a stream editor.  A stream editor is used to perform basic text transformations on an input stream (a file or input from a pipeline).  While in some ways similar to an editor which permits scripted edits (such as ed), sed works by making only one pass over the input(s), and is consequently more
       efficient.  But it is sed's ability to filter text in a pipeline which particularly distinguishes it from other types of editors.

       -n, --quiet, --silent

              suppress automatic printing of pattern space

       --debug

              annotate program execution

       -e script, --expression=script

              add the script to the commands to be executed

       -f script-file, --file=script-file

              add the contents of script-file to the commands to be executed

       --follow-symlinks

              follow symlinks when processing in place

       -i[SUFFIX], --in-place[=SUFFIX]

              edit files in place (makes backup if SUFFIX supplied)

       -l N, --line-length=N

              specify the desired line-wrap length for the `l' command

       --posix

              disable all GNU extensions.

       -E, -r, --regexp-extended

              use extended regular expressions in the script (for portability use POSIX -E).

       -s, --separate

              consider files as separate rather than as a single, continuous long stream.

       --sandbox

              operate in sandbox mode (disable e/r/w commands).

       -u, --unbuffered

              load minimal amounts of data from the input files and flush the output buffers more often

       -z, --null-data

              separate lines by NUL characters

       --help
              display this help and exit

       --version
              output version information and exit

       If no -e, --expression, -f, or --file option is given, then the first non-option argument is taken as the sed script to interpret.  All remaining arguments are names of input files; if no input files are specified, then the standard input is read.
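
       As a short sketch of in-place editing with -i (the file name is arbitrary); note that GNU sed takes the backup suffix with no intervening space:

       ```shell
       f=$(mktemp)
       printf 'colour\n' > "$f"
       sed -i.bak 's/colour/color/' "$f"   # edits $f in place, keeps $f.bak
       cat "$f"        # color
       cat "$f.bak"    # colour
       ```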

       GNU sed home page: <https://www.gnu.org/software/sed/>.  General help using GNU software: <https://www.gnu.org/gethelp/>.  E-mail bug reports to: <bug-sed@gnu.org>.

COMMAND SYNOPSIS

This is just a brief synopsis of sed commands to serve as a reminder to those who already know sed; other documentation (such as the texinfo document) must be consulted for fuller descriptions.

Zero-address “commands”

       : label
              Label for b and t commands.

       #comment
              The comment extends until the next newline (or the end of a -e script fragment).

       }      The closing bracket of a { } block.

Zero- or One- address commands

       =      Print the current line number.

       a \
       text   Append text, which has each embedded newline preceded by a backslash.

       i \
       text   Insert text, which has each embedded newline preceded by a backslash.

       q [exit-code]
              Immediately quit the sed script without processing any more input, except that if auto-print is not disabled the current pattern space will be printed.  The exit code argument is a GNU extension.

       Q [exit-code]
              Immediately quit the sed script without processing any more input.  This is a GNU extension.

       r filename
              Append text read from filename.

       R filename
              Append a line read from filename.  Each invocation of the command reads a line from the file.  This is a GNU extension.

Commands which accept address ranges

{ Begin a block of commands (end with a }).

       b label
              Branch to label; if label is omitted, branch to end of script.

       c \

       text   Replace the selected lines with text, which has each embedded newline preceded by a backslash.

       d      Delete pattern space.  Start next cycle.

       D      If pattern space contains no newline, start a normal new cycle as if the d command was issued.  Otherwise, delete text in the pattern space up to the first newline, and restart cycle with the resultant pattern space, without reading a new line of input.

       h H    Copy/append pattern space to hold space.

       g G    Copy/append hold space to pattern space.

       l      List out the current line in a ``visually unambiguous'' form.

       l width
              List out the current line in a ``visually unambiguous'' form, breaking it at width characters.  This is a GNU extension.

       n N    Read/append the next line of input into the pattern space.

       p      Print the current pattern space.

       P      Print up to the first embedded newline of the current pattern space.

       s/regexp/replacement/
              Attempt to match regexp against the pattern space.  If successful, replace that portion matched with replacement.  The replacement may contain the special character & to refer to that portion of the pattern space which matched, and the special escapes \1 through \9 to refer to the corresponding
              matching sub-expressions in the regexp.

       t label
              If a s/// has done a successful substitution since the last input line was read and since the last t or T command, then branch to label; if label is omitted, branch to end of script.

       T label
              If no s/// has done a successful substitution since the last input line was read and since the last t or T command, then branch to label; if label is omitted, branch to end of script.  This is a GNU extension.

       w filename
              Write the current pattern space to filename.

       W filename
              Write the first line of the current pattern space to filename.  This is a GNU extension.

       x      Exchange the contents of the hold and pattern spaces.

       y/source/dest/
              Transliterate the characters in the pattern space which appear in source to the corresponding character in dest.
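
       Two short examples of the commands above: backreferences in s///, and the classic hold-space idiom (G, h, p) that prints the input in reverse line order:

       ```shell
       # \1 and \2 refer to the parenthesized sub-expressions
       printf 'hello world\n' | sed 's/\(hello\) \(world\)/\2, \1!/'   # world, hello!

       # reverse line order: prepend the hold space to every line but the
       # first, save the result back to hold, and print it at the last line
       printf '1\n2\n3\n' | sed -n '1!G;h;$p'
       ```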

Addresses

       Sed commands can be given with no addresses, in which case the command will be executed for all input lines; with one address, in which case the command will only be executed for input lines which match that address; or with two addresses, in which case the command will be executed for all input lines
       which match the inclusive range of lines starting from the first address and continuing to the second address.  Three things to note about address ranges: the syntax is addr1,addr2 (i.e., the addresses are separated by a comma); the line which addr1 matched will always be accepted, even if addr2 selects an earlier line; and if addr2 is a regexp, it will not be tested against the line that addr1 matched.

       After the address (or address-range), and before the command, a !  may be inserted, which specifies that the command shall only be executed if the address (or address-range) does not match.

       The following address types are supported:

       number Match only the specified line number (which increments cumulatively across files, unless the -s option is specified on the command line).

       first~step
              Match  every  step'th  line  starting  with line first.  For example, ``sed -n 1~2p'' will print all the odd-numbered lines in the input stream, and the address 2~5 will match every fifth line, starting with the second.  first can be zero; in this case, sed operates as if it were equal to step.
              (This is an extension.)

       $      Match the last line.

       /regexp/
              Match lines matching the regular expression regexp.  Matching is performed on the current pattern space, which can be modified with commands such as ``s///''.

       \cregexpc
              Match lines matching the regular expression regexp.  The c may be any character.

       GNU sed also supports some special 2-address forms:

       0,addr2
              Start out in "matched first address" state, until addr2 is found.  This is similar to 1,addr2, except that if addr2 matches the very first line of input the 0,addr2 form will be at the end of its range, whereas the 1,addr2 form will still be at the beginning of its range.  This works only  when
              addr2 is a regular expression.

       addr1,+N
              Will match addr1 and the N lines following addr1.

       addr1,~N
              Will match addr1 and the lines following addr1 until the next line whose input line number is a multiple of N.
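
       The difference between 0,addr2 and 1,addr2 shows up when the very first line already matches addr2:

       ```shell
       # 0,/foo/ may end its range on line 1, so only the first line prints
       printf 'foo\nbar\nfoo\n' | sed -n '0,/foo/p'    # foo

       # 1,/foo/ never tests addr2 against line 1, so the range runs to line 3
       printf 'foo\nbar\nfoo\n' | sed -n '1,/foo/p'    # foo bar foo
       ```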

REGULAR EXPRESSIONS

       POSIX.2 BREs should be supported, but they aren't completely because of performance problems.  The \n sequence in a regular expression matches the newline character, and similarly for \a, \t, and other sequences.  The -E option switches to using extended regular expressions instead; it has been supported for years by GNU sed, and is now included in POSIX.

BUGS

E-mail bug reports to bug-sed@gnu.org.  Also, please include the output of "sed --version" in the body of your report if at all possible.

AUTHOR

Written by Jay Fenlason, Tom Lord, Ken Pizzini, Paolo Bonzini, Jim Meyering, and Assaf Gordon. GNU sed home page: <https://www.gnu.org/software/sed/>. General help using GNU software: <https://www.gnu.org/gethelp/>. E-mail bug reports to: <bug-sed@gnu.org>.

COPYRIGHT

       Copyright © 2018 Free Software Foundation, Inc.  License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
       This is free software: you are free to change and redistribute it.  There is NO WARRANTY, to the extent permitted by law.

SEE ALSO

awk(1), ed(1), grep(1), tr(1), perlre(1), sed.info, any of various books on sed, the sed FAQ (http://sed.sf.net/grabbag/tutorials/sedfaq.txt), http://sed.sf.net/grabbag/.

The full documentation for sed is maintained as a Texinfo manual. If the info and sed programs are properly installed at your site, the command

info sed

should give you access to the complete manual.

sed 4.7 December 2018 SED(1)


manpage tar

TAR(1) GNU TAR Manual TAR(1)

NAME

tar – an archiving utility

SYNOPSIS

Traditional usage

tar {A|c|d|r|t|u|x}[GnSkUWOmpsMBiajJzZhPlRvwo] [ARG…]

UNIX-style usage

tar -A [OPTIONS] ARCHIVE ARCHIVE

tar -c [-f ARCHIVE] [OPTIONS] [FILE…]

tar -d [-f ARCHIVE] [OPTIONS] [FILE…]

tar -t [-f ARCHIVE] [OPTIONS] [MEMBER…]

tar -r [-f ARCHIVE] [OPTIONS] [FILE…]

tar -u [-f ARCHIVE] [OPTIONS] [FILE…]

tar -x [-f ARCHIVE] [OPTIONS] [MEMBER…]

GNU-style usage

tar {--catenate|--concatenate} [OPTIONS] ARCHIVE ARCHIVE

tar --create [--file ARCHIVE] [OPTIONS] [FILE…]

tar {--diff|--compare} [--file ARCHIVE] [OPTIONS] [FILE…]

tar --delete [--file ARCHIVE] [OPTIONS] [MEMBER…]

tar --append [-f ARCHIVE] [OPTIONS] [FILE…]

tar --list [-f ARCHIVE] [OPTIONS] [MEMBER…]

tar --test-label [--file ARCHIVE] [OPTIONS] [LABEL…]

tar --update [--file ARCHIVE] [OPTIONS] [FILE…]

tar {--extract|--get} [-f ARCHIVE] [OPTIONS] [MEMBER…]

NOTE

       This  manpage  is  a short description of GNU tar.  For a detailed discussion, including examples and usage recommendations, refer to the GNU Tar
       Manual available in texinfo format.  If the info reader and the tar documentation are properly installed on your system, the command

           info tar

       should give you access to the complete manual.

       You can also view the manual using the info mode in emacs(1), or find it in various formats online at

           http://www.gnu.org/software/tar/manual

       If any discrepancies occur between this manpage and the GNU Tar Manual, the latter shall be considered the authoritative source.

DESCRIPTION

       GNU tar is an archiving program designed to store multiple files in a single file (an archive), and to manipulate such archives.  The archive can
       be either a regular file or a device (e.g. a tape drive, hence the name of the program, which stands for tape archiver), which can be located either on the local or on a remote machine.

Option styles

       Options to GNU tar can be given in three different styles.  In traditional style, the first argument is a cluster of option letters and all subsequent arguments supply arguments to those options that require them.  The arguments are read in the same order as the option letters.  Any command line words that remain after all options have been processed are treated as non-optional arguments: file or archive member names.

       For example, the c option requests creating the archive, the v option requests the verbose operation, and the f option takes an argument that
       sets the name of the archive to operate upon.  The following command, written in the traditional style, instructs tar to store all files from the
       directory /etc into the archive file etc.tar, verbosely listing the files being archived:

       tar cfv etc.tar /etc

       In UNIX or short-option style, each option letter is prefixed with a single dash, as in other command line utilities.  If an option takes an argument, the argument follows it, either as a separate command line word, or immediately following the option.  However, if the option takes an optional argument, the argument must follow the option letter without any intervening whitespace, as in -g/tmp/snar.db.

       Any number of options not taking arguments can be clustered together after a single dash, e.g. -vkp.  Options that take arguments (whether mandatory or optional) can appear at the end of such a cluster, e.g. -vkpf a.tar.

       The example command above written in the short-option style could look like:

       tar -cvf etc.tar /etc
       or
       tar -c -v -f etc.tar /etc

       In GNU or long-option style, each option begins with two dashes and has a meaningful name, consisting of lower-case letters and dashes.  When
       used, the long option can be abbreviated to its initial letters, provided that this does not create ambiguity.  Arguments to long options are
       supplied either as a separate command line word, immediately following the option, or separated from the option by an equals sign with no intervening whitespace.  Optional arguments must always use the latter method.

       Here are several ways of writing the example command in this style:

       tar --create --file etc.tar --verbose /etc
       or (abbreviating some options):
       tar --cre --file=etc.tar --verb /etc

       The options in all three styles can be intermixed, although doing so with old options is not encouraged.

Operation mode

       The options listed in the table below tell GNU tar what operation it is to perform.  Exactly one of them must be given.  Meaning of  non-optional
       arguments depends on the operation mode requested.

       -A, --catenate, --concatenate
              Append archive to the end of another archive.  The arguments are treated as the names of archives to append.  All archives must be of the
              same format as the archive they are appended to, otherwise the resulting archive might be unusable with non-GNU implementations of tar.
              Notice also that when more than one archive is given, the members from archives other than the first one will be accessible in the resulting archive only if using the -i (--ignore-zeros) option.

              Compressed archives cannot be concatenated.

       -c, --create
              Create a new archive.  Arguments supply the names of the files to be archived.  Directories are archived recursively, unless the --no-recursion option is given.

       -d, --diff, --compare
              Find  differences  between archive and file system.  The arguments are optional and specify archive members to compare.  If not given, the
              current working directory is assumed.

       --delete
              Delete from the archive.  The arguments supply names of the archive members to be removed.  At least one argument must be given.

              This option does not operate on compressed archives.  There is no short option equivalent.

       -r, --append
              Append files to the end of an archive.  Arguments have the same meaning as for -c (--create).

       -t, --list
              List the contents of an archive.  Arguments are optional.  When given, they specify the names of the members to list.

       --test-label
              Test the archive volume label and exit.  When used without arguments, it prints the volume label (if any) and exits with status 0.  When
              one or more command line arguments are given, tar compares the volume label with each argument.  It exits with code 0 if a match is
              found, and with code 1 otherwise.  No output is displayed, unless used together with the -v (--verbose) option.

              There is no short option equivalent for this option.

       -u, --update
              Append files which are newer than the corresponding copy in the archive.  Arguments have the same meaning as with the -c and -r options.
              Notice that newer files don't replace their old archive copies, but instead are appended to the end of the archive.  The resulting archive
              can thus contain several members of the same name, corresponding to various versions of the same file.

       -x, --extract, --get
              Extract files from an archive.  Arguments are optional.  When given, they specify names of the archive members to be extracted.

       --show-defaults
              Show built-in defaults for various tar options and exit.  No arguments are allowed.

       -?, --help
              Display a short option summary and exit.  No arguments allowed.

       --usage
              Display a list of available options and exit.  No arguments allowed.

       --version
              Print program version and copyright information and exit.

OPTIONS

Operation modifiers

       --check-device
              Check device numbers when creating incremental archives (default).

       -g, --listed-incremental=FILE
              Handle new GNU-format incremental backups.  FILE is the name of a snapshot file, where tar stores additional information which is used to
              decide which files changed since the previous incremental dump and, consequently, must be dumped again.  If FILE does not exist when creating an archive, it will be created and all files will be added to the resulting archive (the level 0 dump).  To create incremental archives of non-zero level N, create a copy of the snapshot file created during the level N-1 dump, and use it as FILE.

              When listing or extracting, the actual contents of FILE is not inspected; it is needed only due to syntactical requirements.  It is therefore common practice to use /dev/null in its place.

       --hole-detection=METHOD
              Use METHOD to detect holes in sparse files.  This option implies --sparse.  Valid values for METHOD are seek and  raw.   Default  is  seek
              with fallback to raw when not applicable.

       -G, --incremental
              Handle old GNU-format incremental backups.

       --ignore-failed-read
              Do not exit with nonzero on unreadable files.

       --level=NUMBER
              Set  dump level for created listed-incremental archive.  Currently only --level=0 is meaningful: it instructs tar to truncate the snapshot
              file before dumping, thereby forcing a level 0 dump.

       -n, --seek
              Assume the archive is seekable.  Normally tar determines automatically whether the archive can be seeked or not.  This option is  intended
              for  use in cases when such recognition fails.  It takes effect only if the archive is open for reading (e.g. with --list or --extract opâАР
              tions).

       --no-check-device
              Do not check device numbers when creating incremental archives.

       --no-seek
              Assume the archive is not seekable.

       --occurrence[=N]
              Process only the Nth occurrence of each file in the archive.  This option is valid only when used with one of the  following  subcommands:
              --delete, --diff, --extract or --list and when a list of files is given either on the command line or via the -T option.  The default N is
              1.

       --restrict
              Disable the use of some potentially harmful options.

       --sparse-version=MAJOR[.MINOR]
              Set version of the sparse format to use (implies --sparse).  Valid argument values are 0.0, 0.1, and 1.0.
              For a detailed discussion of sparse formats, refer to the GNU Tar Manual, appendix D, "Sparse Formats".  Using the info reader, it can be accessed by running the following command: info tar 'Sparse Formats'.

       -S, --sparse
              Handle sparse files efficiently.  Some files in the file system may have segments which were actually never written (quite often these are
              database files created by such systems as DBM).  When given this option, tar attempts to determine if the file is sparse prior to archiving it, and if so, to reduce the resulting archive size by not dumping empty parts of the file.
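
       The --listed-incremental workflow described above can be sketched as follows (all paths are illustrative):

       ```shell
       d=$(mktemp -d); cd "$d"
       mkdir data; echo one > data/a
       tar -cf level0.tar -g snapshot data      # level 0: dumps everything
       cp snapshot snapshot.1                   # work on a copy for level 1
       echo two > data/b
       tar -cf level1.tar -g snapshot.1 data    # level 1: only data/b is new
       tar -tf level1.tar                       # lists data/ and data/b, not data/a
       ```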

Overwrite control

These options control tar actions when extracting a file over an existing copy on disk.

       -k, --keep-old-files
              Don't replace existing files when extracting.

       --keep-newer-files
              Don't replace existing files that are newer than their archive copies.

       --keep-directory-symlink
              Don't replace existing symlinks to directories when extracting.

       --no-overwrite-dir
              Preserve metadata of existing directories.

       --one-top-level[=DIR]
              Extract all files into DIR, or, if used without argument, into a subdirectory named by the base name of the archive (minus standard compression suffixes recognizable by --auto-compress).

       --overwrite
              Overwrite existing files when extracting.

       --overwrite-dir
              Overwrite metadata of existing directories when extracting (default).

       --recursive-unlink
              Recursively remove all files in the directory prior to extracting it.

       --remove-files
              Remove files from disk after adding them to the archive.

       --skip-old-files
              Don't replace existing files when extracting, silently skip over them.

       -U, --unlink-first
              Remove each file prior to extracting over it.

       -W, --verify
              Verify the archive after writing it.

Output stream selection

       --ignore-command-error
              Ignore subprocess exit codes.

       --no-ignore-command-error
              Treat non-zero exit codes of children as error (default).

       -O, --to-stdout
              Extract files to standard output.

       --to-command=COMMAND
              Pipe extracted files to COMMAND.  The argument is the pathname of an external program, optionally with command line arguments.  The program will be invoked and the contents of the file being extracted supplied to it on its standard input.  Additional data will be supplied
              via the following environment variables:

              TAR_FILETYPE
                     Type of the file. It is a single letter with the following meaning:

                             f           Regular file
                             d           Directory
                             l           Symbolic link
                             h           Hard link
                             b           Block device
                             c           Character device

                     Currently only regular files are supported.

              TAR_MODE
                     File mode, an octal number.

              TAR_FILENAME
                     The name of the file.

              TAR_REALNAME
                     Name of the file as stored in the archive.

              TAR_UNAME
                     Name of the file owner.

              TAR_GNAME
                     Name of the file owner group.

              TAR_ATIME
                     Time  of  last access. It is a decimal number, representing seconds since the Epoch.  If the archive provides times with nanosecond
                     precision, the nanoseconds are appended to the timestamp after a decimal point.

              TAR_MTIME
                     Time of last modification.

              TAR_CTIME
                     Time of last status change.

              TAR_SIZE
                     Size of the file.

              TAR_UID
                     UID of the file owner.

              TAR_GID
                     GID of the file owner.

              Additionally, the following variables contain information about tar operation mode and the archive being processed:

              TAR_VERSION
                     GNU tar version number.

              TAR_ARCHIVE
                     The name of the archive tar is processing.

              TAR_BLOCKING_FACTOR
                     Current blocking factor, i.e. number of 512-byte blocks in a record.

              TAR_VOLUME
                     Ordinal number of the volume tar is processing (set if reading a multi-volume archive).

              TAR_FORMAT
                     Format of the archive being processed.  One of: gnu, oldgnu, posix, ustar, v7.

              TAR_SUBCOMMAND
                     A short option (with a leading dash) describing the operation tar is executing.
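
       A sketch of --to-command using the variables above (the script and archive names are made up).  The helper must consume its standard input, which carries each member's contents:

       ```shell
       d=$(mktemp -d); cd "$d"
       printf 'hello' > member.txt
       tar -cf a.tar member.txt

       cat > show.sh <<'EOF'
       #!/bin/sh
       cat > /dev/null                          # drain the member's contents
       echo "$TAR_FILENAME ($TAR_SIZE bytes)"
       EOF
       chmod +x show.sh

       tar -xf a.tar --to-command=./show.sh     # member.txt (5 bytes)
       ```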

Handling of file attributes

       --atime-preserve[=METHOD]
              Preserve access times on dumped files, either by restoring the times after reading (METHOD=replace, this is the default) or by not setting
              the times in the first place (METHOD=system).

       --delay-directory-restore
              Delay  setting  modification  times and permissions of extracted directories until the end of extraction.  Use this option when extracting
              from an archive which has unusual member ordering.

       --group=NAME[:GID]
              Force NAME as group for added files.  If GID is not supplied, NAME can be either a user name or numeric GID.  In  this  case  the  missing
              part (GID or name) will be inferred from the current host's group database.

              When used with --group-map=FILE, affects only those files whose owner group is not listed in FILE.

       --group-map=FILE
              Read  group translation map from FILE.  Empty lines are ignored.  Comments are introduced with # sign and extend to the end of line.  Each
              non-empty line in FILE defines translation for a single group.  It must consist of two fields, delimited by any amount of whitespace:

              OLDGRP NEWGRP[:NEWGID]

              OLDGRP is either a valid group name or a GID prefixed with +.  Unless NEWGID is supplied, NEWGRP must also be either a valid group name or
              a +GID.  Otherwise, both NEWGRP and NEWGID need not be listed in the system group database.

              As a result, each input file with owner group OLDGRP will be stored in archive with owner group NEWGRP and GID NEWGID.
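               As an illustration of the map format described above (the map file name, the group name "backup", and GID 4000 are all made up for this sketch):

```shell
# Create a file to archive.
echo example > data.txt

# Map the invoking user's primary group (given in +GID form) to the
# hypothetical group "backup" with GID 4000 inside the archive.
echo "+$(id -g) backup:4000" > group.map

tar --create --file=archive.tar --group-map=group.map data.txt

# The verbose listing shows the translated group name.
tar --list --verbose --file=archive.tar
```

               The files on disk are unchanged; only the ownership recorded in the archive is translated.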

       --mode=CHANGES
              Force symbolic mode CHANGES for added files.

       --mtime=DATE-OR-FILE
              Set mtime for added files.  DATE-OR-FILE is either a date/time in almost arbitrary format, or the name of an existing file.  In the latter
              case the mtime of that file will be used.

       -m, --touch
              Don't extract file modified time.

       --no-delay-directory-restore
              Cancel the effect of the prior --delay-directory-restore option.

       --no-same-owner
              Extract files as yourself (default for ordinary users).

       --no-same-permissions
              Apply the user's umask when extracting permissions from the archive (default for ordinary users).

       --numeric-owner
              Always use numbers for user/group names.

       --owner=NAME[:UID]
              Force NAME as owner for added files.  If UID is not supplied, NAME can be either a user name or numeric UID.  In  this  case  the  missing
              part (UID or name) will be inferred from the current host's user database.

              When used with --owner-map=FILE, affects only those files whose owner is not listed in FILE.

       --owner-map=FILE
              Read  owner translation map from FILE.  Empty lines are ignored.  Comments are introduced with # sign and extend to the end of line.  Each
              non-empty line in FILE defines translation for a single UID.  It must consist of two fields, delimited by any amount of whitespace:

              OLDUSR NEWUSR[:NEWUID]

              OLDUSR is either a valid user name or a UID prefixed with +.  Unless NEWUID is supplied, NEWUSR must also be either a valid user name or a
              +UID.  Otherwise, both NEWUSR and NEWUID need not be listed in the system user database.

              As a result, each input file owned by OLDUSR will be stored in archive with owner name NEWUSR and UID NEWUID.

       -p, --preserve-permissions, --same-permissions
               Extract information about file permissions (default for superuser).

       --same-owner
              Try extracting files with the same ownership as exists in the archive (default for superuser).

       -s, --preserve-order, --same-order
               Sort names to extract to match archive.

       --sort=ORDER
              When creating an archive, sort directory entries according to ORDER, which is one of none, name, or inode.

              The default is --sort=none, which stores archive members in the same order as returned by the operating system.

              Using --sort=name ensures the member ordering in the created archive is uniform and reproducible.

              Using  --sort=inode  reduces the number of disk seeks made when creating the archive and thus can considerably speed up archivation.  This
              sorting order is supported only if the underlying system provides the necessary information.
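               A sketch of the predictable member ordering that --sort=name provides (the directory and file names are illustrative):

```shell
mkdir -p project
touch project/b.c project/a.c project/c.c

# With --sort=name, member order no longer depends on the order in
# which the operating system returns directory entries.
tar --sort=name -cf project.tar project
tar -tf project.tar
```

               Combined with fixed --mtime, --owner and --group values, this is a common ingredient of reproducible builds.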

Extended file attributes

       --acls Enable POSIX ACLs support.

       --no-acls
              Disable POSIX ACLs support.

       --selinux
              Enable SELinux context support.

       --no-selinux
              Disable SELinux context support.

       --xattrs
              Enable extended attributes support.

       --no-xattrs
              Disable extended attributes support.

       --xattrs-exclude=PATTERN
              Specify the exclude pattern for xattr keys.  PATTERN is a POSIX regular expression, e.g. --xattrs-exclude='^user.', to exclude  attributes
              from the user namespace.

       --xattrs-include=PATTERN
              Specify the include pattern for xattr keys.  PATTERN is a POSIX regular expression.

Device selection and switching

       -f, --file=ARCHIVE
              Use  archive  file or device ARCHIVE.  If this option is not given, tar will first examine the environment variable `TAPE'.  If it is set,
              its value will be used as the archive name.  Otherwise, tar will assume the compiled-in default.  The default value can be inspected
              either using the --show-defaults option, or at the end of the tar --help output.

              An  archive name that has a colon in it specifies a file or device on a remote machine.  The part before the colon is taken as the machine
              name or IP address, and the part after it as the file or device pathname, e.g.:

              --file=remotehost:/dev/sr0

              An optional username can be prefixed to the hostname, placing a @ sign between them.

              By default, the remote host is accessed via the rsh(1) command.  Nowadays it is common to use ssh(1) instead.  You can do so by giving the
              following command line option:

              --rsh-command=/usr/bin/ssh

              The  remote  machine should have the rmt(8) command installed.  If its pathname does not match tar's default, you can inform tar about the
              correct pathname using the --rmt-command option.

       --force-local
              Archive file is local even if it has a colon.
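               A short sketch of the colon handling (the archive and file names are illustrative):

```shell
mkdir -p files
touch files/notes.txt

# Without --force-local, tar would try to reach a host named "backup"
# over rsh/ssh; with it, the colon is taken literally and the archive
# is an ordinary local file.
tar --force-local -cf backup:etc.tar files
tar --force-local -tf backup:etc.tar
```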

       -F, --info-script=COMMAND, --new-volume-script=COMMAND
              Run COMMAND at the end of each tape (implies -M).  The command can include arguments.  When started, it  will  inherit  tar's  environment
              plus the following variables:

              TAR_VERSION
                     GNU tar version number.

              TAR_ARCHIVE
                     The name of the archive tar is processing.

              TAR_BLOCKING_FACTOR
                     Current blocking factor, i.e. number of 512-byte blocks in a record.

              TAR_VOLUME
                     Ordinal number of the volume tar is processing (set if reading a multi-volume archive).

              TAR_FORMAT
                     Format of the archive being processed.  One of: gnu, oldgnu, posix, ustar, v7.

              TAR_SUBCOMMAND
                     A short option (with a leading dash) describing the operation tar is executing.

              TAR_FD File descriptor which can be used to communicate the new volume name to tar.

              If the info script fails, tar exits; otherwise, it begins writing the next volume.

       -L, --tape-length=N
              Change  tape after writing Nx1024 bytes.  If N is followed by a size suffix (see the subsection Size suffixes below), the suffix specifies
              the multiplicative factor to be used instead of 1024.

              This option implies -M.

       -M, --multi-volume
              Create/list/extract multi-volume archive.

       --rmt-command=COMMAND
              Use COMMAND instead of rmt when accessing remote archives.  See the description of the -f option, above.

       --rsh-command=COMMAND
              Use COMMAND instead of rsh when accessing remote archives.  See the description of the -f option, above.

       --volno-file=FILE
              When this option is used in conjunction with --multi-volume, tar will keep track of which volume of a multi-volume archive it  is  working
              in FILE.

Device blocking

       -b, --blocking-factor=BLOCKS
              Set record size to BLOCKSx512 bytes.

       -B, --read-full-records
              When listing or extracting, accept incomplete input records after end-of-file marker.

       -i, --ignore-zeros
              Ignore zeroed blocks in archive.  Normally two consecutive 512-blocks filled with zeroes mean EOF and tar stops reading after encountering
              them.  This option instructs it to read further and is useful when reading archives created with the -A option.

       --record-size=NUMBER
              Set record size.  NUMBER is the number of bytes per record.  It must be a multiple of 512.  It can be suffixed with a size suffix, e.g.
              --record-size=10K, for 10 Kilobytes.  See the subsection Size suffixes, for a list of valid suffixes.
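               The padding to whole records can be observed directly; a sketch (the 20K value is illustrative):

```shell
echo hello > small.txt

# With a 20 KiB record size, the archive length is padded with zero
# blocks up to a multiple of 20480 bytes.
tar --record-size=20K -cf small.tar small.txt
wc -c < small.tar
```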

Archive format selection

       -H, --format=FORMAT
              Create archive of the given format.  Valid formats are:

              gnu    GNU tar 1.13.x format

              oldgnu GNU format as per tar <= 1.12.

              pax, posix
                     POSIX 1003.1-2001 (pax) format.

              ustar  POSIX 1003.1-1988 (ustar) format.

              v7     Old V7 tar format.

       --old-archive, --portability
              Same as --format=v7.

       --pax-option=keyword[[:]=value][,keyword[[:]=value]]...
              Control pax keywords when creating PAX archives (-H pax).  This option is equivalent to the -o option of the pax(1) utility.

       --posix
              Same as --format=posix.

       -V, --label=TEXT
              Create archive with volume name TEXT.  If listing or extracting, use TEXT as a globbing pattern for volume name.

Compression options

       -a, --auto-compress
              Use archive suffix to determine the compression program.

       -I, --use-compress-program=COMMAND
              Filter data through COMMAND.  It must accept the -d option, for decompression.  The argument can contain command line options.

       -j, --bzip2
              Filter the archive through bzip2(1).

       -J, --xz
              Filter the archive through xz(1).

       --lzip Filter the archive through lzip(1).

       --lzma Filter the archive through lzma(1).

       --lzop Filter the archive through lzop(1).

       --no-auto-compress
              Do not use archive suffix to determine the compression program.

       -z, --gzip, --gunzip, --ungzip
              Filter the archive through gzip(1).

       -Z, --compress, --uncompress
              Filter the archive through compress(1).

       --zstd Filter the archive through zstd(1).

Local file selection

       --add-file=FILE
              Add FILE to the archive (useful if its name starts with a dash).

       --backup[=CONTROL]
              Backup before removal.  The CONTROL argument, if supplied, controls the backup policy.  Its valid values are:

              none, off
                     Never make backups.

              t, numbered
                     Make numbered backups.

              nil, existing
                     Make numbered backups if numbered backups exist, simple backups otherwise.

              never, simple
                     Always make simple backups

              If CONTROL is not given, the value is taken from the VERSION_CONTROL environment variable.  If it is not set, existing is assumed.

       -C, --directory=DIR
              Change to DIR before performing any operations.  This option is order-sensitive, i.e. it affects all options that follow.

       --exclude=PATTERN
              Exclude files matching PATTERN, a glob(3)-style wildcard pattern.
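               As a sketch (the file names are illustrative), a glob pattern keeps object files out of the archive:

```shell
mkdir -p src
touch src/main.c src/main.o

# main.o matches the pattern and is skipped; main.c is stored.
tar -cf src.tar --exclude='*.o' src
tar -tf src.tar
```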

       --exclude-backups
              Exclude backup and lock files.

       --exclude-caches
              Exclude contents of directories containing file CACHEDIR.TAG, except for the tag file itself.

       --exclude-caches-all
              Exclude directories containing file CACHEDIR.TAG and the file itself.

       --exclude-caches-under
              Exclude everything under directories containing CACHEDIR.TAG

       --exclude-ignore=FILE
              Before dumping a directory, see if it contains FILE.  If so, read exclusion patterns from this file.  The patterns affect only the
              directory itself.

       --exclude-ignore-recursive=FILE
              Same as --exclude-ignore, except that patterns from FILE affect both the directory and all its subdirectories.

       --exclude-tag=FILE
              Exclude contents of directories containing FILE, except for FILE itself.

       --exclude-tag-all=FILE
              Exclude directories containing FILE.

       --exclude-tag-under=FILE
              Exclude everything under directories containing FILE.

       --exclude-vcs
              Exclude version control system directories.

       --exclude-vcs-ignores
              Exclude files that match patterns read from VCS-specific ignore files.  Supported  files  are:  .cvsignore,  .gitignore,  .bzrignore,  and
              .hgignore.

       -h, --dereference
              Follow symlinks; archive and dump the files they point to.

       --hard-dereference
              Follow hard links; archive and dump the files they refer to.

       -K, --starting-file=MEMBER
              Begin at the given member in the archive.

       --newer-mtime=DATE
              Work on files whose data changed after the DATE.  If DATE starts with / or . it is taken to be a file name; the mtime of that file is used
              as the date.

       --no-null
              Disable the effect of the previous --null option.

       --no-recursion
              Avoid descending automatically in directories.

       --no-unquote
              Do not unquote input file or member names.

       --no-verbatim-files-from
              Treat each line read from a file list as if it were supplied in the command line.  I.e., leading and trailing whitespace is removed
              and, if the resulting string begins with a dash, it is treated as a tar command line option.

              This is the default behavior.  The --no-verbatim-files-from option is provided as a way to restore it after --verbatim-files-from option.

              This option is positional: it affects all --files-from options that occur after it in the command line, until the
              --verbatim-files-from option or end of command line, whichever occurs first.

              It is implied by the --no-null option.

       --null Instruct subsequent -T options to read null-terminated names verbatim (disables special handling of names that start with a dash).

              See also --verbatim-files-from.

       -N, --newer=DATE, --after-date=DATE
              Only store files newer than DATE.  If DATE starts with / or . it is taken to be a file name; the mtime of that file is used as the date.

       --one-file-system
              Stay in local file system when creating archive.

       -P, --absolute-names
              Don't strip leading slashes from file names when creating archives.

       --recursion
              Recurse into directories (default).

       --suffix=STRING
              Backup before removal, override usual suffix.  Default suffix is ~, unless overridden by environment variable SIMPLE_BACKUP_SUFFIX.

       -T, --files-from=FILE
              Get names to extract or create from FILE.

              Unless specified otherwise, the FILE must contain a list of names separated by ASCII LF (i.e. one name per line).  The names read are
              handled the same way as command line arguments.  They undergo quote removal and word splitting, and any string that starts with a - is
              handled as a tar command line option.

              If this behavior is undesirable, it can be turned off using the --verbatim-files-from option.

              The --null option instructs tar that the names in FILE are separated by ASCII NUL character, instead of LF.  It is useful if the  list  is
              generated by find(1) -print0 predicate.
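               A sketch of the find(1) pairing described above (the paths are illustrative):

```shell
mkdir -p docs
touch docs/a.txt 'docs/name with spaces.txt'

# find emits NUL-separated names; --null makes the following
# --files-from read them verbatim, so embedded whitespace survives.
find docs -type f -print0 | tar -cf docs.tar --null --files-from=-
tar -tf docs.tar
```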

       --unquote
              Unquote file or member names (default).

       --verbatim-files-from
              Treat  each  line  obtained from a file list as a file name, even if it starts with a dash.  File lists are supplied with the --files-from
              (-T) option.  The default behavior is to handle names supplied in file lists as if they were typed in the command  line,  i.e.  any  names
              starting with a dash are treated as tar options.  The --verbatim-files-from option disables this behavior.

              This option affects all --files-from options that occur after it in the command line.  Its effect is reverted by the
              --no-verbatim-files-from option.

              This option is implied by the --null option.

              See also --add-file.

       -X, --exclude-from=FILE
              Exclude files matching patterns listed in FILE.

File name transformations

       --strip-components=NUMBER
              Strip NUMBER leading components from file names on extraction.

       --transform=EXPRESSION, --xform=EXPRESSION
              Use a sed-style replace EXPRESSION, e.g. s,OLD,NEW,, to transform file names.
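               For instance, a replace expression can prefix every member name at creation time (the "backup/" prefix is illustrative):

```shell
touch report.txt

# Members are stored as backup/report.txt instead of report.txt.
tar -cf report.tar --transform='s,^,backup/,' report.txt
tar -tf report.tar
```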

File name matching options

These options affect both exclude and include patterns.

       --anchored
              Patterns match file name start.

       --ignore-case
              Ignore case.

       --no-anchored
              Patterns match after any / (default for exclusion).

       --no-ignore-case
              Case sensitive matching (default).

       --no-wildcards
              Verbatim string matching.

       --no-wildcards-match-slash
              Wildcards do not match /.

       --wildcards
              Use wildcards (default for exclusion).

       --wildcards-match-slash
              Wildcards match / (default for exclusion).

Informative output

       --checkpoint[=N]
              Display progress messages every Nth record (default 10).

       --checkpoint-action=ACTION
              Run ACTION on each checkpoint.

       --clamp-mtime
              Only set time when the file is more recent than what was given with --mtime.

       --full-time
              Print file time to its full resolution.

       --index-file=FILE
              Send verbose output to FILE.

       -l, --check-links
              Print a message if not all links are dumped.

       --no-quote-chars=STRING
              Disable quoting for characters from STRING.

       --quote-chars=STRING
              Additionally quote characters from STRING.

       --quoting-style=STYLE
              Set quoting style for file and member names.  Valid values for STYLE are literal, shell, shell-always, c, c-maybe, escape, locale,
              clocale.

       -R, --block-number
              Show block number within archive with each message.

       --show-omitted-dirs
              When listing or extracting, list each directory that does not match search criteria.

       --show-transformed-names, --show-stored-names
              Show file or archive names after transformation by --strip and --transform options.

       --totals[=SIGNAL]
              Print  total  bytes  after  processing the archive.  If SIGNAL is given, print total bytes when this signal is delivered.  Allowed signals
              are: SIGHUP, SIGQUIT, SIGINT, SIGUSR1, and SIGUSR2.  The SIG prefix can be omitted.

       --utc  Print file modification times in UTC.

       -v, --verbose
              Verbosely list files processed.  Each instance of this option on the command line increases the verbosity level by one.  The maximum
              verbosity level is 3.  For a detailed discussion of how various verbosity levels affect tar's output, please refer to GNU Tar Manual,
              subsection 2.5.1 "The --verbose Option".

       --warning=KEYWORD
              Enable or disable warning messages identified by KEYWORD.  The messages are suppressed if KEYWORD is prefixed with no- and enabled
              otherwise.

              Multiple --warning messages accumulate.

              Keywords controlling general tar operation:

              all    Enable all warning messages.  This is the default.

              none   Disable all warning messages.

              filename-with-nuls
                     "%s: file name read contains nul character"

              alone-zero-block
                     "A lone zero block at %s"

              Keywords applicable for tar --create:

              cachedir
                     "%s: contains a cache directory tag %s; %s"

              file-shrank
                     "%s: File shrank by %s bytes; padding with zeros"

              xdev   "%s: file is on a different filesystem; not dumped"

              file-ignored
                     "%s: Unknown file type; file ignored"
                     "%s: socket ignored"
                     "%s: door ignored"

              file-unchanged
                     "%s: file is unchanged; not dumped"

              ignore-archive
                     "%s: file is the archive; not dumped"

              file-removed
                     "%s: File removed before we read it"

              file-changed
                     "%s: file changed as we read it"

              failed-read
                     Suppresses warnings about unreadable files or directories. This keyword applies only if used together with the --ignore-failed-read
                     option.

              Keywords applicable for tar --extract:

              existing-file
                     "%s: skipping existing file"

              timestamp
                     "%s: implausibly old time stamp %s"
                     "%s: time stamp %s is %s s in the future"

              contiguous-cast
                     "Extracting contiguous files as regular files"

              symlink-cast
                     "Attempting extraction of symbolic links as hard links"

              unknown-cast
                     "%s: Unknown file type '%c', extracted as normal file"

              ignore-newer
                     "Current %s is newer or same age"

              unknown-keyword
                     "Ignoring unknown extended header keyword '%s'"

              decompress-program
                     Controls verbose description of failures occurring when trying to run alternative decompressor programs.  This warning is  disabled
                     by default (unless --verbose is used).  A common example of what you can get when using this warning is:

                     $ tar --warning=decompress-program -x -f archive.Z
                     tar (child): cannot run compress: No such file or directory
                     tar (child): trying gzip

                     This means that tar first tried to decompress archive.Z using compress, and, when that failed, switched to gzip.

              record-size
                     "Record size = %lu blocks"

              Keywords controlling incremental extraction:

              rename-directory
                     "%s: Directory has been renamed from %s"
                     "%s: Directory has been renamed"

              new-directory
                     "%s: Directory is new"

              xdev   "%s: directory is on a different device: not purging"

              bad-dumpdir
                     "Malformed dumpdir: 'X' never used"

       -w, --interactive, --confirmation
              Ask for confirmation for every action.

Compatibility options

        -o     When creating, same as --old-archive.  When extracting, same as --no-same-owner.

Size suffixes

               Suffix    Units                   Byte Equivalent
               b         Blocks                  SIZE x 512
               B         Kilobytes               SIZE x 1024
               c         Bytes                   SIZE
               G         Gigabytes               SIZE x 1024^3
               K         Kilobytes               SIZE x 1024
               k         Kilobytes               SIZE x 1024
               M         Megabytes               SIZE x 1024^2
               P         Petabytes               SIZE x 1024^5
               T         Terabytes               SIZE x 1024^4
               w         Words                   SIZE x 2

RETURN VALUE

        Tar exit code indicates whether it was able to successfully perform the requested operation, and if not, what kind of error occurred.

        0      Successful termination.

        1      Some files differ.  If tar was invoked with the --compare (--diff, -d) command line option, this means that some files in the archive
               differ from their disk counterparts.  If tar was given one of the --create, --append or --update options, this exit code means that
               some files were changed while being archived and so the resulting archive does not contain the exact copy of the file set.

       2      Fatal error.  This means that some fatal, unrecoverable error occurred.

        If a subprocess that had been invoked by tar exited with a nonzero exit code, tar itself exits with that code as well.  This can happen, for
        example, if a compression option (e.g. -z) was used and the external compressor program failed.  Another example is rmt failure during backup
        to a remote device.

SEE ALSO

bzip2(1), compress(1), gzip(1), lzma(1), lzop(1), rmt(8), symlink(7), xz(1), zstd(1).

Complete tar manual: run info tar or use emacs(1) info mode to read it.

Online copies of GNU tar documentation in various formats can be found at:

http://www.gnu.org/software/tar/manual

BUG REPORTS

Report bugs to <bug-tar@gnu.org>.

COPYRIGHT

       Copyright © 2013-2019 Free Software Foundation, Inc.
       License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
       This is free software: you are free to change and redistribute it.  There is NO WARRANTY, to the extent permitted by law.

TAR                                                                   July 13, 2020                                                               TAR(1)

manpage awk/gawk

GAWK(1) Utility Commands GAWK(1)

NAME
gawk - pattern scanning and processing language

SYNOPSIS
gawk [ POSIX or GNU style options ] -f program-file [ -- ] file ...
gawk [ POSIX or GNU style options ] [ -- ] program-text file ...

DESCRIPTION
Gawk is the GNU Project’s implementation of the AWK programming language. It conforms to the definition of the language in the POSIX 1003.1 standard. This version in turn is based on the description in The AWK Programming Language, by Aho, Kernighan, and Weinberger. Gawk provides the additional features found in the current version of Brian Kernighan’s awk and numerous GNU-specific extensions.

The command line consists of options to gawk itself, the AWK program text (if not supplied via the -f or --include options), and values to be made available in the ARGC and ARGV pre-defined AWK variables.

When gawk is invoked with the --profile option, it starts gathering profiling statistics from the execution of the program. Gawk runs more slowly in this mode, and automatically produces an execution profile in the file awkprof.out when done. See the --profile option, below.

Gawk also has an integrated debugger. An interactive debugging session can be started by supplying the --debug option to the command line. In this mode of execution, gawk loads the AWK source code and then prompts for debugging commands. Gawk can only debug AWK program source provided with the -f and --include options. The debugger is documented in GAWK: Effective AWK Programming.

OPTION FORMAT
Gawk options may be either traditional POSIX-style one letter options, or GNU-style long options. POSIX options start with a single "-", while long options start with "--". Long options are provided for both GNU-specific features and for POSIX-mandated features.

Gawk-specific options are typically used in long-option form. Arguments to long options are either joined with the option by an = sign, with no intervening spaces, or they may be provided in the next command line argument. Long options may be abbreviated, as long as the abbreviation remains unique.

Additionally, every long option has a corresponding short option, so that the option’s functionality may be used from within #! executable scripts.

OPTIONS
Gawk accepts the following options. Standard options are listed first, followed by options for gawk extensions, listed alphabetically by short option.

-f program-file
--file program-file
Read the AWK program source from the file program-file, instead of from the first command line argument. Multiple -f (or –file) options may be used. Files read with -f are treated as if they begin with an implicit @namespace “awk” statement.

-F fs
--field-separator fs
Use fs for the input field separator (the value of the FS predefined variable).

-v var=val
--assign var=val
Assign the value val to the variable var, before execution of the program begins. Such variable values are available to the BEGIN rule of an AWK program.
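A minimal illustration of a -v assignment being visible in the BEGIN rule (the variable name is arbitrary):

```shell
# greeting is already set when the BEGIN rule runs.
gawk -v greeting='hello' 'BEGIN { print greeting, "world" }'
# prints: hello world
```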

-b
--characters-as-bytes
Treat all input data as single-byte characters. In other words, don’t pay any attention to the locale information when attempting to process strings as multibyte characters. The --posix option overrides this one.

-c
--traditional
Run in compatibility mode. In compatibility mode, gawk behaves identically to Brian Kernighan’s awk; none of the GNU-specific extensions are recognized. See GNU EXTENSIONS, below, for more information.

-C
--copyright
Print the short version of the GNU copyright information message on the standard output and exit successfully.

-d[file]
--dump-variables[=file]
Print a sorted list of global variables, their types and final values to file. If no file is provided, gawk uses a file named awkvars.out in the current directory.
Having a list of all the global variables is a good way to look for typographical errors in your programs. You would also use this option if you have a large program with a lot of functions, and you want to be sure that your functions don’t inadvertently use global variables that you meant to be local. (This is a particularly easy mistake to make with simple variable names like i, j, and so on.)

-D[file]
--debug[=file]
Enable debugging of AWK programs. By default, the debugger reads commands interactively from the keyboard (standard input). The optional file argument specifies a file with a list of commands for the debugger to execute non-interactively.

-e program-text
--source program-text
Use program-text as AWK program source code. This option allows the easy intermixing of library functions (used via the -f and --include options) with source code entered on the command line. It is intended primarily for medium to large AWK programs used in shell scripts. Each argument supplied via -e is treated as if it begins with an implicit @namespace "awk" statement.

-E file
--exec file
Similar to -f, however, this option is the last one processed. This should be used with #! scripts, particularly for CGI applications, to avoid passing in options or source code (!) on the command line from a URL. This option disables command-line variable assignments.

-g
--gen-pot
Scan and parse the AWK program, and generate a GNU .pot (Portable Object Template) format file on standard output with entries for all localizable strings in the program. The program itself is not executed. See the GNU gettext distribution for more information on .pot files.

-h
--help Print a relatively short summary of the available options on the standard output. (Per the GNU Coding Standards, these options cause an immediate, successful exit.)

-i include-file
--include include-file
Load an awk source library. This searches for the library using the AWKPATH environment variable. If the initial search fails, another attempt will be made after appending the .awk suffix. The file will be loaded only once (i.e., duplicates are eliminated), and the code does not constitute the main program source. Files read with --include are treated as if they begin with an implicit @namespace "awk" statement.

-l lib
--load lib
Load a gawk extension from the shared library lib. This searches for the library using the AWKLIBPATH environment variable. If the initial search fails, another attempt will be made after appending the default shared library suffix for the platform. The library initialization routine is expected to be named dl_load().

-L [value]
--lint[=value]
Provide warnings about constructs that are dubious or non-portable to other AWK implementations. With an optional argument of fatal, lint warnings become fatal errors. This may be drastic, but its use will certainly encourage the development of cleaner AWK programs. With an optional argument of invalid, only warnings about things that are actually invalid are issued. (This is not fully implemented yet.) With an optional argument of no-ext, warnings about gawk extensions are disabled.

-M
--bignum
Force arbitrary precision arithmetic on numbers. This option has no effect if gawk is not compiled to use the GNU MPFR and GMP libraries. (In such a case, gawk issues a warning.)

-n
--non-decimal-data
Recognize octal and hexadecimal values in input data. Use this option with great caution!

-N
--use-lc-numeric
Force gawk to use the locale’s decimal point character when parsing input data. Although the POSIX standard requires this behavior, and gawk does so when --posix is in effect, the default is to follow traditional behavior and use a period as the decimal point, even in locales where the period is not the decimal point character. This option overrides the default behavior, without the full draconian strictness of the --posix option.

-o[file]
--pretty-print[=file]
Output a pretty printed version of the program to file. If no file is provided, gawk uses a file named awkprof.out in the current directory. This option implies --no-optimize.

-O
--optimize
Enable gawk’s default optimizations upon the internal representation of the program. Currently, this just includes simple constant folding. This option is on by default.

-p[prof-file]
--profile[=prof-file]
Start a profiling session, and send the profiling data to prof-file. The default is awkprof.out. The profile contains execution counts of each statement in the program in the left margin and function call counts for each user-defined function. This option implies --no-optimize.

-P
--posix
This turns on compatibility mode, with the following additional restrictions:

• \x escape sequences are not recognized.

• You cannot continue lines after ? and :.

• The synonym func for the keyword function is not recognized.

• The operators ** and **= cannot be used in place of ^ and ^=.

-r
--re-interval
Enable the use of interval expressions in regular expression matching (see Regular Expressions, below). Interval expressions were not traditionally available in the AWK language. The POSIX standard added them, to make awk and egrep consistent with each other. They are enabled by default, but this option remains for use together with --traditional.

-s
--no-optimize
Disable gawk’s default optimizations upon the internal representation of the program.

-S
--sandbox
Run gawk in sandbox mode, disabling the system() function, input redirection with getline, output redirection with print and printf, and loading dynamic extensions. Command execution (through pipelines) is also disabled. This effectively blocks a script from accessing local resources, except for the files specified on the command line.

-t
--lint-old
Provide warnings about constructs that are not portable to the original version of UNIX awk.

-V
--version
Print version information for this particular copy of gawk on the standard output. This is useful mainly for knowing if the current copy of gawk on your system is up to date with respect to whatever the Free Software Foundation is distributing. This is also useful when reporting bugs. (Per the GNU Coding Standards, these options cause an immediate, successful exit.)

-- Signal the end of options. This is useful to allow further arguments to the AWK program itself to start with a “-”. This provides consistency with the argument parsing convention used by most other POSIX programs.

In compatibility mode, any other options are flagged as invalid, but are otherwise ignored. In normal operation, as long as program text has been supplied, unknown options are passed on to the AWK program in the ARGV array for processing. This is particularly useful for running AWK programs via the #! executable interpreter mechanism.

For POSIX compatibility, the -W option may be used, followed by the name of a long option.

AWK PROGRAM EXECUTION
An AWK program consists of a sequence of optional directives, pattern-action statements, and optional function definitions.

@include “filename”
@load “filename”
@namespace “name”
pattern { action statements }
function name(parameter list) { statements }

Gawk first reads the program source from the program-file(s) if specified, from arguments to --source, or from the first non-option argument on the command line. The -f and --source options may be used multiple times on the command line. Gawk reads the program text as if all the program-files and command line source texts had been concatenated together. This is useful for building libraries of AWK functions, without having to include them in each new AWK program that uses them. It also provides the ability to mix library functions with command line programs.

In addition, lines beginning with @include may be used to include other source files into your program, making library use even easier. This is equivalent to using the –include option.

Lines beginning with @load may be used to load extension functions into your program. This is equivalent to using the --load option.

The environment variable AWKPATH specifies a search path to use when finding source files named with the -f and --include options. If this variable does not exist, the default path is “.:/usr/local/share/awk”. (The actual directory may vary, depending upon how gawk was built and installed.) If a file name given to the -f option contains a “/” character, no path search is performed.

The environment variable AWKLIBPATH specifies a search path to use when finding source files named with the --load option. If this variable does not exist, the default path is “/usr/local/lib/gawk”. (The actual directory may vary, depending upon how gawk was built and installed.)

Gawk executes AWK programs in the following order. First, all variable assignments specified via the -v option are performed. Next, gawk compiles the program into an internal form. Then, gawk executes the code in the BEGIN rule(s) (if any), and then proceeds to read each file named in the ARGV array (up to ARGV[ARGC-1]). If there are no files named on the command line, gawk reads the standard input.

If a filename on the command line has the form var=val it is treated as a variable assignment. The variable var will be assigned the value val. (This happens after any BEGIN rule(s) have been run.) Command line variable assignment is most useful for dynamically assigning values to the variables AWK uses to control how input is broken into fields and records. It is also useful for controlling state if
multiple passes are needed over a single data file.
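For example, a command-line assignment placed before a file name lets one program select a column at run time (the file name and variable f below are invented for illustration):

```shell
# Hypothetical data file, created only for this example.
printf '1 2\n3 4\n' > /tmp/awk-demo.txt

# f=1 is assigned after the BEGIN rule runs, before the file is read.
awk '{ sum += $f } END { print sum }' f=1 /tmp/awk-demo.txt   # prints 4
awk '{ sum += $f } END { print sum }' f=2 /tmp/awk-demo.txt   # prints 6
```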

If the value of a particular element of ARGV is empty (“”), gawk skips over it.

For each input file, if a BEGINFILE rule exists, gawk executes the associated code before processing the contents of the file. Similarly, gawk executes the code associated with ENDFILE after processing the file.

For each record in the input, gawk tests to see if it matches any pattern in the AWK program. For each pattern that the record matches, gawk executes the associated action. The patterns are tested in the order they occur in the program.

Finally, after all the input is exhausted, gawk executes the code in the END rule(s) (if any).
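The execution order above can be sketched with a minimal program:

```shell
printf 'one\ntwo\n' | awk '
    BEGIN { print "before any input" }
          { print "record:", $0 }
    END   { print "after all input" }'
```

The BEGIN rule runs before any input is read, the middle rule runs once per record, and the END rule runs after input is exhausted.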

Command Line Directories
According to POSIX, files named on the awk command line must be text files. The behavior is “undefined” if they are not. Most versions of awk treat a directory on the command line as a fatal error.

Starting with version 4.0 of gawk, a directory on the command line produces a warning, but is otherwise skipped. If either of the --posix or --traditional options is given, then gawk reverts to treating directories on the command line as a fatal error.

VARIABLES, RECORDS AND FIELDS
AWK variables are dynamic; they come into existence when they are first used. Their values are either floating-point numbers or strings, or both, depending upon how they are used. Additionally, gawk allows variables to have regular-expression type. AWK also has one dimensional arrays; arrays with multiple dimensions may be simulated. Gawk provides true arrays of arrays; see Arrays, below. Several predefined variables are set as a program runs; these are described as needed and summarized below.

Records
Normally, records are separated by newline characters. You can control how records are separated by assigning values to the built-in variable RS. If RS is any single character, that character separates records. Otherwise, RS is a regular expression. Text in the input that matches this regular expression separates the record. However, in compatibility mode, only the first character of its string value
is used for separating records. If RS is set to the null string, then records are separated by empty lines. When RS is set to the null string, the newline character always acts as a field separator, in addition to whatever value FS may have.
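The single-character and null-string cases can be sketched as follows (the input text is invented):

```shell
# A single-character RS splits records on that character.
printf 'a:b:c' | awk 'BEGIN { RS = ":" } { print NR, $0 }'
# prints: 1 a, then 2 b, then 3 c

# RS = "": paragraph mode. Blank lines separate records, and
# newlines separate fields in addition to FS.
printf 'alpha beta\ngamma\n\ndelta\n' |
awk 'BEGIN { RS = "" } { print NR ": " NF " fields, last = " $NF }'
# record 1 has three fields (alpha, beta, gamma); record 2 has one
```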

Fields
As each input record is read, gawk splits the record into fields, using the value of the FS variable as the field separator. If FS is a single character, fields are separated by that character. If FS is the null string, then each individual character becomes a separate field. Otherwise, FS is expected to be a full regular expression. In the special case that FS is a single space, fields are separated
by runs of spaces and/or tabs and/or newlines. NOTE: The value of IGNORECASE (see below) also affects how fields are split when FS is a regular expression, and how records are separated when RS is a regular expression.
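A quick sketch of the two common cases (the sample input is invented):

```shell
# Single-character FS: fields are split on each ":".
echo 'root:x:0:0:root:/root:/bin/sh' | awk -F: '{ print $1, NF }'   # root 7

# Default FS (a single space): leading blanks are skipped and
# fields are separated by runs of spaces and tabs.
printf '  a \t b \n' | awk '{ print NF, $1, $2 }'                   # 2 a b
```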

If the FIELDWIDTHS variable is set to a space-separated list of numbers, each field is expected to have fixed width, and gawk splits up the record using the specified widths. Each field width may optionally be preceded by a colon-separated value specifying the number of characters to skip before the field starts. The value of FS is ignored. Assigning a new value to FS or FPAT overrides the use of FIELDWIDTHS.

Similarly, if the FPAT variable is set to a string representing a regular expression, each field is made up of text that matches that regular expression. In this case, the regular expression describes the fields themselves, instead of the text that separates the fields. Assigning a new value to FS or FIELDWIDTHS overrides the use of FPAT.

Each field in the input record may be referenced by its position: $1, $2, and so on. $0 is the whole record, including leading and trailing whitespace. Fields need not be referenced by constants:

n = 5
print $n

prints the fifth field in the input record.

The variable NF is set to the total number of fields in the input record.

References to non-existent fields (i.e., fields after $NF) produce the null string. However, assigning to a non-existent field (e.g., $(NF+2) = 5) increases the value of NF, creates any intervening fields with the null string as their values, and causes the value of $0 to be recomputed, with the fields being separated by the value of OFS. References to negative numbered fields cause a fatal error.
Decrementing NF causes the values of fields past the new value to be lost, and the value of $0 to be recomputed, with the fields being separated by the value of OFS.
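Both effects can be observed directly:

```shell
# Assigning past $NF creates empty intervening fields and raises NF.
echo 'a b' | awk '{ $4 = "x"; print NF }'      # prints 4

# Decrementing NF truncates the record; $0 is rebuilt with OFS.
echo 'a b c d' | awk '{ NF = 2; print $0 }'    # prints: a b
```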

Assigning a value to an existing field causes the whole record to be rebuilt when $0 is referenced. Similarly, assigning a value to $0 causes the record to be resplit, creating new values for the fields.
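A common idiom exploits the rebuild: assigning a field to itself forces $0 to be rejoined with OFS.

```shell
# Touching any field rebuilds $0 with OFS as the separator.
echo 'a b c' | awk 'BEGIN { OFS = "-" } { $1 = $1; print }'   # a-b-c

# Assigning to $0 re-splits the record into new fields.
echo 'x' | awk '{ $0 = "p q r"; print NF, $2 }'               # 3 q
```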

Built-in Variables
Gawk’s built-in variables are:

ARGC The number of command line arguments (does not include options to gawk, or the program source).

ARGIND The index in ARGV of the current file being processed.

ARGV Array of command line arguments. The array is indexed from 0 to ARGC - 1. Dynamically changing the contents of ARGV can control the files used for data.

BINMODE On non-POSIX systems, specifies use of “binary” mode for all file I/O. Numeric values of 1, 2, or 3, specify that input files, output files, or all files, respectively, should use binary I/O. String values of “r”, or “w” specify that input files, or output files, respectively, should use binary I/O. String values of “rw” or “wr” specify that all files should use binary I/O. Any other string
value is treated as “rw”, but generates a warning message.

CONVFMT The conversion format for numbers, “%.6g”, by default.

ENVIRON An array containing the values of the current environment. The array is indexed by the environment variables, each element being the value of that variable (e.g., ENVIRON[“HOME”] might be “/home/arnold”).

In POSIX mode, changing this array does not affect the environment seen by programs which gawk spawns via redirection or the system() function. Otherwise, gawk updates its real environment so that programs it spawns see the changes.
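A sketch of reading ENVIRON (the variable name GREETING is invented for this example):

```shell
GREETING=hello awk 'BEGIN { print ENVIRON["GREETING"] }'   # prints hello
```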

ERRNO If a system error occurs either doing a redirection for getline, during a read for getline, or during a close(), then ERRNO is set to a string describing the error. The value is subject to translation in non-English locales. If the string in ERRNO corresponds to a system error in the errno(3) variable, then the numeric value can be found in PROCINFO[“errno”]. For non-system errors,
PROCINFO[“errno”] will be zero.

FIELDWIDTHS A whitespace-separated list of field widths. When set, gawk parses the input into fields of fixed width, instead of using the value of the FS variable as the field separator. Each field width may optionally be preceded by a colon-separated value specifying the number of characters to skip before the field starts. See Fields, above.

FILENAME The name of the current input file. If no files are specified on the command line, the value of FILENAME is “-”. However, FILENAME is undefined inside the BEGIN rule (unless set by getline).

FNR The input record number in the current input file.

FPAT A regular expression describing the contents of the fields in a record. When set, gawk parses the input into fields, where the fields match the regular expression, instead of using the value of FS as the field separator. See Fields, above.

FS The input field separator, a space by default. See Fields, above.

FUNCTAB An array whose indices and corresponding values are the names of all the user-defined or extension functions in the program. NOTE: You may not use the delete statement with the FUNCTAB array.

IGNORECASE Controls the case-sensitivity of all regular expression and string operations. If IGNORECASE has a non-zero value, then string comparisons and pattern matching in rules, field splitting with FS and FPAT, record separating with RS, regular expression matching with ~ and !~, and the gensub(), gsub(), index(), match(), patsplit(), split(), and sub() built-in functions all ignore case when doing
regular expression operations. NOTE: Array subscripting is not affected. However, the asort() and asorti() functions are affected.
Thus, if IGNORECASE is not equal to zero, /aB/ matches all of the strings “ab”, “aB”, “Ab”, and “AB”. As with all AWK variables, the initial value of IGNORECASE is zero, so all regular expression and string operations are normally case-sensitive.

LINT Provides dynamic control of the --lint option from within an AWK program. When true, gawk prints lint warnings. When false, it does not. The values allowed for the --lint option may also be assigned to LINT, with the same effects. Any other true value just prints warnings.

NF The number of fields in the current input record.

NR The total number of input records seen so far.

OFMT The output format for numbers, “%.6g”, by default.

OFS The output field separator, a space by default.

ORS The output record separator, by default a newline.

PREC The working precision of arbitrary precision floating-point numbers, 53 by default.

PROCINFO The elements of this array provide access to information about the running AWK program. On some systems, there may be elements in the array, “group1” through “groupn” for some n, which is the number of supplementary groups that the process has. Use the in operator to test for these elements. The following elements are guaranteed to be available:

PROCINFO[“argv”] The command line arguments as received by gawk at the C-language level. The subscripts start from zero.

PROCINFO[“egid”] The value of the getegid(2) system call.

PROCINFO[“errno”] The value of errno(3) when ERRNO is set to the associated error message.

PROCINFO[“euid”] The value of the geteuid(2) system call.

PROCINFO[“FS”] “FS” if field splitting with FS is in effect, “FPAT” if field splitting with FPAT is in effect, “FIELDWIDTHS” if field splitting with FIELDWIDTHS is in effect, or “API” if API input parser field splitting is in effect.

PROCINFO[“gid”] The value of the getgid(2) system call.

PROCINFO[“identifiers”]
A subarray, indexed by the names of all identifiers used in the text of the AWK program. The values indicate what gawk knows about the identifiers after it has finished parsing the program; they are not updated while the program runs. For each identifier, the value of the element is one of the following:

“array” The identifier is an array.

“builtin” The identifier is a built-in function.

“extension” The identifier is an extension function loaded via @load or --load.

“scalar” The identifier is a scalar.

“untyped” The identifier is untyped (could be used as a scalar or array, gawk doesn’t know yet).

“user” The identifier is a user-defined function.

PROCINFO[“pgrpid”] The value of the getpgrp(2) system call.

PROCINFO[“pid”] The value of the getpid(2) system call.

PROCINFO[“platform”] A string indicating the platform for which gawk was compiled. It is one of:

“djgpp”, “mingw”
Microsoft Windows, using either DJGPP, or MinGW, respectively.

“os2” OS/2.

“posix”
GNU/Linux, Cygwin, Mac OS X, and legacy Unix systems.

“vms” OpenVMS or Vax/VMS.

PROCINFO[“ppid”] The value of the getppid(2) system call.

PROCINFO[“strftime”] The default time format string for strftime(). Changing its value affects how strftime() formats time values when called with no arguments.

PROCINFO[“uid”] The value of the getuid(2) system call.

PROCINFO[“version”] The version of gawk.

The following elements are present if loading dynamic extensions is available:

PROCINFO[“api_major”]
The major version of the extension API.

PROCINFO[“api_minor”]
The minor version of the extension API.

The following elements are available if MPFR support is compiled into gawk:

PROCINFO[“gmp_version”]
The version of the GNU GMP library used for arbitrary precision number support in gawk.

PROCINFO[“mpfr_version”]
The version of the GNU MPFR library used for arbitrary precision number support in gawk.

PROCINFO[“prec_max”]
The maximum precision supported by the GNU MPFR library for arbitrary precision floating-point numbers.

PROCINFO[“prec_min”]
The minimum precision allowed by the GNU MPFR library for arbitrary precision floating-point numbers.

The following elements may be set by a program to change gawk’s behavior:

PROCINFO[“NONFATAL”]
If this exists, then I/O errors for all redirections become nonfatal.

PROCINFO[“name”, “NONFATAL”]
Make I/O errors for name be nonfatal.

PROCINFO[“command”, “pty”]
Use a pseudo-tty for two-way communication with command instead of setting up two one-way pipes.

PROCINFO[“input”, “READ_TIMEOUT”]
The timeout in milliseconds for reading data from input, where input is a redirection string or a filename. A value of zero or less than zero means no timeout.

PROCINFO[“input”, “RETRY”]
If an I/O error that may be retried occurs when reading data from input, and this array entry exists, then getline returns -2 instead of following the default behavior of returning -1 and configuring input to return no further data. An I/O error that may be retried is one where errno(3) has the value EAGAIN, EWOULDBLOCK, EINTR, or ETIMEDOUT. This may be useful in conjunction with
PROCINFO[“input”, “READ_TIMEOUT”] or in situations where a file descriptor has been configured to behave in a non-blocking fashion.

PROCINFO[“sorted_in”]
If this element exists in PROCINFO, then its value controls the order in which array elements are traversed in for loops. Supported values are “@ind_str_asc”, “@ind_num_asc”, “@val_type_asc”, “@val_str_asc”, “@val_num_asc”, “@ind_str_desc”, “@ind_num_desc”, “@val_type_desc”, “@val_str_desc”, “@val_num_desc”, and “@unsorted”. The value can also be the name (as a string) of any comparison function defined as follows:

function cmp_func(i1, v1, i2, v2)

where i1 and i2 are the indices, and v1 and v2 are the corresponding values of the two elements being compared. It should return a number less than, equal to, or greater than 0, depending on how the elements of the array are to be ordered.

ROUNDMODE The rounding mode to use for arbitrary precision arithmetic on numbers, by default “N” (IEEE-754 roundTiesToEven mode). The accepted values are:

“A” or “a”
for rounding away from zero. These are only available if your version of the GNU MPFR library supports rounding away from zero.

“D” or “d” for roundTowardNegative.

“N” or “n” for roundTiesToEven.

“U” or “u” for roundTowardPositive.

“Z” or “z” for roundTowardZero.

RS The input record separator, by default a newline.

RT The record terminator. Gawk sets RT to the input text that matched the character or regular expression specified by RS.

RSTART The index of the first character matched by match(); 0 if no match. (This implies that character indices start at one.)

RLENGTH The length of the string matched by match(); -1 if no match.

SUBSEP The string used to separate multiple subscripts in array elements, by default “\034”.

SYMTAB An array whose indices are the names of all currently defined global variables and arrays in the program. The array may be used for indirect access to read or write the value of a variable:

foo = 5
SYMTAB[“foo”] = 4
print foo # prints 4

The typeof() function may be used to test if an element in SYMTAB is an array. You may not use the delete statement with the SYMTAB array, nor assign to elements with an index that is not a variable name.

TEXTDOMAIN The text domain of the AWK program; used to find the localized translations for the program’s strings.

Arrays
Arrays are subscripted with an expression between square brackets ([ and ]). If the expression is an expression list (expr, expr …) then the array subscript is a string consisting of the concatenation of the (string) value of each expression, separated by the value of the SUBSEP variable. This facility is used to simulate multiply dimensioned arrays. For example:

i = “A”; j = “B”; k = “C”
x[i, j, k] = “hello, world\n”

assigns the string “hello, world\n” to the element of the array x which is indexed by the string “A\034B\034C”. All arrays in AWK are associative, i.e., indexed by string values.

The special operator in may be used to test if an array has an index consisting of a particular value:

if (val in array)
print array[val]

If the array has multiple subscripts, use (i, j) in array.

The in construct may also be used in a for loop to iterate over all the elements of an array. However, the (i, j) in array construct only works in tests, not in for loops.

An element may be deleted from an array using the delete statement. The delete statement may also be used to delete the entire contents of an array, just by specifying the array name without a subscript.
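Both forms of delete, together with the in test, can be sketched as:

```shell
awk 'BEGIN {
    a["x"] = 1; a["y"] = 2
    delete a["x"]                    # delete one element
    print ("x" in a), ("y" in a)     # prints: 0 1
    delete a                         # delete the entire array
    n = 0
    for (k in a) n++
    print n                          # prints: 0
}'
```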

gawk supports true multidimensional arrays. It does not require that such arrays be “rectangular” as in C or C++. For example:

a[1] = 5
a[2][1] = 6
a[2][2] = 7

NOTE: You may need to tell gawk that an array element is really a subarray in order to use it where gawk expects an array (such as in the second argument to split()). You can do this by creating an element in the subarray and then deleting it with the delete statement.

Namespaces
Gawk provides a simple namespace facility to help work around the fact that all variables in AWK are global.

A qualified name consists of two simple identifiers joined by a double colon (::). The left-hand identifier represents the namespace and the right-hand identifier is the variable within it. All simple (non-qualified) names are considered to be in the “current” namespace; the default namespace is awk. However, simple identifiers consisting solely of uppercase letters are forced into the awk namespace, even if the current namespace is different.

You change the current namespace with an @namespace “name” directive.

The standard predefined builtin function names may not be used as namespace names. The names of additional functions provided by gawk may be used as namespace names or as simple identifiers in other namespaces. For more details, see GAWK: Effective AWK Programming.

Variable Typing And Conversion
Variables and fields may be (floating point) numbers, or strings, or both. They may also be regular expressions. How the value of a variable is interpreted depends upon its context. If used in a numeric expression, it will be treated as a number; if used as a string it will be treated as a string.

To force a variable to be treated as a number, add zero to it; to force it to be treated as a string, concatenate it with the null string.

Uninitialized variables have the numeric value zero and the string value “” (the null, or empty, string).

When a string must be converted to a number, the conversion is accomplished using strtod(3). A number is converted to a string by using the value of CONVFMT as a format string for sprintf(3), with the numeric value of the variable as the argument. However, even though all numbers in AWK are floating-point, integral values are always converted as integers. Thus, given

CONVFMT = “%2.2f”
a = 12
b = a “”

the variable b has a string value of “12” and not “12.00”.

NOTE: When operating in POSIX mode (such as with the --posix option), beware that locale settings may interfere with the way decimal numbers are treated: the decimal separator of the numbers you are feeding to gawk must conform to what your locale would expect, be it a comma (,) or a period (.).

Gawk performs comparisons as follows: If two variables are numeric, they are compared numerically. If one value is numeric and the other has a string value that is a “numeric string,” then comparisons are also done numerically. Otherwise, the numeric value is converted to a string and a string comparison is performed. Two strings are compared, of course, as strings.

Note that string constants, such as “57”, are not numeric strings, they are string constants. The idea of “numeric string” only applies to fields, getline input, FILENAME, ARGV elements, ENVIRON elements and the elements of an array created by split() or patsplit() that are numeric strings. The basic idea is that user input, and only user input, that looks numeric, should be treated that way.
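The distinction can be sketched with a field (user input) versus a string constant:

```shell
# $1 looks numeric, so it is a numeric string: compared numerically.
echo '010' | awk '{ print ($1 == 10) }'    # prints 1

# A string constant is not a numeric string: 10 is converted to the
# string "10" and compared as text against "010".
awk 'BEGIN { print ("010" == 10) }'        # prints 0
```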

Octal and Hexadecimal Constants
You may use C-style octal and hexadecimal constants in your AWK program source code. For example, the octal value 011 is equal to decimal 9, and the hexadecimal value 0x11 is equal to decimal 17.

String Constants
String constants in AWK are sequences of characters enclosed between double quotes (like “value”). Within strings, certain escape sequences are recognized, as in C. These are:

\\ A literal backslash.

\a The “alert” character; usually the ASCII BEL character.

\b Backspace.

\f Form-feed.

\n Newline.

\r Carriage return.

\t Horizontal tab.

\v Vertical tab.

\xhex digits
The character represented by the string of hexadecimal digits following the \x. Up to two following hexadecimal digits are considered part of the escape sequence. E.g., “\x1B” is the ASCII ESC (escape) character.

\ddd The character represented by the 1-, 2-, or 3-digit sequence of octal digits. E.g., “\033” is the ASCII ESC (escape) character.

\c The literal character c.

In compatibility mode, the characters represented by octal and hexadecimal escape sequences are treated literally when used in regular expression constants. Thus, /a\52b/ is equivalent to /a\*b/.

Regexp Constants
A regular expression constant is a sequence of characters enclosed between forward slashes (like /value/). Regular expression matching is described more fully below; see Regular Expressions.

The escape sequences described earlier may also be used inside constant regular expressions (e.g., /[ \t\f\n\r\v]/ matches whitespace characters).

Gawk provides strongly typed regular expression constants. These are written with a leading @ symbol (like so: @/value/). Such constants may be assigned to scalars (variables, array elements) and passed to user-defined functions. Variables that have been so assigned have regular expression type.

PATTERNS AND ACTIONS
AWK is a line-oriented language. The pattern comes first, and then the action. Action statements are enclosed in { and }. Either the pattern may be missing, or the action may be missing, but, of course, not both. If the pattern is missing, the action executes for every single record of input. A missing action is equivalent to

{ print }

which prints the entire record.
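Both degenerate forms are common one-liners (the sample input is invented):

```shell
# Pattern with no action: the default action { print } prints matches.
printf 'cat\ndog\ncow\n' | awk '/c/'
# prints: cat, then cow

# Action with no pattern: runs for every input record.
printf '1\n2\n' | awk '{ print "got", $0 }'
# prints: got 1, then got 2
```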

Comments begin with the # character, and continue until the end of the line. Empty lines may be used to separate statements. Normally, a statement ends with a newline, however, this is not the case for lines ending in a comma, {, ?, :, &&, or ||. Lines ending in do or else also have their statements automatically continued on the following line. In other cases, a line can be continued by ending it with
a “\”, in which case the newline is ignored. However, a “\” after a # is not special.

Multiple statements may be put on one line by separating them with a “;”. This applies to both the statements within the action part of a pattern-action pair (the usual case), and to the pattern-action statements themselves.

Patterns
AWK patterns may be one of the following:

BEGIN
END
BEGINFILE
ENDFILE
/regular expression/
relational expression
pattern && pattern
pattern || pattern
pattern ? pattern : pattern
(pattern)
! pattern
pattern1, pattern2

BEGIN and END are two special kinds of patterns which are not tested against the input. The action parts of all BEGIN patterns are merged as if all the statements had been written in a single BEGIN rule. They are executed before any of the input is read. Similarly, all the END rules are merged, and executed when all the input is exhausted (or when an exit statement is executed). BEGIN and END patterns
cannot be combined with other patterns in pattern expressions. BEGIN and END patterns cannot have missing action parts.

BEGINFILE and ENDFILE are additional special patterns whose actions are executed before reading the first record of each command-line input file and after reading the last record of each file. Inside the BEGINFILE rule, the value of ERRNO is the empty string if the file was opened successfully. Otherwise, there is some problem with the file and the code should use nextfile to skip it. If that is not
done, gawk produces its usual fatal error for files that cannot be opened.

For /regular expression/ patterns, the associated statement is executed for each input record that matches the regular expression. Regular expressions are the same as those in egrep(1), and are summarized below.

A relational expression may use any of the operators defined below in the section on actions. These generally test whether certain fields match certain regular expressions.

The &&, ||, and ! operators are logical AND, logical OR, and logical NOT, respectively, as in C. They do short-circuit evaluation, also as in C, and are used for combining more primitive pattern expressions. As in most languages, parentheses may be used to change the order of evaluation.

The ?: operator is like the same operator in C. If the first pattern is true then the pattern used for testing is the second pattern, otherwise it is the third. Only one of the second and third patterns is evaluated.

The pattern1, pattern2 form of an expression is called a range pattern. It matches all input records starting with a record that matches pattern1, and continuing until a record that matches pattern2, inclusive. It does not combine with any other sort of pattern expression.

Regular Expressions
Regular expressions are the extended kind found in egrep. They are composed of characters as follows:

c Matches the non-metacharacter c.

\c Matches the literal character c.

. Matches any character including newline.

^ Matches the beginning of a string.

$ Matches the end of a string.

[abc…] A character list: matches any of the characters abc…. You may include a range of characters by separating them with a dash. To include a literal dash in the list, put it first or last.

[^abc…] A negated character list: matches any character except abc….

r1|r2 Alternation: matches either r1 or r2.

r1r2 Concatenation: matches r1, and then r2.

r+ Matches one or more r’s.

r* Matches zero or more r’s.

r? Matches zero or one r’s.

(r) Grouping: matches r.

r{n}
r{n,}
r{n,m} One or two numbers inside braces denote an interval expression. If there is one number in the braces, the preceding regular expression r is repeated n times. If there are two numbers separated by a comma, r is repeated n to m times. If there is one number followed by a comma, then r is repeated at least n times.

\y Matches the empty string at either the beginning or the end of a word.

\B Matches the empty string within a word.

\< Matches the empty string at the beginning of a word.

\> Matches the empty string at the end of a word.

\s Matches any whitespace character.

\S Matches any nonwhitespace character.

\w Matches any word-constituent character (letter, digit, or underscore).

\W Matches any character that is not word-constituent.

\` Matches the empty string at the beginning of a buffer (string).

\' Matches the empty string at the end of a buffer.

The escape sequences that are valid in string constants (see String Constants) are also valid in regular expressions.

Character classes are a feature introduced in the POSIX standard. A character class is a special notation for describing lists of characters that have a specific attribute, but where the actual characters themselves can vary from country to country and/or from character set to character set. For example, the notion of what is an alphabetic character differs in the USA and in France.

A character class is only valid in a regular expression inside the brackets of a character list. Character classes consist of [:, a keyword denoting the class, and :]. The character classes defined by the POSIX standard are:

[:alnum:] Alphanumeric characters.

[:alpha:] Alphabetic characters.

[:blank:] Space or tab characters.

[:cntrl:] Control characters.

[:digit:] Numeric characters.

[:graph:] Characters that are both printable and visible. (A space is printable, but not visible, while an a is both.)

[:lower:] Lowercase alphabetic characters.

[:print:] Printable characters (characters that are not control characters.)

[:punct:] Punctuation characters (characters that are not letters, digits, control characters, or space characters).

[:space:] Space characters (such as space, tab, and formfeed, to name a few).

[:upper:] Uppercase alphabetic characters.

[:xdigit:] Characters that are hexadecimal digits.

For example, before the POSIX standard, to match alphanumeric characters, you would have had to write /[A-Za-z0-9]/. If your character set had other alphabetic characters in it, this would not match them, and if your character set collated differently from ASCII, this might not even match the ASCII alphanumeric characters. With the POSIX character classes, you can write /[[:alnum:]]/, and this matches
the alphabetic and numeric characters in your character set, no matter what it is.

Two additional special sequences can appear in character lists. These apply to non-ASCII character sets, which can have single symbols (called collating elements) that are represented with more than one character, as well as several characters that are equivalent for collating, or sorting, purposes. (E.g., in French, a plain "e" and a grave-accented "è" are equivalent.)

Collating Symbols
A collating symbol is a multi-character collating element enclosed in [. and .]. For example, if ch is a collating element, then [[.ch.]] is a regular expression that matches this collating element, while [ch] is a regular expression that matches either c or h.

Equivalence Classes
An equivalence class is a locale-specific name for a list of characters that are equivalent. The name is enclosed in [= and =]. For example, the name e might be used to represent all of "e", "é", and "è". In this case, [[=e=]] is a regular expression that matches any of e, é, or è.

These features are very valuable in non-English speaking locales. The library functions that gawk uses for regular expression matching currently only recognize POSIX character classes; they do not recognize collating symbols or equivalence classes.

The \y, \B, \<, \>, \s, \S, \w, \W, \`, and \' operators are specific to gawk; they are extensions based on facilities in the GNU regular expression libraries.

The various command line options control how gawk interprets characters in regular expressions.

No options
In the default case, gawk provides all the facilities of POSIX regular expressions and the GNU regular expression operators described above.

--posix
Only POSIX regular expressions are supported, the GNU operators are not special. (E.g., \w matches a literal w).

--traditional
Traditional UNIX awk regular expressions are matched. The GNU operators are not special, and interval expressions are not available. Characters described by octal and hexadecimal escape sequences are treated literally, even if they represent regular expression metacharacters.

--re-interval
Allow interval expressions in regular expressions, even if --traditional has been provided.

Actions
Action statements are enclosed in braces, { and }. Action statements consist of the usual assignment, conditional, and looping statements found in most languages. The operators, control statements, and input/output statements available are patterned after those in C.

Operators
The operators in AWK, in order of decreasing precedence, are:

(…) Grouping

$ Field reference.

++ -- Increment and decrement, both prefix and postfix.

^ Exponentiation (** may also be used, and **= for the assignment operator).

+ - ! Unary plus, unary minus, and logical negation.

* / % Multiplication, division, and modulus.

+ - Addition and subtraction.

space String concatenation.

| |& Piped I/O for getline, print, and printf.

< > <= >= == !=
The regular relational operators.

~ !~ Regular expression match, negated match. NOTE: Do not use a constant regular expression (/foo/) on the left-hand side of a ~ or !~. Only use one on the right-hand side. The expression /foo/ ~ exp has the same meaning as (($0 ~ /foo/) ~ exp). This is usually not what you want.

in Array membership.

&& Logical AND.

|| Logical OR.

?: The C conditional expression. This has the form expr1 ? expr2 : expr3. If expr1 is true, the value of the expression is expr2, otherwise it is expr3. Only one of expr2 and expr3 is evaluated.

= += -= *= /= %= ^=
Assignment. Both absolute assignment (var = value) and operator-assignment (the other forms) are supported.

Control Statements
The control statements are as follows:

if (condition) statement [ else statement ]
while (condition) statement
do statement while (condition)
for (expr1; expr2; expr3) statement
for (var in array) statement
break
continue
delete array[index]
delete array
exit [ expression ]
{ statements }
switch (expression) {
case value|regex : statement

[ default: statement ]
}

I/O Statements
The input/output statements are as follows:

close(file [, how]) Close file, pipe or coprocess. The optional how should only be used when closing one end of a two-way pipe to a coprocess. It must be a string value, either "to" or "from".

getline Set $0 from the next input record; set NF, NR, FNR, RT.

getline <file Set $0 from the next record of file; set NF, RT.

print expr-list >file Print expressions on file. Each expression is separated by the value of OFS. The output record is terminated with the value of ORS.

printf fmt, expr-list Format and print. See The printf Statement, below.

printf fmt, expr-list >file
Format and print on file.

system(cmd-line) Execute the command cmd-line, and return the exit status. (This may not be available on non-POSIX systems.) See GAWK: Effective AWK Programming for the full details on the exit status.

fflush([file]) Flush any buffers associated with the open output file or pipe file. If file is missing or if it is the null string, then flush all open output files and pipes.

Additional output redirections are allowed for print and printf.

print … >> file
Append output to the file.

print … | command
Write on a pipe.

print … |& command
Send data to a coprocess or socket. (See also the subsection Special File Names, below.)

The getline command returns 1 on success, zero on end of file, and -1 on an error. If the errno(3) value indicates that the I/O operation may be retried, and PROCINFO["input", "RETRY"] is set, then -2 is returned instead of -1, and further calls to getline may be attempted. Upon an error, ERRNO is set to a string describing the problem.

NOTE: Failure in opening a two-way socket results in a non-fatal error being returned to the calling function. If using a pipe, coprocess, or socket to getline, or from print or printf within a loop, you must use close() to create new instances of the command or socket. AWK does not automatically close pipes, sockets, or coprocesses when they return EOF.

The printf Statement
The AWK versions of the printf statement and sprintf() function (see below) accept the following conversion specification formats:

%a, %A A floating point number of the form [-]0xh.hhhhp+-dd (C99 hexadecimal floating point format). For %A, uppercase letters are used instead of lowercase ones.

%c A single character. If the argument used for %c is numeric, it is treated as a character and printed. Otherwise, the argument is assumed to be a string, and only the first character of that string is printed.

%d, %i A decimal number (the integer part).

%e, %E A floating point number of the form [-]d.dddddde[+-]dd. The %E format uses E instead of e.

%f, %F A floating point number of the form [-]ddd.dddddd. If the system library supports it, %F is available as well. This is like %f, but uses capital letters for special “not a number” and “infinity” values. If %F is not available, gawk uses %f.

%g, %G Use %e or %f conversion, whichever is shorter, with nonsignificant zeros suppressed. The %G format uses %E instead of %e.

%o An unsigned octal number (also an integer).

%u An unsigned decimal number (again, an integer).

%s A character string.

%x, %X An unsigned hexadecimal number (an integer). The %X format uses ABCDEF instead of abcdef.

%% A single % character; no argument is converted.

Optional, additional parameters may lie between the % and the control letter:

count$ Use the count’th argument at this point in the formatting. This is called a positional specifier and is intended primarily for use in translated versions of format strings, not in the original text of an AWK program. It is a gawk extension.

– The expression should be left-justified within its field.

space For numeric conversions, prefix positive values with a space, and negative values with a minus sign.

+ The plus sign, used before the width modifier (see below), says to always supply a sign for numeric conversions, even if the data to be formatted is positive. The + overrides the space modifier.

# Use an “alternate form” for certain control letters. For %o, supply a leading zero. For %x, and %X, supply a leading 0x or 0X for a nonzero result. For %e, %E, %f and %F, the result always contains a decimal point. For %g, and %G, trailing zeros are not removed from the result.

0 A leading 0 (zero) acts as a flag, indicating that output should be padded with zeroes instead of spaces. This applies only to the numeric output formats. This flag only has an effect when the field width is wider than the value to be printed.

' A single quote character instructs gawk to insert the locale's thousands-separator character into decimal numbers, and to also use the locale's decimal point character with floating point formats. This requires correct locale support in the C library and in the definition of the current locale.

width The field should be padded to this width. The field is normally padded with spaces. With the 0 flag, it is padded with zeroes.

.prec A number that specifies the precision to use when printing. For the %e, %E, %f, and %F formats, this specifies the number of digits you want printed to the right of the decimal point. For the %g and %G formats, it specifies the maximum number of significant digits. For the %d, %i, %o, %u, %x, and %X formats, it specifies the minimum number of digits to print. For the %s format, it specifies the maximum number of characters from the string that should be printed.

The dynamic width and prec capabilities of the ISO C printf() routines are supported. A * in place of either the width or prec specifications causes their values to be taken from the argument list to printf or sprintf(). To use a positional specifier with a dynamic width or precision, supply the count$ after the * in the format string. For example, "%3$*2$.*1$s".

Special File Names
When doing I/O redirection from either print or printf into a file, or via getline from a file, gawk recognizes certain special filenames internally. These filenames allow access to open file descriptors inherited from gawk’s parent process (usually the shell). These file names may also be used on the command line to name data files. The filenames are:

– The standard input.

/dev/stdin The standard input.

/dev/stdout The standard output.

/dev/stderr The standard error output.

/dev/fd/n The file associated with the open file descriptor n.

These are particularly useful for error messages. For example:

print "You blew it!" > "/dev/stderr"

whereas you would otherwise have to use

print "You blew it!" | "cat 1>&2"

The following special filenames may be used with the |& coprocess operator for creating TCP/IP network connections:

/inet/tcp/lport/rhost/rport
/inet4/tcp/lport/rhost/rport
/inet6/tcp/lport/rhost/rport
Files for a TCP/IP connection on local port lport to remote host rhost on remote port rport. Use a port of 0 to have the system pick a port. Use /inet4 to force an IPv4 connection, and /inet6 to force an IPv6 connection. Plain /inet uses the system default (most likely IPv4). Usable only with the |& two-way I/O operator.

/inet/udp/lport/rhost/rport
/inet4/udp/lport/rhost/rport
/inet6/udp/lport/rhost/rport
Similar, but use UDP/IP instead of TCP/IP.

Numeric Functions
AWK has the following built-in arithmetic functions:

atan2(y, x) Return the arctangent of y/x in radians.

cos(expr) Return the cosine of expr, which is in radians.

exp(expr) The exponential function.

int(expr) Truncate to integer.

log(expr) The natural logarithm function.

rand() Return a random number N, between zero and one, such that 0 ≤ N < 1.

sin(expr) Return the sine of expr, which is in radians.

sqrt(expr) Return the square root of expr.

srand([expr]) Use expr as the new seed for the random number generator. If no expr is provided, use the time of day. Return the previous seed for the random number generator.

String Functions
Gawk has the following built-in string functions:

asort(s [, d [, how] ]) Return the number of elements in the source array s. Sort the contents of s using gawk's normal rules for comparing values, and replace the indices of the sorted values of s with sequential integers starting with 1. If the optional destination array d is specified, first duplicate s into d, and then sort d, leaving the indices of the source array s unchanged. The optional string how controls the direction and the comparison mode. Valid values for how are any of the strings valid for PROCINFO["sorted_in"]. It can also be the name of a user-defined comparison function as described in PROCINFO["sorted_in"].

asorti(s [, d [, how] ]) Return the number of elements in the source array s. The behavior is the same as that of asort(), except that the array indices are used for sorting, not the array values. When done, the array is indexed numerically, and the values are those of the original indices. The original values are lost; thus provide a second array if you wish to preserve the original. The purpose of the optional string how is the same as described previously for asort().

gensub(r, s, h [, t]) Search the target string t for matches of the regular expression r. If h is a string beginning with g or G, then replace all matches of r with s. Otherwise, h is a number indicating which match of r to replace. If t is not supplied, use $0 instead. Within the replacement text s, the sequence \n, where n is a digit from 1 to 9, may be used to indicate just the text that matched the n'th parenthesized subexpression. The sequence \0 represents the entire matched text, as does the character &. Unlike sub() and gsub(), the modified string is returned as the result of the function, and the original target string is not changed.

gsub(r, s [, t]) For each substring matching the regular expression r in the string t, substitute the string s, and return the number of substitutions. If t is not supplied, use $0. An & in the replacement text is replaced with the text that was actually matched. Use \& to get a literal &. (This must be typed as "\\&"; see GAWK: Effective AWK Programming for a fuller discussion of the rules for ampersands and backslashes in the replacement text of sub(), gsub(), and gensub().)

index(s, t) Return the index of the string t in the string s, or zero if t is not present. (This implies that character indices start at one.) It is a fatal error to use a regexp constant for t.

length([s]) Return the length of the string s, or the length of $0 if s is not supplied. As a non-standard extension, with an array argument, length() returns the number of elements in the array.

match(s, r [, a]) Return the position in s where the regular expression r occurs, or zero if r is not present, and set the values of RSTART and RLENGTH. Note that the argument order is the same as for the ~ operator: str ~ re. If array a is provided, a is cleared and then elements 1 through n are filled with the portions of s that match the corresponding parenthesized subexpression in r. The zero'th element of a contains the portion of s matched by the entire regular expression r. Subscripts a[n, "start"] and a[n, "length"] provide the starting index in the string and length, respectively, of each matching substring.

patsplit(s, a [, r [, seps] ]) Split the string s into the array a and the separators array seps on the regular expression r, and return the number of fields. Element values are the portions of s that matched r. The value of seps[i] is the possibly null separator that appeared after a[i]. The value of seps[0] is the possibly null leading separator. If r is omitted, FPAT is used instead. The arrays a and seps are cleared first. Splitting behaves identically to field splitting with FPAT, described above.

split(s, a [, r [, seps] ]) Split the string s into the array a and the separators array seps on the regular expression r, and return the number of fields. If r is omitted, FS is used instead. The arrays a and seps are cleared first. seps[i] is the field separator matched by r between a[i] and a[i+1]. If r is a single space, then leading whitespace in s goes into the extra array element seps[0] and trailing whitespace goes into the extra array element seps[n], where n is the return value of split(s, a, r, seps). Splitting behaves identically to field splitting, described above. In particular, if r is a single-character string, that string acts as the separator, even if it happens to be a regular expression metacharacter.

sprintf(fmt, expr-list) Print expr-list according to fmt, and return the resulting string.

strtonum(str) Examine str, and return its numeric value. If str begins with a leading 0, treat it as an octal number. If str begins with a leading 0x or 0X, treat it as a hexadecimal number. Otherwise, assume it is a decimal number.

sub(r, s [, t]) Just like gsub(), but replace only the first matching substring. Return either zero or one.

substr(s, i [, n]) Return the at most n-character substring of s starting at i. If n is omitted, use the rest of s.

tolower(str) Return a copy of the string str, with all the uppercase characters in str translated to their corresponding lowercase counterparts. Non-alphabetic characters are left unchanged.

toupper(str) Return a copy of the string str, with all the lowercase characters in str translated to their corresponding uppercase counterparts. Non-alphabetic characters are left unchanged.

Gawk is multibyte aware. This means that index(), length(), substr() and match() all work in terms of characters, not bytes.

Time Functions
Since one of the primary uses of AWK programs is processing log files that contain time stamp information, gawk provides the following functions for obtaining time stamps and formatting them.

mktime(datespec [, utc-flag]) Turn datespec into a time stamp of the same form as returned by systime(), and return the result. The datespec is a string of the form YYYY MM DD HH MM SS[ DST]. The contents of the string are six or seven numbers representing respectively the full year including century, the month from 1 to 12, the day of the month from 1 to 31, the hour of the day from 0 to 23, the minute from 0 to 59, the second from 0 to 60, and an optional daylight saving flag. The values of these numbers need not be within the ranges specified; for example, an hour of -1 means 1 hour before midnight. The origin-zero Gregorian calendar is assumed, with year 0 preceding year 1 and year -1 preceding year 0. If utc-flag is present and is non-zero or non-null, the time is assumed to be in the UTC time zone; otherwise, the time is assumed to be in the local time zone. If the DST daylight saving flag is positive, the time is assumed to be daylight saving time; if zero, the time is assumed to be standard time; and if negative (the default), mktime() attempts to determine whether daylight saving time is in effect for the specified time. If datespec does not contain enough elements or if the resulting time is out of range, mktime() returns -1.

strftime([format [, timestamp [, utc-flag]]]) Format timestamp according to the specification in format. If utc-flag is present and is non-zero or non-null, the result is in UTC, otherwise the result is in local time. The timestamp should be of the same form as returned by systime(). If timestamp is missing, the current time of day is used. If format is missing, a default format equivalent to the output of date(1) is used. The default format is available in PROCINFO["strftime"]. See the specification for the strftime() function in ISO C for the format conversions that are guaranteed to be available.

systime() Return the current time of day as the number of seconds since the Epoch (1970-01-01 00:00:00 UTC on POSIX systems).

Bit Manipulations Functions
Gawk supplies the following bit manipulation functions. They work by converting double-precision floating point values to uintmax_t integers, doing the operation, and then converting the result back to floating point. NOTE: Passing negative operands to any of these functions causes a fatal error.

The functions are:

and(v1, v2 [, ...]) Return the bitwise AND of the values provided in the argument list. There must be at least two.

compl(val) Return the bitwise complement of val.

lshift(val, count) Return the value of val, shifted left by count bits.

or(v1, v2 [, ...]) Return the bitwise OR of the values provided in the argument list. There must be at least two.

rshift(val, count) Return the value of val, shifted right by count bits.

xor(v1, v2 [, ...]) Return the bitwise XOR of the values provided in the argument list. There must be at least two.

Type Functions
The following functions provide type related information about their arguments.

isarray(x) Return true if x is an array, false otherwise. This function is mainly for use with the elements of multidimensional arrays and with function parameters.

typeof(x) Return a string indicating the type of x. The string will be one of "array", "number", "regexp", "string", "strnum", "unassigned", or "undefined".

Internationalization Functions
The following functions may be used from within your AWK program for translating strings at run-time. For full details, see GAWK: Effective AWK Programming.

bindtextdomain(directory [, domain]) Specify the directory where gawk looks for the .gmo files, in case they will not or cannot be placed in the "standard" locations (e.g., during testing). It returns the directory where domain is "bound." The default domain is the value of TEXTDOMAIN. If directory is the null string (""), then bindtextdomain() returns the current binding for the given domain.

dcgettext(string [, domain [, category]]) Return the translation of string in text domain domain for locale category category. The default value for domain is the current value of TEXTDOMAIN. The default value for category is "LC_MESSAGES". If you supply a value for category, it must be a string equal to one of the known locale categories described in GAWK: Effective AWK Programming. You must also supply a text domain. Use TEXTDOMAIN if you want to use the current domain.

dcngettext(string1, string2, number [, domain [, category]]) Return the plural form used for number of the translation of string1 and string2 in text domain domain for locale category category. The default value for domain is the current value of TEXTDOMAIN. The default value for category is "LC_MESSAGES". If you supply a value for category, it must be a string equal to one of the known locale categories described in GAWK: Effective AWK Programming. You must also supply a text domain. Use TEXTDOMAIN if you want to use the current domain.

USER-DEFINED FUNCTIONS
Functions in AWK are defined as follows:

        function name(parameter list) { statements }

Functions execute when they are called from within expressions in either patterns or actions. Actual parameters supplied in the function call are used to instantiate the formal parameters declared in the function. Arrays are passed by reference, other variables are passed by value.

Since functions were not originally part of the AWK language, the provision for local variables is rather clumsy: They are declared as extra parameters in the parameter list. The convention is to separate local variables from real parameters by extra spaces in the parameter list. For example:

        function f(p, q,     a, b)   # a and b are local
        {
                ...
        }

        /abc/   { ... ; f(1, 2) ; ... }

The left parenthesis in a function call is required to immediately follow the function name, without any intervening whitespace. This avoids a syntactic ambiguity with the concatenation operator. This restriction does not apply to the built-in functions listed above.

Functions may call each other and may be recursive. Function parameters used as local variables are initialized to the null string and the number zero upon function invocation.

Use return expr to return a value from a function. The return value is undefined if no value is provided, or if the function returns by "falling off" the end.

As a gawk extension, functions may be called indirectly. To do this, assign the name of the function to be called, as a string, to a variable. Then use the variable as if it were the name of a function, prefixed with an @ sign, like so:

        function myfunc()
        {
                print "myfunc called"
                ...
        }

        {       ...
                the_func = "myfunc"
                @the_func()     # call through the_func to myfunc
                ...
        }

As of version 4.1.2, this works with user-defined functions, built-in functions, and extension functions.

If --lint has been provided, gawk warns about calls to undefined functions at parse time, instead of at run time. Calling an undefined function at run time is a fatal error.

The word func may be used in place of function, although this is deprecated.

DYNAMICALLY LOADING NEW FUNCTIONS
You can dynamically add new functions written in C or C++ to the running gawk interpreter with the @load statement. The full details are beyond the scope of this manual page; see GAWK: Effective AWK Programming.

SIGNALS
The gawk profiler accepts two signals. SIGUSR1 causes it to dump a profile and function call stack to the profile file, which is either awkprof.out, or whatever file was named with the --profile option. It then continues to run. SIGHUP causes gawk to dump the profile and function call stack and then exit.

INTERNATIONALIZATION
String constants are sequences of characters enclosed in double quotes. In non-English speaking environments, it is possible to mark strings in the AWK program as requiring translation to the local natural language. Such strings are marked in the AWK program with a leading underscore ("_"). For example,

        gawk 'BEGIN { print "hello, world" }'

always prints hello, world. But,

        gawk 'BEGIN { print _"hello, world" }'

might print bonjour, monde in France.

There are several steps involved in producing and running a localizable AWK program.

1. Add a BEGIN action to assign a value to the TEXTDOMAIN variable to set the text domain to a name associated with your program:

        BEGIN { TEXTDOMAIN = "myprog" }

This allows gawk to find the .gmo file associated with your program. Without this step, gawk uses the messages text domain, which likely does not contain translations for your program.

2. Mark all strings that should be translated with leading underscores.

3. If necessary, use the dcgettext() and/or bindtextdomain() functions in your program, as appropriate.

4. Run gawk --gen-pot -f myprog.awk > myprog.pot to generate a .pot file for your program.

5. Provide appropriate translations, and build and install the corresponding .gmo files.

The internationalization features are described in full detail in GAWK: Effective AWK Programming.

POSIX COMPATIBILITY
A primary goal for gawk is compatibility with the POSIX standard, as well as with the latest version of Brian Kernighan's awk. To this end, gawk incorporates the following user visible features which are not described in the AWK book, but are part of Brian Kernighan's version of awk, and are in the POSIX standard.

The book indicates that command line variable assignment happens when awk would otherwise open the argument as a file, which is after the BEGIN rule is executed. However, in earlier implementations, when such an assignment appeared before any file names, the assignment would happen before the BEGIN rule was run. Applications came to depend on this "feature." When awk was changed to match its documentation, the -v option for assigning variables before program execution was added to accommodate applications that depended upon the old behavior. (This feature was agreed upon by both the Bell Laboratories developers and the GNU developers.)

When processing arguments, gawk uses the special option "--" to signal the end of arguments. In compatibility mode, it warns about but otherwise ignores undefined options. In normal operation, such arguments are passed on to the AWK program for it to process.

The AWK book does not define the return value of srand(). The POSIX standard has it return the seed it was using, to allow keeping track of random number sequences. Therefore srand() in gawk also returns its current seed.
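This POSIX-specified return value can be observed directly from the shell (gawk and other modern awks behave the same way here):

```shell
# srand(EXPR) seeds the generator and returns the *previous* seed;
# calling srand() again with no argument reseeds from the time of day
# but returns the seed that was just in use, letting a program record
# a seed in order to replay a random-number sequence later.
awk 'BEGIN {
    srand(42)          # seed the generator with 42
    seed = srand()     # returns 42, the seed that was in effect
    print seed
}'
# prints 42
```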

Other features are: The use of multiple -f options (from MKS awk); the ENVIRON array; the \a and \v escape sequences (done originally in gawk and fed back into the Bell Laboratories version); the tolower() and toupper() built-in functions (from the Bell Laboratories version); and the ISO C conversion specifications in printf (done first in the Bell Laboratories version).
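Two of these, the ENVIRON array and the case-mapping built-ins, combine naturally; GREETING here is an arbitrary example variable:

```shell
# ENVIRON gives read-only access to environment variables;
# toupper() is one of the case-mapping built-ins mentioned above.
GREETING=hello awk 'BEGIN { print toupper(ENVIRON["GREETING"]) }'
# prints HELLO
```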

HISTORICAL FEATURES
There is one feature of historical AWK implementations that gawk supports: It is possible to call the length() built-in function not only with no argument, but even without parentheses! Thus,

a = length # Holy Algol 60, Batman!

is the same as either of

a = length()
a = length($0)

Using this feature is poor practice, and gawk issues a warning about its use if --lint is specified on the command line.
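From the shell, all three forms print the length of each input line (most awk implementations accept the bare form, not just gawk):

```shell
printf 'hello\n' | awk '{ a = length; print a }'      # no parentheses
printf 'hello\n' | awk '{ a = length(); print a }'    # empty parentheses
printf 'hello\n' | awk '{ a = length($0); print a }'  # explicit argument
# each prints 5
```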

GNU EXTENSIONS
Gawk has a too-large number of extensions to POSIX awk. They are described in this section. All the extensions described here can be disabled by invoking gawk with the --traditional or --posix options.

The following features of gawk are not available in POSIX awk.

• No path search is performed for files named via the -f option. Therefore the AWKPATH environment variable is not special.

• There is no facility for doing file inclusion (gawk’s @include mechanism).

• There is no facility for dynamically adding new functions written in C (gawk’s @load mechanism).

• The \x escape sequence.

• The ability to continue lines after ? and :.

• Octal and hexadecimal constants in AWK programs.

• The ARGIND, BINMODE, ERRNO, LINT, PREC, ROUNDMODE, RT and TEXTDOMAIN variables are not special.

• The IGNORECASE variable and its side-effects are not available.

• The FIELDWIDTHS variable and fixed-width field splitting.

• The FPAT variable and field splitting based on field values.

• The FUNCTAB, SYMTAB, and PROCINFO arrays are not available.

• The use of RS as a regular expression.

• The special file names available for I/O redirection are not recognized.

• The |& operator for creating coprocesses.

• The BEGINFILE and ENDFILE special patterns are not available.

• The ability to split out individual characters using the null string as the value of FS, and as the third argument to split().

• An optional fourth argument to split() to receive the separator texts.

• The optional second argument to the close() function.

• The optional third argument to the match() function.

• The ability to use positional specifiers with printf and sprintf().

• The ability to pass an array to length().

• The and(), asort(), asorti(), bindtextdomain(), compl(), dcgettext(), dcngettext(), gensub(), lshift(), mktime(), or(), patsplit(), rshift(), strftime(), strtonum(), systime() and xor() functions.

• Localizable strings.

• Non-fatal I/O.

• Retryable I/O.

The AWK book does not define the return value of the close() function. Gawk’s close() returns the value from fclose(3), or pclose(3), when closing an output file or pipe, respectively. It returns the process’s exit status when closing an input pipe. The return value is -1 if the named file, pipe or coprocess was not opened with a redirection.
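A short illustration of the input-pipe case; the exact nonzero value depends on the awk in use (gawk returns the exit status itself, other implementations may return the raw wait status):

```shell
# Closing an input pipe returns the command's exit status.
# "false" exits nonzero, so close() is nonzero here.
awk 'BEGIN {
    cmd = "false"
    cmd | getline        # run the pipe (it produces no output)
    print close(cmd)     # nonzero: the process failed
}'
```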

When gawk is invoked with the --traditional option, if the fs argument to the -F option is "t", then FS is set to the tab character. Note that typing gawk -F\t … simply causes the shell to quote the "t," and does not pass "\t" to the -F option. Since this is a rather ugly special case, it is not the default behavior. This behavior also does not occur if --posix has been specified. To really get a tab character as the field separator, it is best to use single quotes: gawk -F'\t' ….
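For example, the single-quoted form delivers the two characters backslash-t to awk, which awk itself turns into a tab:

```shell
# The shell passes \t through untouched inside single quotes;
# awk interprets it as a tab field separator.
printf 'one\ttwo\tthree\n' | awk -F'\t' '{ print $2 }'
# prints two
```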

ENVIRONMENT VARIABLES
The AWKPATH environment variable can be used to provide a list of directories that gawk searches when looking for files named via the -f, --file, -i and --include options, and the @include directive. If the initial search fails, the path is searched again after appending .awk to the filename.
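A minimal sketch of the search, using a throwaway directory and a hypothetical helper file named hello.awk; the gawk invocation is guarded since the tool must be installed:

```shell
# Create a library directory and a small program in it.
dir=$(mktemp -d)
cat > "$dir/hello.awk" <<'EOF'
BEGIN { print "found via AWKPATH" }
EOF

# gawk searches each AWKPATH directory for hello.awk
# (and, failing that, for hello.awk.awk).
if command -v gawk >/dev/null 2>&1; then
    AWKPATH="$dir" gawk -f hello.awk
fi
```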

The AWKLIBPATH environment variable can be used to provide a list of directories that gawk searches when looking for files named via the -l and --load options.

The GAWK_READ_TIMEOUT environment variable can be used to specify a timeout in milliseconds for reading input from a terminal, pipe or two-way communication including sockets.

For connections to a remote host via a socket, GAWK_SOCK_RETRIES controls the number of retries, and GAWK_MSEC_SLEEP the interval between retries. The interval is in milliseconds. On systems that do not support usleep(3), the value is rounded up to an integral number of seconds.

If POSIXLY_CORRECT exists in the environment, then gawk behaves exactly as if --posix had been specified on the command line. If --lint has been specified, gawk issues a warning message to this effect.

EXIT STATUS
If the exit statement is used with a value, then gawk exits with the numeric value given to it.

Otherwise, if there were no problems during execution, gawk exits with the value of the C constant EXIT_SUCCESS. This is usually zero.

If an error occurs, gawk exits with the value of the C constant EXIT_FAILURE. This is usually one.

If gawk exits because of a fatal error, the exit status is 2. On non-POSIX systems, this value may be mapped to EXIT_FAILURE.
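The first two rules can be observed directly from the shell (using exit 3 as an arbitrary nonzero value):

```shell
# An explicit exit value is passed straight through to the shell.
awk 'BEGIN { exit 3 }' || echo "awk exited with status $?"
# prints: awk exited with status 3

# A clean run exits with EXIT_SUCCESS (normally 0).
awk 'BEGIN { print "ok" }' >/dev/null && echo "awk exited with status 0"
# prints: awk exited with status 0
```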

VERSION INFORMATION
This man page documents gawk, version 5.1.

AUTHORS
The original version of UNIX awk was designed and implemented by Alfred Aho, Peter Weinberger, and Brian Kernighan of Bell Laboratories. Brian Kernighan continues to maintain and enhance it.

Paul Rubin and Jay Fenlason, of the Free Software Foundation, wrote gawk, to be compatible with the original version of awk distributed in Seventh Edition UNIX. John Woods contributed a number of bug fixes. David Trueman, with contributions from Arnold Robbins, made gawk compatible with the new version of UNIX awk. Arnold Robbins is the current maintainer.

See GAWK: Effective AWK Programming for a full list of the contributors to gawk and its documentation.

See the README file in the gawk distribution for up-to-date information about maintainers and which ports are currently supported.

BUG REPORTS
If you find a bug in gawk, please send electronic mail to bug-gawk@gnu.org. Please include your operating system and its revision, the version of gawk (from gawk --version), which C compiler you used to compile it, and a test program and data that are as small as possible for reproducing the problem.

Before sending a bug report, please do the following things. First, verify that you have the latest version of gawk. Many bugs (usually subtle ones) are fixed at each release, and if yours is out of date, the problem may already have been solved. Second, please see whether setting the environment variable LC_ALL to "C" causes things to behave as you expect. If so, it's a locale issue, and may or may not really be a bug. Finally, please read this man page and the reference manual carefully to be sure that what you think is a bug really is, instead of just a quirk in the language.

Whatever you do, do NOT post a bug report in comp.lang.awk. While the gawk developers occasionally read this newsgroup, posting bug reports there is an unreliable way to report bugs. Similarly, do NOT use a web forum (such as Stack Overflow) for reporting bugs. Instead, please use the electronic mail address given above. Really.

If you’re using a GNU/Linux or BSD-based system, you may wish to submit a bug report to the vendor of your distribution. That’s fine, but please send a copy to the official email address as well, since there’s no guarantee that the bug report will be forwarded to the gawk maintainer.

BUGS
The -F option is not necessary given the command line variable assignment feature; it remains only for backwards compatibility.

SEE ALSO
egrep(1), sed(1), getpid(2), getppid(2), getpgrp(2), getuid(2), geteuid(2), getgid(2), getegid(2), getgroups(2), printf(3), strftime(3), usleep(3)

The AWK Programming Language, Alfred V. Aho, Brian W. Kernighan, Peter J. Weinberger, Addison-Wesley, 1988. ISBN 0-201-07981-X.

GAWK: Effective AWK Programming, Edition 5.1, shipped with the gawk source. The current version of this document is available online at https://www.gnu.org/software/gawk/manual.

The GNU gettext documentation, available online at https://www.gnu.org/software/gettext.

EXAMPLES
Print and sort the login names of all users:

BEGIN { FS = ":" }
{ print $1 | "sort" }

Count lines in a file:

{ nlines++ }
END { print nlines }
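For instance, fed three lines from the shell, the line-count program above prints 3:

```shell
printf 'one\ntwo\nthree\n' | awk '{ nlines++ } END { print nlines }'
# prints 3
```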

Precede each line by its number in the file:

{ print FNR, $0 }

Concatenate and line number (a variation on a theme):

{ print NR, $0 }

Run an external command for particular lines of data:

tail -f access_log |
awk '/myhome.html/ { system("nmap " $1 ">> logdir/myhome.html") }'

ACKNOWLEDGEMENTS
Brian Kernighan provided valuable assistance during testing and debugging. We thank him.

COPYING PERMISSIONS
Copyright © 1989, 1991, 1992, 1993, 1994, 1995, 1996, 1997, 1998, 1999, 2001, 2002, 2003, 2004, 2005, 2007, 2009, 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020, Free Software Foundation, Inc.

Permission is granted to make and distribute verbatim copies of this manual page provided the copyright notice and this permission notice are preserved on all copies.

Permission is granted to copy and distribute modified versions of this manual page under the conditions for verbatim copying, provided that the entire resulting derived work is distributed under the terms of a permission notice identical to this one.

Permission is granted to copy and distribute translations of this manual page into another language, under the above conditions for modified versions, except that this permission notice may be stated in a translation approved by the Foundation.

Free Software Foundation


manpage date

DATE(1) User Commands DATE(1)

NAME

date – print or set the system date and time

SYNOPSIS

       date [OPTION]... [+FORMAT]
       date [-u|--utc|--universal] [MMDDhhmm[[CC]YY][.ss]]

DESCRIPTION

Display the current time in the given FORMAT, or set the system date.

Mandatory arguments to long options are mandatory for short options too.

       -d, --date=STRING
              display time described by STRING, not 'now'

       --debug
              annotate the parsed date, and warn about questionable usage to stderr

       -f, --file=DATEFILE
              like --date; once for each line of DATEFILE

       -I[FMT], --iso-8601[=FMT]
              output  date/time in ISO 8601 format.  FMT='date' for date only (the default), 'hours', 'minutes', 'seconds', or 'ns' for date and time to
              the indicated precision.  Example: 2006-08-14T02:34:56-06:00

       -R, --rfc-email
              output date and time in RFC 5322 format.  Example: Mon, 14 Aug 2006 02:34:56 -0600

       --rfc-3339=FMT
              output date/time in RFC 3339 format.  FMT='date', 'seconds', or 'ns' for date and time to the indicated  precision.   Example:  2006-08-14
              02:34:56-06:00

       -r, --reference=FILE
              display the last modification time of FILE

       -s, --set=STRING
              set time described by STRING

       -u, --utc, --universal
              print or set Coordinated Universal Time (UTC)

       --help display this help and exit

       --version
              output version information and exit

       FORMAT controls the output.  Interpreted sequences are:

       %%     a literal %

       %a     locale's abbreviated weekday name (e.g., Sun)

       %A     locale's full weekday name (e.g., Sunday)

       %b     locale's abbreviated month name (e.g., Jan)

       %B     locale's full month name (e.g., January)

       %c     locale's date and time (e.g., Thu Mar  3 23:05:25 2005)

       %C     century; like %Y, except omit last two digits (e.g., 20)

       %d     day of month (e.g., 01)

       %D     date; same as %m/%d/%y

       %e     day of month, space padded; same as %_d

       %F     full date; like %+4Y-%m-%d

       %g     last two digits of year of ISO week number (see %G)

       %G     year of ISO week number (see %V); normally useful only with %V

       %h     same as %b

       %H     hour (00..23)

       %I     hour (01..12)

       %j     day of year (001..366)

       %k     hour, space padded ( 0..23); same as %_H

       %l     hour, space padded ( 1..12); same as %_I

       %m     month (01..12)

       %M     minute (00..59)

       %n     a newline

       %N     nanoseconds (000000000..999999999)

       %p     locale's equivalent of either AM or PM; blank if not known

       %P     like %p, but lower case

       %q     quarter of year (1..4)

       %r     locale's 12-hour clock time (e.g., 11:11:04 PM)

       %R     24-hour hour and minute; same as %H:%M

       %s     seconds since 1970-01-01 00:00:00 UTC

       %S     second (00..60)

       %t     a tab

       %T     time; same as %H:%M:%S

       %u     day of week (1..7); 1 is Monday

       %U     week number of year, with Sunday as first day of week (00..53)

       %V     ISO week number, with Monday as first day of week (01..53)

       %w     day of week (0..6); 0 is Sunday

       %W     week number of year, with Monday as first day of week (00..53)

       %x     locale's date representation (e.g., 12/31/99)

       %X     locale's time representation (e.g., 23:13:48)

       %y     last two digits of year (00..99)

       %Y     year

       %z     +hhmm numeric time zone (e.g., -0400)

       %:z    +hh:mm numeric time zone (e.g., -04:00)

       %::z   +hh:mm:ss numeric time zone (e.g., -04:00:00)

       %:::z  numeric time zone with : to necessary precision (e.g., -04, +05:30)

       %Z     alphabetic time zone abbreviation (e.g., EDT)

       By default, date pads numeric fields with zeroes.  The following optional flags may follow '%':

       -      (hyphen) do not pad the field

       _      (underscore) pad with spaces

       0      (zero) pad with zeros

       +      pad with zeros, and put '+' before future years with >4 digits

       ^      use upper case if possible

       #      use opposite case if possible

       After  any  flags  comes an optional field width, as a decimal number; then an optional modifier, which is either E to use the locale's alternate
       representations if available, or O to use the locale's alternate numeric symbols if available.
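A few of these sequences and padding flags in combination (GNU date syntax; -u and -d pin the output to a fixed instant so the results are reproducible):

```shell
# %F and %T are the composite date and time; %s echoes the epoch
# value back; %q is the quarter of the year.
date -u -d '@1234567890' '+%F %T epoch=%s quarter=%q'
# prints: 2009-02-13 23:31:30 epoch=1234567890 quarter=1

# Padding flags on a single-digit day of month:
# %d zero-pads, %-d suppresses padding, %_d pads with a space.
date -u -d '@0' '+%d,%-d,%_d'
# prints: 01,1, 1
```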

EXAMPLES

Convert seconds since the epoch (1970-01-01 UTC) to a date

$ date --date='@2147483647'

Show the time on the west coast of the US (use tzselect(1) to find TZ)

$ TZ='America/Los_Angeles' date

Show the local time for 9AM next Friday on the west coast of the US

$ date --date='TZ="America/Los_Angeles" 09:00 next Fri'

DATE STRING

       The --date=STRING is a mostly free format human readable date string such as "Sun, 29 Feb 2004 16:21:42 -0800" or "2004-02-29 16:21:42"  or  even
       "next  Thursday".   A  date string may contain items indicating calendar date, time of day, time zone, day of week, relative time, relative date,
       and numbers.  An empty string indicates the beginning of the day.  The date string format is more complex than is easily documented here  but  is
       fully described in the info documentation.
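Calendar dates and @-epoch values both parse as date strings (GNU date; LC_ALL=C and -u keep the output locale- and zone-independent):

```shell
# A calendar date string: 29 Feb 2004 fell on a Sunday.
LC_ALL=C date -u -d '2004-02-29 16:21:42' '+%a %T'
# prints: Sun 16:21:42

# An @-prefixed count of seconds since the epoch (the 32-bit limit).
LC_ALL=C date -u -d '@2147483647' '+%F'
# prints: 2038-01-19
```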

AUTHOR

Written by David MacKenzie.

REPORTING BUGS

       GNU coreutils online help: <https://www.gnu.org/software/coreutils/>
       Report any translation bugs to <https://translationproject.org/team/>

COPYRIGHT

       Copyright © 2020 Free Software Foundation, Inc.  License GPLv3+: GNU GPL version 3 or later <https://gnu.org/licenses/gpl.html>.
       This is free software: you are free to change and redistribute it.  There is NO WARRANTY, to the extent permitted by law.

SEE ALSO

       Full documentation <https://www.gnu.org/software/coreutils/date>
       or available locally via: info '(coreutils) date invocation'

GNU coreutils 8.32                                                   September 2020                                                              DATE(1)