linkchecker [options] [file-or-url]…
- recursive and multithreaded checking
- output in colored or normal text, HTML, SQL, CSV, XML or a sitemap
  graph in different formats
- support for HTTP/1.1, HTTPS, FTP, mailto:, news:, nntp:, Telnet and
  local file links
- restriction of link checking with URL filters
- username/password authorization for HTTP, FTP and Telnet
- support for robots.txt exclusion protocol
- support for Cookies
- support for HTML5
- a command line and web interface
The most common use checks the given domain recursively:
$ linkchecker http://www.example.com/
Beware that this checks the whole site, which can have thousands of
URLs. Use the -r option to restrict the recursion depth.
Don’t check URLs with /secret in their name. All other links are
checked as usual:
$ linkchecker --ignore-url=/secret mysite.example.com
Checking a local HTML file on Unix:
$ linkchecker ../bla.html
Checking a local HTML file on Windows:
C:\> linkchecker c:\temp\test.html
You can skip the http:// url part if the domain starts with www.:
$ linkchecker www.example.com
You can skip the ftp:// url part if the domain starts with ftp.:
$ linkchecker -r0 ftp.example.com
Generate a sitemap graph and convert it with the graphviz dot utility:
$ linkchecker -odot -v www.example.com | dot -Tps > sitemap.ps
-f FILENAME, --config=FILENAME
Use FILENAME as configuration file. By default LinkChecker uses
$XDG_CONFIG_HOME/linkchecker/linkcheckerrc.
-h, --help
Help me! Print usage information for this program.
-t NUMBER, --threads=NUMBER
Generate no more than the given number of threads. Default number of
threads is 10. To disable threading specify a non-positive number.
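For example, to check with at most five threads:
$ linkchecker -t5 http://www.example.com/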
-V, --version
Print version and exit.
--list-plugins
Print available check plugins and exit.
URL checking results
-F TYPE[/ENCODING][/FILENAME], --file-output=TYPE[/ENCODING][/FILENAME]
Output to a file linkchecker-out.TYPE,
$XDG_DATA_HOME/linkchecker/failures for the failures output type, or
FILENAME if specified. The ENCODING specifies the output
encoding, the default is that of your locale. Valid encodings are
listed at https://docs.python.org/library/codecs.html#standard-encodings.
The FILENAME and ENCODING parts of the none output type will
be ignored; otherwise, if the file already exists, it will be overwritten.
You can specify this option more than once. Valid file output TYPEs
are text, html, sql, csv, gml, dot, xml,
sitemap, none or failures. Default is no file output.
The various output types are documented below. Note that you can
suppress all console output with the option -o none.
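For instance, to write an HTML report to linkchecker-out.html while
suppressing console output:
$ linkchecker -Fhtml -o none http://www.example.com/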
--no-warnings
Don’t log warnings. Default is to log warnings.
-o TYPE[/ENCODING], --output=TYPE[/ENCODING]
Specify the console output type as text, html, sql, csv,
gml, dot, xml, sitemap, none or failures.
Default type is text. The various output types are documented below.
The ENCODING specifies the output encoding, the default is that of
your locale. Valid encodings are listed at
https://docs.python.org/library/codecs.html#standard-encodings.
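For instance, to print CSV output encoded as UTF-8:
$ linkchecker -ocsv/utf-8 http://www.example.com/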
-v, --verbose
Log all checked URLs. Default is to log only errors and warnings.
--no-status
Do not print URL check status messages.
-D STRING, --debug=STRING
Print debugging output for the given logger.
Available debug loggers are cmdline, checking, cache, plugin and all.
all is an alias for all available loggers.
This option can be given multiple times to debug with more than one logger.
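For example, to debug both the checking and cache loggers:
$ linkchecker -Dchecking -Dcache http://www.example.com/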
-q, --quiet
Quiet operation, an alias for -o none that also hides
application information messages. This is only useful with
-F, else no results will be output.
--cookiefile=FILENAME
Use initial cookie data read from a file. The cookie data format is
explained in linkcheckerrc(5).
--check-extern
Check external URLs.
--ignore-url=REGEX
URLs matching the given regular expression will only be syntax checked.
This option can be given multiple times.
See section REGULAR EXPRESSIONS for more info.
-N STRING, --nntp-server=STRING
Specify an NNTP server for news: links. Default is the
environment variable NNTP_SERVER. If no host is given, only the
syntax of the link is checked.
--no-follow-url=REGEX
Check but do not recurse into URLs matching the given regular
expression.
This option can be given multiple times.
See section REGULAR EXPRESSIONS for more info.
--no-robots
Check URLs regardless of any robots.txt files.
-p, --password
Read a password from console and use it for HTTP and FTP
authorization. For FTP the default password is anonymous@. For
HTTP there is no default password. See also -u.
-r NUMBER, --recursion-level=NUMBER
Check recursively all links up to given depth. A negative depth will
enable infinite recursion. Default depth is infinite.
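For example, to check only the start URL and the pages it links to
directly:
$ linkchecker -r1 http://www.example.com/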
--timeout=NUMBER
Set the timeout for connection attempts in seconds. The default
timeout is 60 seconds.
-u STRING, --user=STRING
Try the given username for HTTP and FTP authorization. For FTP the
default username is anonymous. For HTTP there is no default
username. See also -p.
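For example, to authenticate with a username and be prompted for the
password (myuser is a placeholder):
$ linkchecker -u myuser -p http://www.example.com/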
--user-agent=STRING
Specify the User-Agent string to send to the HTTP server, for
example “Mozilla/4.0”. The default is “LinkChecker/X.Y” where X.Y is
the current version of LinkChecker.
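For instance, to present a different User-Agent to the server:
$ linkchecker --user-agent="Mozilla/4.0" http://www.example.com/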
--stdin
Read from stdin a list of white-space separated URLs to check.
FILE-OR-URL
The location to start checking with.
A file can be a simple list of URLs, one per line, if the first line is
“# LinkChecker URL list”.
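For example, a file urls.txt (the name is arbitrary) containing
# LinkChecker URL list
http://www.example.com/
ftp://ftp.example.com/
can then be checked with:
$ linkchecker urls.txt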
Configuration files can specify all options above. They can also specify
some options that cannot be set on the command line. See
linkcheckerrc(5) for more info.
Note that by default only errors and warnings are logged. You should use
--verbose to get the complete URL list, especially when
outputting a sitemap graph format.
text
Standard text logger, logging URLs in keyword: argument fashion.
html
Log URLs in keyword: argument fashion, formatted as HTML.
Additionally has links to the referenced pages. Invalid URLs have
HTML and CSS syntax check links appended.
csv
Log check result in CSV format with one URL per line.
gml
Log parent-child relations between linked URLs as a GML sitemap graph.
dot
Log parent-child relations between linked URLs as a DOT sitemap graph.
gxml
Log check result as a GraphXML sitemap graph.
xml
Log check result as machine-readable XML.
sitemap
Log check result as an XML sitemap whose protocol is documented at
https://www.sitemaps.org/protocol.html.
sql
Log check result as SQL script with INSERT commands. An example
script to create the initial SQL table is included as create.sql.
failures
Suitable for cron jobs. Logs the check result into a file
$XDG_DATA_HOME/linkchecker/failures which only contains entries with
invalid URLs and the number of times they have failed.
none
Logs nothing. Suitable for debugging or checking the exit code.
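For example, combining the failures output with quiet operation is
suitable for unattended runs such as cron jobs:
$ linkchecker --quiet -Ffailures http://www.example.com/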
LinkChecker accepts Python regular expressions. See
https://docs.python.org/howto/regex.html for an introduction.
An addition is that a leading exclamation mark negates the regular
expression.
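For example, negating a pattern with --no-follow-url restricts
recursion to URLs that do match it, here a single site:
$ linkchecker --no-follow-url='!^http://www\.example\.com/' http://www.example.com/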
To use a proxy on Unix or Windows set the http_proxy or
https_proxy environment variables to the proxy URL. The URL should be
of the form http://[user:pass@]host[:port].
LinkChecker also detects manual proxy settings of Internet Explorer
under Windows systems. On a Mac use
the Internet Config to select a proxy.
You can also set a comma-separated domain list in the no_proxy
environment variable to ignore any proxy settings for these domains.
The curl_ca_bundle environment variable can be used to identify an
alternative certificate bundle to be used with an HTTPS proxy.
Setting an HTTP proxy on Unix for example looks like this:
$ export http_proxy="http://proxy.example.com:8080"
Proxy authentication is also supported:
$ export http_proxy="http://user1:firstname.lastname@example.org:8081"
Setting a proxy on the Windows command prompt:
C:\> set http_proxy=http://proxy.example.com:8080
All URLs have to pass a preliminary syntax test. Minor quoting mistakes
will issue a warning, all other invalid syntax issues are errors. After
the syntax check passes, the URL is queued for connection checking. All
connection check types are described below.
- HTTP links (http:, https:)
After connecting to the given HTTP server the given path or query is
requested. All redirections are followed, and if user/password is
given it will be used as authorization when necessary. All final
HTTP status codes other than 2xx are errors.
HTML page contents are checked for recursion.
- Local files (file:)
A regular, readable file that can be opened is valid. A readable
directory is also valid. All other files, for example device files,
unreadable or non-existing files are errors.
HTML or other parseable file contents are checked for recursion.
- Mail links (mailto:)
A mailto: link eventually resolves to a list of email addresses.
If one address fails, the whole list will fail. For each mail
address we check the following things:
- Check the address syntax, both the parts before and after the
  @ sign.
- Look up the MX DNS records. If we found no MX record, print an
  error.
- Check if one of the mail hosts accepts an SMTP connection. Check
  hosts with higher priority first. If no host accepts SMTP, we
  print a warning.
- Try to verify the address with the VRFY command. If we got an
  answer, print the verified address as an info.
- FTP links (ftp:)
For FTP links we do:
- connect to the specified host
- try to login with the given user and password. The default user
  is anonymous, the default password is anonymous@.
- try to change to the given directory
- list the file with the NLST command
- Telnet links (telnet:)
We try to connect and, if user/password are given, log in to the
given host.
- NNTP links (news:, snews:, nntp:)
We try to connect to the given NNTP server. If a news group or
article is specified, try to request it from the server.
An unsupported link will only print a warning. No further checking
will be made.
The complete list of recognized, but unsupported links can be found
in the LinkChecker source code.
Sitemaps are parsed for links to check and can be detected either from a
sitemap entry in a robots.txt, or when passed as a FILE-OR-URL
argument, in which case detection requires the urlset/sitemapindex tag to be
within the first 70 characters of the sitemap.
Compressed sitemap files are not supported.
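For example, a sitemap URL can be given directly as the argument to check:
$ linkchecker http://www.example.com/sitemap.xml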
There are two plugin types: connection and content plugins. Connection
plugins are run after a successful connection to the URL host. Content
plugins are run if the URL type has content (mailto: URLs have no
content for example) and if the check is not forbidden (i.e. by HTTP
robots.txt).
Use the option
--list-plugins for a list of plugins and their
documentation. All plugins are enabled via the linkcheckerrc(5)
configuration file.
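For example, a plugin is enabled by adding a section with its name to
the configuration file; assuming a plugin named AnchorCheck is
available (see --list-plugins), the entry would be:
[AnchorCheck]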
Before descending recursively into a URL, it has to fulfill several
conditions. They are checked in this order:
A URL must be valid.
A URL must be parseable. This currently includes HTML files, Opera
bookmarks files, and directories. If a file type cannot be determined
(for example it does not have a common HTML file extension, and the
content does not look like HTML), it is assumed to be non-parseable.
The URL content must be retrievable. This is usually the case except
for example mailto: or unknown URL types.
The maximum recursion level must not be exceeded. It is configured
with the --recursion-level option and is unlimited per default.
It must not match the ignored URL list. This is controlled with the
--ignore-url option.
The Robots Exclusion Protocol must allow links in the URL to be
followed recursively. This is checked by searching for a “nofollow”
directive in the HTML header data.
Note that the directory recursion reads all files in that directory, not
just a subset like index.htm.
URLs on the commandline starting with ftp. are treated like
ftp://ftp., URLs starting with www. are treated like
http://www.. You can also give local files as arguments.
If you have your system configured to automatically establish a
connection to the internet (e.g. with diald), it will connect when
checking links not pointing to your local host. Use the --ignore-url
option to prevent this.
If your platform does not support threading, LinkChecker disables it
automatically.
You can supply multiple user/password pairs in a configuration file.
When checking news: links the given NNTP host doesn’t need to be the
same as the host of the user browsing your pages.
NNTP_SERVER - specifies default NNTP server
http_proxy - specifies default HTTP proxy server
https_proxy - specifies default HTTPS proxy server
curl_ca_bundle - an alternative certificate bundle to be used with an HTTPS proxy
no_proxy - comma-separated list of domains to not contact over a proxy server
LC_MESSAGES, LANG, LANGUAGE - specify output language
The return value is 2 when a program error occurred.
The return value is 1 when invalid links were found or link warnings
were found and warnings are enabled.
Else the return value is zero.
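Since the exit code distinguishes these cases, it can drive shell
scripts, for instance:
$ linkchecker --quiet http://www.example.com/ || echo "link problems found"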
LinkChecker consumes memory for each queued URL to check. With thousands
of queued URLs the amount of consumed memory can become quite large.
This might slow down the program or even the whole system.
$XDG_CONFIG_HOME/linkchecker/linkcheckerrc - default configuration file
$XDG_DATA_HOME/linkchecker/failures - default failures logger output filename
linkchecker-out.TYPE - default logger file output name