Robots Exclusion Standard

The Robots Exclusion Standard is a method by which webmasters can specify which parts of their site they don't want robots to crawl, index, or retrieve. This is done with a file named robots.txt in the root directory of the site. Well-behaved robots check this file before taking any action on a site (which is why web access logs show attempted accesses for this filename even when no such file exists). Less-well-behaved robots, such as spambots and malware, don't heed the file (it is a voluntary standard with no means of enforcement), so its use is limited to giving instructions to cooperative robots such as Googlebot.

Historically, a robots.txt file also caused the effective "retroactive" removal of a site from the Internet Archive's Wayback Machine: the Machine would refuse to display pages (even ones captured in past scans) from domains or directories currently excluded by robots.txt. The Internet Archive has since relaxed this policy.

The file format is plain text, presumably ASCII (the standard does not specify a character encoding). The standard specifically allows any of the common line break conventions (CR, LF, or CR+LF). Everything from a # character to the end of a line is considered a comment, along with any whitespace immediately preceding the #. Lines containing nothing but a comment are ignored, so they don't count as blank lines for the purpose of ending a section of the file.
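The comment rule above can be sketched in a few lines of Python; this is an illustrative helper, not code from any actual parser:

```python
def strip_comment(line: str) -> str:
    """Drop everything from '#' to end of line, plus any
    whitespace immediately preceding the '#'."""
    idx = line.find("#")
    if idx != -1:
        line = line[:idx]
    return line.rstrip()

# A line that is nothing but a comment reduces to the empty string,
# so a parser can skip it without treating it as a blank line.
```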

The only standard commands in the file are "User-agent" and "Disallow" (each command starts at the beginning of a line and is followed by a colon and then its parameter value). Several nonstandard commands are also sometimes used.

One such extended command is "Sitemap", which can be used to specify the location of a sitemap formatted in accordance with the sitemap standard. The value of this parameter is the URL of the sitemap, which may be on the same domain as the site or a different one.
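For example, a robots.txt might point to its sitemap like this (example.com is a reserved illustration domain):

```
Sitemap: https://www.example.com/sitemap.xml
```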

To keep robots out of your cgi-bin directory you can use:

User-agent: *
Disallow: /cgi-bin/

The asterisk means the record applies to all user agents. It's also possible to identify specific robots by their user-agent strings and exclude them from particular areas without affecting others. A User-agent line applies to all following Disallow commands until a blank line ends the record.
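Python's standard library ships a parser for these rules, urllib.robotparser, which makes the behavior easy to check. A minimal sketch (the bot name and URLs are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

ROBOTS_TXT = """\
User-agent: *
Disallow: /cgi-bin/
"""

rp = RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The '*' record applies to any user agent, including our hypothetical one.
print(rp.can_fetch("ExampleBot", "https://example.com/cgi-bin/search"))  # False
print(rp.can_fetch("ExampleBot", "https://example.com/index.html"))      # True
```

In normal use you would call rp.set_url(...) and rp.read() to fetch a live robots.txt instead of parsing a string.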

Related effects can be achieved within HTML pages using the robots meta tag, whose recognized values include "noindex" and "nofollow".
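For instance, a page can ask robots both not to index it and not to follow its links with a single tag in its head element:

```html
<meta name="robots" content="noindex, nofollow">
```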

Standards

 * "Official" site (actually a non-binding consensus, not a formal standard)

Sample Files

 * Google's robots.txt
 * And they also have a killer-robots.txt!
 * IBM's robots.txt

Utilities

 * Google's robots.txt parser, now open-source
 * Perl robots.txt parser

Specific search engine / robot policies

 * Google / Googlebot
 * Yahoo / Slurp
 * Bing / Bingbot

Other links and references

 * Robots Exclusion Standard (Wikipedia)
 * Robots.txt generator/tutorial
 * ROBOTS.TXT DISALLOW: 20 Years of Mistakes To Avoid
 * Robots.txt article from the Archive Team website
 * Google's robots.txt Parser is Now Open Source