
A Guide to the Robots.txt Exclusion Protocol

Attack of the spider bots! How to use robots.txt to limit access to search engine crawlers, also known as spiders


555 words, estimated reading time 3 minutes.

Learn how to use robots.txt to control how search engine crawlers, or spiders, access and crawl your site on the web.
 
Search Engine Optimisation Series
  1. SEO - Search Engine Optimization
  2. A Guide to the Robots.txt Exclusion Protocol
  3. What are XML sitemaps?
  4. Using Google Webmaster Tools
  5. Getting Started with Google Analytics
  6. Getting Started Earning Money with Adsense
  7. Google Trends Keyword Comparison Tool
  8. 8 Excellent (and Free!) Search Engine Optimization Websites

A Web crawler is sometimes also called a spider or a bot. It is an automated Internet robot which systematically browses the World Wide Web, typically for the purpose of Web indexing, although crawlers can be used to gather data of any kind. Sometimes these bots can be a little overzealous in their crawling and generate thousands of hits per hour, which can overwhelm a server in much the same way as a denial of service attack.

The robots.txt file is a standard for giving instructions about a website to web robots. This standard is called The Robots Exclusion Protocol (http://www.robotstxt.org/orig.html).

When a robot wants to visit a site to crawl, it first checks the robots.txt to see if it is allowed to crawl the site and whether there are any areas it should ignore.

The robots.txt is a plain text file placed in the root of the website, for example http://www.example.com/robots.txt.

The most basic robots.txt contents look like this:

User-agent: *
Disallow:

This simple content creates a rule which allows all web crawlers access to the entire site.

User-agent: * indicates that the following rule applies to all spider bots.
Disallow: The empty disallow field indicates nothing is blocked and every link can be crawled.

The opposite would be to block access for all web crawlers and prevent the site from being indexed.

User-agent: *
Disallow: /

User-agent: * indicates that the following rule applies to all spider bots.
Disallow: / indicates that the homepage, and everything under the homepage, is disallowed, or forbidden.

You can specify which URLs are blocked in the Disallow field.

User-agent: *
Disallow: /wp-admin/
Disallow: /private/
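
As an aside, if a well-behaved crawler is generating too many hits per hour (as mentioned earlier), some crawlers, such as Bingbot, honour a non-standard Crawl-delay directive which asks them to wait a number of seconds between requests. Googlebot ignores it, so treat this as a polite request rather than a guarantee. A minimal sketch:

# Ask Bing's crawler to wait 10 seconds between requests
# (non-standard directive; Googlebot and some other bots ignore it)
User-agent: Bingbot
Crawl-delay: 10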

Care must be taken when disallowing resources, as malicious users may use the robots.txt to locate hidden areas and target them for attack. For example, the above rules would indicate to a hacker that the site is running WordPress and that there is a URL containing private resources. They can then tailor an attack to WordPress or investigate those private resources.

Robots.txt is used to prevent search engines from listing a web page; it should not be used as a security measure.

You can also limit access on a per-bot basis by specifying them in the User-agent field.

Here are a few of the most popular web crawler user agents to use:

  • Googlebot - Google's own web crawler
  • Mediapartners-Google - Google AdSense/AdWords
  • Bingbot - Microsoft Bing
  • MSNBot - Microsoft's old MSN bot
  • Slurp - Yahoo! Search
  • ia_archiver - Internet Archive
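
For example, to block just the Internet Archive's ia_archiver while leaving the site open to everyone else, a rule set along these lines should work:

# Block the Internet Archive crawler only
User-agent: ia_archiver
Disallow: /

# Everyone else is allowed
User-agent: *
Disallow: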

You can give access to certain bots, whilst blocking all others:

User-agent: Googlebot
Disallow:
 
User-agent: Bingbot
Disallow:
 
User-agent: Slurp
Disallow:
 
# Everyone Else (NOT allowed)
User-agent: *
Disallow: / 

In theory, this should block all "bad bots", i.e. those which scrape content and hog bandwidth, but in practice bad bots rarely honour robots.txt rules, or even read the file. The crawlers that do follow it are generally the ones you want indexing your site anyway.
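
These rules can also be combined, so that a named bot gets its own path restrictions while everyone else is blocked. A hypothetical example:

# Googlebot may crawl everything except the admin area
User-agent: Googlebot
Disallow: /wp-admin/

# All other bots are blocked from the entire site
User-agent: *
Disallow: /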

Once you have created and uploaded a robots.txt, you can use Google Webmaster Tools to check it for errors and test whether the rules behave as expected against a number of user agents.

Last updated on: Saturday 1st July 2017
