# $Id: robots.txt, 2010/03/10
#
# robots.txt
#
# This file is to prevent the crawling and indexing of certain parts
# of your site by web crawlers and spiders run by sites like Yahoo!
# and Google. By telling these "robots" where not to go on your site,
# you save bandwidth and server resources.
#
# This file will be ignored unless it is at the root of your host:
# Used:    http://example.com/robots.txt
# Ignored: http://example.com/site/robots.txt
#
# For more information about the robots.txt standard, see:
# http://www.robotstxt.org/wc/robots.html
#
# For syntax checking, see:
# http://www.sxw.org.uk/computing/robots/check.html
#
# Google Webmasters Help:
# http://www.google.com/support/webmasters/bin/answer.py?hl=en&answer=156449

User-agent: *
Crawl-delay: 10

# Directories
Disallow: /App_Data/
Disallow: /App_LocalResources/
Disallow: /App_GlobalResources/
Disallow: /bin/
Disallow: /Controls/
Disallow: /WebServices/
Disallow: /adm/

# Files
Disallow: /ScriptResource.axd
Disallow: /404Handler.aspx
Disallow: /Global.asax