When search engines come to your website to crawl a page, they first check for a robots.txt file at the domain root. If the file is found, they read its directives to see whether any pages or directories are blocked from crawling. This file can be created with our Robots.txt Generator. When you use a robots.txt generator, Google and other search engines can check your website and figure out which pages on your site should be excluded. In other words, the file created by a Robots.txt Generator is like the opposite of a sitemap, which indicates which pages to include.
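For illustration, a minimal robots.txt might look like the sketch below. The User-agent, Disallow, Allow, and Sitemap directives are standard robots.txt syntax; the /admin/ path and the sitemap URL are hypothetical placeholders you would replace with your own:

    # Apply these rules to all crawlers
    User-agent: *
    # Block the (hypothetical) admin area from crawling
    Disallow: /admin/
    # Everything else remains crawlable
    Allow: /

    # Optional: point crawlers at your sitemap (placeholder URL)
    Sitemap: https://www.example.com/sitemap.xml

Note that Disallow only asks well-behaved crawlers not to fetch those URLs; it is a convention, not an access control, so sensitive pages still need real authentication.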