
Day 2: Tell search engines what to crawl

How to Increase Your Traffic With SEO in 30 Days



robots.txt is a plain text file that tells search engine crawlers which directories and files they may crawl (allow) and which they may not (disallow).

Every well-behaved bot requests the robots.txt file before it starts crawling the website.
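To illustrate this convention: a crawler always derives the robots.txt location from the root of the host, no matter which page it wants to fetch. A minimal sketch in Python (the page URL is just an example):

```python
from urllib.parse import urlsplit, urlunsplit

def robots_txt_url(page_url: str) -> str:
    """Return the robots.txt URL a crawler checks before fetching page_url."""
    parts = urlsplit(page_url)
    # robots.txt always lives at the root of the host, regardless of the path
    return urlunsplit((parts.scheme, parts.netloc, "/robots.txt", "", ""))

print(robots_txt_url("https://www.example.com/blog/post-1"))
# https://www.example.com/robots.txt
```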

Using the robots.txt file helps you ensure that search engines can reach all the important content on your website. If important pages or resources such as JavaScript and CSS files are excluded from crawling, search engines will not be able to render and index your website correctly.

Below is the simplest form of robots.txt:

User-agent: *

Disallow:

In this case, the instructions apply to all bots (*), and the empty Disallow line means there are no crawling restrictions. After creating the robots.txt file, save it in the root directory of your website.

If you do not want a specific area of the website to be crawled, specify it with a “Disallow” rule in the file:

User-agent: *

Disallow: /thisdirectory
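You can check how crawlers interpret such rules without crawling anything, using Python's standard urllib.robotparser module (the directory name mirrors the example above; the domain is a placeholder):

```python
from urllib.robotparser import RobotFileParser

rules = """
User-agent: *
Disallow: /thisdirectory
"""

parser = RobotFileParser()
parser.parse(rules.splitlines())

# Anything under /thisdirectory is blocked; everything else is allowed
print(parser.can_fetch("*", "https://www.example.com/thisdirectory/page.html"))
print(parser.can_fetch("*", "https://www.example.com/other/page.html"))
```

This is a handy way to test a new Disallow rule before uploading the file, so you do not accidentally block important content.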

Crawling tips:

• Use a robots.txt file to give instructions to search engines.

• Make sure that important areas of your website are not excluded from crawling.

• Regularly check the robots.txt file and its accessibility.
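The last tip can be automated with a small script that requests the file and reports its HTTP status. This is a sketch, not a monitoring tool; the domain is a placeholder, and the status interpretations are a rough rule of thumb:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def robots_status(base_url: str) -> int:
    """HTTP status of base_url/robots.txt, or 0 if the host is unreachable."""
    try:
        with urlopen(base_url.rstrip("/") + "/robots.txt", timeout=10) as resp:
            return resp.status
    except HTTPError as err:
        return err.code
    except URLError:
        return 0

def interpret_status(code: int) -> str:
    """Rough reading of what the status means for crawlers."""
    if code == 200:
        return "accessible: crawlers will follow its rules"
    if code == 0:
        return "unreachable: investigate hosting or DNS"
    return f"not served correctly (HTTP {code}): check your server configuration"

# Example usage (performs a live request):
# print(interpret_status(robots_status("https://www.example.com")))
```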

A great tool to assist with this is Google Search Console, which can show you how Google reads your robots.txt file.


Want to know more? Give us a call.