Day 2: Tell search engines what to crawl
How to Increase Your Traffic With SEO in 30 Days
The robots.txt file is a plain text file that tells search engine crawlers which directories they may crawl (allow) and which they may not (disallow).
Well-behaved bots fetch the robots.txt file before crawling a website; note that the file is advisory, so compliance depends on the crawler.
Below is the simplest form of robots.txt:
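The standard minimal form, which grants all crawlers full access, looks like this:

```
User-agent: *
Disallow:
```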
In this case, the instructions apply to all bots (*), and there are no crawling restrictions. After creating the robots.txt file, save it in the root directory of your website.
If you do not want a specific area of the website to be crawled, specify it with a "Disallow" directive in the file.
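For example, the following sketch (using a hypothetical /admin/ directory) blocks all bots from that area while leaving the rest of the site crawlable:

```
User-agent: *
Disallow: /admin/
```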
• Use a robots.txt file to give instructions to search engines.
• Make sure that important areas of your website are not excluded from crawling.
• Regularly check the robots.txt file and its accessibility.
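The checks above can be scripted. Below is a minimal sketch using Python's standard urllib.robotparser module (the example.com domain and the /internal/ rule are placeholders); it parses a set of robots.txt rules and verifies which URLs a crawler may fetch:

```python
from urllib.robotparser import RobotFileParser

# Placeholder robots.txt rules; in practice, use set_url(...) and
# read() to fetch and check the live file on your own domain.
rules = """\
User-agent: *
Disallow: /internal/
"""

rp = RobotFileParser()
rp.parse(rules.splitlines())

# Check whether the rules allow a given bot ("*" = any) to fetch a URL.
print(rp.can_fetch("*", "https://www.example.com/index.html"))  # allowed
print(rp.can_fetch("*", "https://www.example.com/internal/x"))  # disallowed
```

Running such a check regularly helps catch cases where important areas of the site are accidentally excluded from crawling.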
A great tool to assist with this: Google Search Console.