What are Robots in regards to SEO?
Robots are applications that “crawl” through websites, documenting (i.e. “indexing”) the information they find. In the context of the robots.txt file, these robots are referred to as User-agents.
You may also hear them called:
Spiders
Bots
Web Crawlers
These are not the official User-agent names of search engine crawlers. In other words, you would not “Disallow” a “Crawler”; you would need to use the official name of the search engine’s crawler (Google’s crawler, for example, is called “Googlebot”), as in the snippet below.
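As a minimal sketch, a rule that blocks only Google’s crawler from a hypothetical /private/ directory (the path is just an example, not something from any real site) would address it by that official name:

User-agent: Googlebot
Disallow: /private/

Because the rule is scoped to Googlebot, any other crawler would still be free to access that path unless it is given its own rule.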
The robots.txt file contains “directives” that tell search engines which pages they are allowed (“Allow”) or not allowed (“Disallow”) to access. Below are example robots.txt files for a few common platforms.
Blogger
User-agent: *
Allow: /
Sitemap: http://example.blogspot.com/feeds/posts/default?orderby=UPDATED
WordPress
User-agent: *
Disallow: /wp-admin/
Sitemap: https://example.com/sitemap_index.xml
Custom Website
User-agent: *
Allow: /
Allow: /sitemap.htm
Sitemap: https://example.com/sitemap.xml