Create a properly formatted robots.txt file for your website. Control which parts of your site search engine crawlers are allowed to request.
Configure your rules and click "Generate Robots.txt"
A robots.txt file tells search engine crawlers which pages or sections of your site they may or may not request. It's placed at the root of your website (e.g., https://example.com/robots.txt) and is one of the first files a crawler fetches before crawling your content. Note that robots.txt controls crawling, not indexing: a blocked URL can still appear in search results if other sites link to it.
User-agent: Specifies which crawler the rules apply to. Use * for all crawlers or specify individual bots like Googlebot.
Disallow: Tells crawlers not to access certain paths. Disallow: /admin/ blocks the /admin/ directory.
Allow: Overrides a Disallow rule for specific paths within a blocked directory.
Sitemap: Points crawlers to your XML sitemap for better discovery of your pages.
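Put together, the four directives above might look like this (the paths and sitemap URL are illustrative):

```
User-agent: *
Disallow: /admin/
Allow: /admin/public/
Sitemap: https://example.com/sitemap.xml
```

Here every crawler is blocked from /admin/, except that /admin/public/ is re-opened by the Allow rule, and the Sitemap line points crawlers at the full list of indexable URLs.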
Many website owners now block AI training crawlers like GPTBot (OpenAI), CCBot (Common Crawl), and others. Use the "Block AI Crawlers" preset to add these rules automatically.
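The AI-blocking rules generated by that preset follow the same syntax: one User-agent group per bot, each with a blanket Disallow. A minimal example blocking the two crawlers named above:

```
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /
```

`Disallow: /` blocks the entire site for the named crawler; compliance is voluntary, so this only deters bots that honor robots.txt.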