Robots.txt Tester

Test robots.txt rules against any URL path and see which rule matches, along with any Crawl-delay values and Sitemap URLs

A robots.txt tester lets you verify that your robots.txt rules correctly allow or block specific URL paths for any crawler. Paste your robots.txt, choose a user-agent like Googlebot, enter a URL path, and instantly see the result with the matching rule highlighted — so you know exactly why a page is crawlable or not.

Robots.txt Content

Test Parameters

Enter the path without the domain (e.g. /blog/post)

How to Use the Robots.txt Tester

A robots.txt tester is essential before deploying a new robots.txt file or diagnosing why Google is not crawling certain pages. Even a small mistake — like an extra slash or a wildcard in the wrong place — can accidentally block your entire site. This tool lets you verify every rule against real URL paths before it goes live.

Step 1: Paste Your Robots.txt Content

Copy the contents of your robots.txt file and paste it into the text area on the left. If you do not have one yet, click "Load sample" to see a well-structured example. The parser reads the file in real time as you type, so you can edit rules and immediately see how they affect your test.

Step 2: Select a User-Agent

Choose the crawler you want to test from the dropdown: Googlebot, Bingbot, GPTBot (OpenAI's crawler), ClaudeBot (Anthropic's crawler), the wildcard (*) that applies to all bots, or enter a custom user-agent name. The tester implements the standard fallback logic: if there is no block targeting Googlebot specifically, it falls back to the wildcard (*) rules.
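The fallback described above can be sketched in a few lines. This is a minimal TypeScript sketch with hypothetical types, not the tool's actual code:

```typescript
// Hypothetical shape for a parsed user-agent group.
type Group = { agent: string; rules: string[] };

// Pick the group for a bot: a specific match wins; otherwise fall back to "*".
function selectGroup(groups: Group[], userAgent: string): Group | undefined {
  const ua = userAgent.toLowerCase();
  // A group applies when its User-agent token is a prefix of the bot name,
  // so "googlebot" also covers "Googlebot-Image".
  const specific = groups.find(
    (g) => g.agent !== "*" && ua.startsWith(g.agent.toLowerCase())
  );
  return specific ?? groups.find((g) => g.agent === "*");
}
```

With a Googlebot block present, `selectGroup(groups, "Googlebot")` returns that block; for a bot with no dedicated block, the wildcard group is returned instead.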

Step 3: Enter the URL Path

Type the URL path you want to test (without the domain), for example /admin/login or /blog/2026/my-post/. The tester evaluates all matching Allow and Disallow rules for the selected user-agent and determines the result using Google's precedence rules: the longest (most specific) matching rule wins, and if an Allow and a Disallow rule tie in length, the less restrictive Allow rule applies.
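The longest-match precedence can be sketched as follows. This simplified TypeScript sketch uses literal prefix matching only; the real evaluation also has to handle * and $ patterns:

```typescript
type Rule = { type: "allow" | "disallow"; path: string };

// Longest matching rule wins; on a length tie, the less restrictive Allow wins.
function isAllowed(rules: Rule[], path: string): boolean {
  let best: Rule | undefined;
  for (const r of rules) {
    if (!path.startsWith(r.path)) continue; // literal prefix match only
    if (
      !best ||
      r.path.length > best.path.length ||
      (r.path.length === best.path.length && r.type === "allow")
    ) {
      best = r;
    }
  }
  return !best || best.type === "allow"; // no matching rule: allowed by default
}
```

For example, with `Disallow: /admin/` and `Allow: /admin/public/`, the path /admin/public/page.html is allowed because the Allow rule is the longer match.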

Step 4: Review the Result and Matching Rule

The result box shows Allowed (green) or Blocked (red) instantly, along with the exact rule that determined the outcome. The parsed robots.txt view below highlights all rules applicable to the selected user-agent. This makes it easy to spot conflicting rules or overly broad Disallow patterns.

Step 5: Check Sitemaps, Crawl-delay, and Warnings

The tester also extracts Sitemap URLs declared in your robots.txt, displays any Crawl-delay values (and notes which bots respect them), and flags common configuration issues. Warnings include accidentally blocking your entire site, missing sitemaps, blocking CSS/JS files, and other issues that can harm your search rankings or crawl budget.
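Sitemap extraction is simple because Sitemap lines are global rather than tied to a user-agent group. A minimal sketch with a hypothetical helper name:

```typescript
// Collect all Sitemap URLs declared anywhere in a robots.txt file.
function extractSitemaps(robotsTxt: string): string[] {
  return robotsTxt
    .split(/\r?\n/)
    .map((line) => line.split("#")[0].trim()) // drop comments and padding
    .filter((line) => /^sitemap\s*:/i.test(line)) // directive names are case-insensitive
    .map((line) => line.slice(line.indexOf(":") + 1).trim());
}
```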

Frequently Asked Questions

Is this robots.txt tester free to use?

Yes, this robots.txt tester is completely free with no limits. You can test as many rules and URLs as you need without signing up or paying anything. All processing happens entirely in your browser — your robots.txt content is never sent to a server.

Is my robots.txt content safe when using this tool?

Absolutely. Everything runs client-side in your browser using JavaScript. Your robots.txt content and the URLs you test are never sent to any server, never logged, and never stored. You can safely test rules for private or unreleased websites.

Which robots.txt directives does this tester support?

This tester supports the standard directives: User-agent, Allow, Disallow, Crawl-delay, and Sitemap. It handles wildcard (*) patterns in paths, end-of-string anchors ($), and user-agent fallback from specific bot to the wildcard (*) block. It does not support the rarely used Request-rate directive.
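Wildcard and anchor handling is commonly implemented by compiling each path pattern into a regular expression. A sketch, assuming Google's interpretation where * matches any sequence of characters and a trailing $ anchors the end of the URL:

```typescript
// Compile a robots.txt path pattern into an anchored RegExp.
function patternToRegExp(pattern: string): RegExp {
  let out = "^";
  for (let i = 0; i < pattern.length; i++) {
    const ch = pattern[i];
    if (ch === "*") {
      out += ".*"; // * matches any sequence of characters
    } else if (ch === "$" && i === pattern.length - 1) {
      out += "$"; // trailing $ anchors the end of the URL
    } else {
      out += ch.replace(/[.+?^${}()|[\]\\]/g, "\\$&"); // escape regex metacharacters
    }
  }
  return new RegExp(out);
}
```

So `patternToRegExp("/*.pdf$")` matches only URLs ending in .pdf, while `patternToRegExp("/admin")` matches any path that merely begins with /admin.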

Why does the tester show Allowed even though I have a Disallow rule?

In robots.txt, a more specific Allow rule takes precedence over a less specific Disallow rule when both match the same path. For example, if you have Disallow: /admin/ and Allow: /admin/public/, then the path /admin/public/page.html is Allowed because the Allow rule is longer and more specific. This is how Google, Bing, and most major crawlers handle conflicting rules.

What is the difference between Googlebot and the wildcard user-agent?

The wildcard (*) user-agent in robots.txt applies to all crawlers that do not have a specific block targeting them. Googlebot is a specific user-agent. When testing, if there is a Googlebot block, those rules take precedence for Googlebot. If there is no Googlebot-specific block, Googlebot falls back to the wildcard (*) rules. This tester correctly implements this fallback logic.

What is Crawl-delay and which bots respect it?

Crawl-delay tells a crawler how many seconds to wait between requests to your server. Bing and many smaller crawlers respect this directive. Google ignores Crawl-delay in robots.txt entirely; Googlebot manages its crawl rate automatically based on how your server responds. If a Crawl-delay is set, this tester displays the value with a note about which bots support it.

How do I test if my robots.txt is blocking Google?

Enter your full robots.txt content in the text area, select 'Googlebot' as the user-agent, then enter the URL path you want to check (for example, /admin/ or /private/page.html). The tester will show whether that path is Allowed or Blocked for Googlebot and highlight the specific rule that made that determination. You can also use Google Search Console's URL Inspection tool for live verification.

What are common robots.txt mistakes this tool can detect?

This tester warns about several common mistakes: accidentally blocking the entire site with 'Disallow: /', using case-sensitive paths that may not match your actual URLs, missing the Sitemap directive, having rules for non-existent user-agents, and using invalid directive names. It also highlights if your Disallow rule would block CSS or JavaScript files which can harm SEO.
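Checks like these amount to simple lint rules over the parsed file. A sketch with hypothetical directive shapes and a deliberately non-exhaustive rule set:

```typescript
type Directive = { name: string; value: string };

const KNOWN_DIRECTIVES = new Set([
  "user-agent", "allow", "disallow", "crawl-delay", "sitemap",
]);

// Return human-readable warnings for a list of parsed directives.
function lintDirectives(directives: Directive[]): string[] {
  const warnings: string[] = [];
  for (const d of directives) {
    const name = d.name.toLowerCase();
    if (!KNOWN_DIRECTIVES.has(name)) {
      warnings.push(`Unknown directive: ${d.name}`);
    }
    if (name === "disallow" && d.value === "/") {
      warnings.push("Disallow: / blocks the entire site for this user-agent");
    }
    if (name === "disallow" && /\.(css|js)\b/.test(d.value)) {
      warnings.push(`Rule may block CSS/JS assets: ${d.value}`);
    }
  }
  if (!directives.some((d) => d.name.toLowerCase() === "sitemap")) {
    warnings.push("No Sitemap directive found");
  }
  return warnings;
}
```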