Free SEO Tool

Free Indexability Checker

Check robots.txt, meta robots, X-Robots-Tag, canonical, and HTTP status in one report. See if Google can index any URL.

4.6 on G2
4.8 on Trustpilot
Used by 50,000+ SEO teams

What the Indexability Checker does

Indexability is the gateway to ranking. Google can only rank a page it can index; pages blocked by robots.txt, marked noindex, or canonicalized to another URL never make it into the index in the first place.

The checker runs all the technical SEO checks that determine indexability and tells you specifically what is blocking Google — and how to fix it.

How to check indexability

Five steps from URL to verdict.

1

Paste any URL

Public URLs only; pages behind a login return errors.

2

Click Check Indexability

The tool fetches robots.txt, parses the meta robots tag, and checks the X-Robots-Tag header, the canonical URL, and the HTTP status.

3

Read the verdict

See the overall indexability verdict, with a pass/fail result for each individual check.

4

Apply the fix

For non-indexable pages, the tool shows specifically what to change.

5

Re-check after fix

Republish, run the URL again, confirm indexable status.
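The checks in steps 2 and 3 can be sketched with Python's standard library alone. This is a hypothetical simplification, not the tool's actual code: the function name and signature are invented, and a real checker would fetch the page, headers, and robots.txt itself rather than receive them as arguments.

```python
from urllib.robotparser import RobotFileParser

def indexability_verdict(url, status, robots_txt,
                         meta_robots="", x_robots="", canonical=None):
    """Combine the raw signals into an indexable/not-indexable verdict."""
    reasons = []
    if status != 200:
        reasons.append(f"HTTP status {status}")
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    if not parser.can_fetch("Googlebot", url):
        reasons.append("blocked by robots.txt")
    # Merge meta robots and X-Robots-Tag into one directive set.
    directives = {d.strip().lower() for d in f"{meta_robots},{x_robots}".split(",")}
    if directives & {"noindex", "none"}:
        reasons.append("noindex directive")
    if canonical and canonical.rstrip("/") != url.rstrip("/"):
        reasons.append(f"canonical points to {canonical}")
    return not reasons, reasons

ok, why = indexability_verdict(
    "https://example.com/blog/post",
    200,
    "User-agent: *\nDisallow: /admin/",
)
print(ok, why)  # True []
```

An empty `reasons` list means every check passed; otherwise each entry names one blocker, mirroring the per-check pass/fail report in step 3.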

When SEO teams use it

Six common workflows.

Pre-launch site audit

Verify all critical pages are indexable before going live. Catches accidental noindex tags and robots.txt blocks.

After CMS migration

A new CMS may ship with different default robots and canonical settings. Audit a sample of pages post-launch.

When a page suddenly drops in rankings

First check: did indexability change? Run the URL through the tool. Often a recent edit added a noindex tag or a wrong canonical.

Fixing the "Crawled - currently not indexed" GSC error

Run the URL here for a clean indexability verdict; if indexable, the issue is content quality or crawl budget.

New page indexing checks

After publishing, verify the new URL is indexable. Faster than waiting for GSC to update.

Staging environment cleanup

Verify that staging URLs return noindex and production URLs are indexable. An easy way to confirm staging is not being indexed by accident.

Platform-specific setup guides

How to control indexability on the platforms most teams use.

WordPress

  1. Yoast SEO and Rank Math show indexability per page in the editor sidebar.
  2. For sitewide changes, edit Settings > Reading > Search Engine Visibility.
  3. Verify with this tool after publish.

Webflow

  1. Project Settings > SEO > Indexing controls. Per-page settings in page settings.
  2. Check robots.txt under SEO settings.
  3. Audit production URLs after publish.

Shopify

  1. Theme settings > Search engines section controls indexing.
  2. For specific pages, edit metafields or use a plugin.
  3. Audit collection, product, and blog URLs separately.

Next.js

  1. Use next/head with <meta name="robots" content="noindex"> for pages you want hidden.
  2. For server-side, set X-Robots-Tag in custom server middleware.
  3. Audit each route post-deploy.

Custom servers (nginx / Apache)

  1. nginx: add_header X-Robots-Tag "index, follow" in server block.
  2. Apache: Header set X-Robots-Tag "index, follow" in .htaccess.
  3. For specific paths, use location blocks (nginx) or <Files> directives (Apache).
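Whichever server sets the header, you can audit the value it sends with a small helper. A minimal sketch (the function is hypothetical): it handles the plain form and the user-agent-scoped form such as "googlebot: noindex", but not date-valued directives like unavailable_after.

```python
def x_robots_allows_indexing(header_value, bot="googlebot"):
    """True unless the X-Robots-Tag value noindexes this bot (or all bots)."""
    value = header_value
    agent, sep, rest = header_value.partition(":")
    if sep and "," not in agent:
        # User-agent-scoped form, e.g. "googlebot: noindex".
        if agent.strip().lower() != bot:
            return True  # directive targets a different crawler
        value = rest
    directives = {d.strip().lower() for d in value.split(",")}
    return not (directives & {"noindex", "none"})

print(x_robots_allows_indexing("index, follow"))     # True
print(x_robots_allows_indexing("googlebot: noindex"))  # False
```

Run it against the header your staging and production servers actually return to confirm the configs above took effect.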

Grigora vs. other indexability tools

Side-by-side feature comparison.

| Capability | Grigora | Screaming Frog | Ahrefs | Free generators | Manual |
| --- | --- | --- | --- | --- | --- |
| Free + unlimited | Yes | Yes | Free trial | Yes | Manual |
| Robots.txt + meta + X-Robots check | All three | All three | Yes | Partial | Manual |
| Canonical analysis | Yes | Yes | Yes | No | Manual |
| HTTP status check | Yes | Yes | Yes | Yes | Manual |
| Specific fix recommendations | Yes | No | Yes | No | Manual |
| No signup | Yes | Account required | Account required | Yes | Yes |
| Bulk URL audit | On request | Yes | Yes | No | Manual |
| Result speed | <5 sec | <10 sec | <5 sec | <5 sec | Manual |

Common errors and how to fix them

Eight issues teams hit when debugging indexability.

Page is "not indexable" but I want it indexed

Cause: noindex tag, robots.txt block, or canonical to another URL.

Fix: Remove the noindex, edit robots.txt, or fix canonical depending on what the tool flagged.

Tool returns 403 or "blocked"

Cause: Target site blocks our crawler.

Fix: Inspect the URL manually with Search Console's URL Inspection tool.

Returns "indexable" but page is not in Google

Cause: Crawl budget, low authority, or duplicate content elsewhere.

Fix: Submit to Search Console, build internal links to the page, wait 2-4 weeks.

Robots.txt is blocking critical pages

Cause: Overly broad Disallow rule.

Fix: Edit robots.txt to allow the specific paths you need crawled, then verify the rules in Search Console's robots.txt report.
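A fix like this can be verified offline with Python's urllib.robotparser before you deploy it. The file contents below are an invented example; note that Python's parser applies rules in file order (first match wins), so the Allow carve-out is listed before the broad Disallow here, whereas Google itself uses longest-match precedence.

```python
from urllib.robotparser import RobotFileParser

# Fixed robots.txt: carve out the path that must stay crawlable,
# then keep the broad Disallow for the rest of the section.
FIXED_ROBOTS = """\
User-agent: *
Allow: /blog/guides/
Disallow: /blog/
"""

parser = RobotFileParser()
parser.parse(FIXED_ROBOTS.splitlines())

print(parser.can_fetch("Googlebot", "https://example.com/blog/guides/seo"))  # True
print(parser.can_fetch("Googlebot", "https://example.com/blog/draft"))       # False
```

Running this for each critical path before pushing the file catches an overly broad Disallow without waiting on a recrawl.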

X-Robots-Tag noindex on entire domain

Cause: The server config sends a noindex header for everything by default.

Fix: Edit nginx/Apache config to allow indexing. Common on staging environments accidentally pushed to production.

Canonical points to wrong URL

Cause: CMS bug or theme issue.

Fix: Edit canonical in the page head; many SEO plugins have this option.
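Before editing, confirm what the page head actually declares. A stdlib-only sketch (the class and function names are hypothetical) that collects every rel=canonical link, since finding more than one is itself a bug worth fixing:

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collect the href of every rel=canonical <link> in the document."""
    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        attr = dict(attrs)
        if tag == "link" and attr.get("rel", "").strip().lower() == "canonical":
            self.hrefs.append(attr.get("href"))

def find_canonicals(html):
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.hrefs

page = '<head><link rel="canonical" href="https://example.com/a"></head>'
print(find_canonicals(page))  # ['https://example.com/a']
```

If the list is empty, missing, duplicated, or pointing at the wrong URL, you know exactly which edit the CMS or theme needs.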

Mixed signals (canonical + noindex)

Cause: Contradictory configuration.

Fix: Pick one signal. If the canonical points elsewhere, you do not need noindex; if the page is noindexed, it does not need a canonical to another URL.

Page returns 200 but is empty

Cause: A JavaScript-rendered SPA without server-side rendering.

Fix: Server-render or pre-render the content so crawlers receive it.
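One way to spot an empty SPA shell is to count the text a crawler would see without executing JavaScript. A rough heuristic sketch (the names and the 200-character threshold are arbitrary choices, not the tool's actual logic):

```python
from html.parser import HTMLParser

class VisibleTextCounter(HTMLParser):
    """Count text characters outside <script>/<style> blocks."""
    def __init__(self):
        super().__init__()
        self.chars = 0
        self._skip_depth = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.chars += len(data.strip())

def looks_empty(html, threshold=200):
    counter = VisibleTextCounter()
    counter.feed(html)
    return counter.chars < threshold

spa_shell = '<html><body><div id="root"></div><script>renderApp()</script></body></html>'
print(looks_empty(spa_shell))  # True
```

If the raw HTML fails this kind of check while the browser shows a full page, the content only exists after JavaScript runs, which is exactly the SSR/pre-rendering gap described above.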

Original data from our 2026 indexability audit

Across 4,000 random URLs.

12%: pages we audited that were unintentionally blocked by robots.txt
noindex meta tag (54%): the most common indexability issue
8%: pages with conflicting indexing signals
3 days: median time to fix and re-index after the audit

Frequently asked questions

Twelve answers about indexability.

Related free tools

Other utilities that pair with the Indexability Checker.

Audit a URL right now

Paste a link, see the verdict, fix the blocker. Free, unlimited, no signup.

Try the Indexability Checker