Indexability Checker
Instantly verify whether Google can crawl and index your page. Check meta robots, X-Robots-Tag, canonical tag, robots.txt, HTTP status, and redirect chains.
What the Indexability Checker Verifies
Meta Robots tag
Reads the <meta name="robots"> tag and flags noindex or nofollow directives.
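This check can be sketched with Python's standard-library HTML parser; the `RobotsMetaParser` class name and the sample markup are illustrative, not part of the tool:

```python
from html.parser import HTMLParser

class RobotsMetaParser(HTMLParser):
    """Collects the directives from any <meta name="robots"> tag."""
    def __init__(self):
        super().__init__()
        self.directives = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            a = dict(attrs)
            if a.get("name", "").lower() == "robots":
                # content is a comma-separated directive list
                self.directives += [d.strip().lower()
                                    for d in a.get("content", "").split(",")]

html = '<html><head><meta name="robots" content="noindex, nofollow"></head></html>'
p = RobotsMetaParser()
p.feed(html)
print("noindex" in p.directives)  # True
```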
X-Robots-Tag header
Checks the HTTP response header for server-level noindex instructions that don't appear in the HTML source.
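The header-level check reduces to interpreting the X-Robots-Tag value, which may be a plain directive list or scoped to a specific bot (e.g. `googlebot: noindex`). A minimal sketch, with a hypothetical `x_robots_blocks_indexing` helper:

```python
def x_robots_blocks_indexing(header_value: str) -> bool:
    """Return True if an X-Robots-Tag header value carries a noindex
    directive, whether bare ("noindex, nofollow") or bot-scoped
    ("googlebot: noindex")."""
    directives = [d.strip().lower() for d in header_value.split(",")]
    return any(d.split(":")[-1].strip() == "noindex" for d in directives)

print(x_robots_blocks_indexing("noindex, nofollow"))   # True
print(x_robots_blocks_indexing("googlebot: noindex"))  # True
print(x_robots_blocks_indexing("max-snippet:50"))      # False
```

In practice the header is read from the HTTP response (e.g. `response.headers.get("X-Robots-Tag")`) before being passed to a check like this.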
Canonical tag
Validates the canonical URL and detects mismatches or cross-domain canonicals that attribute content elsewhere.
robots.txt
Fetches and parses robots.txt, evaluating Allow/Disallow rules for User-agent: * against the scanned path.
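The rule evaluation can be reproduced with Python's standard-library `urllib.robotparser`; the rules and URLs below are sample data, and note the stdlib applies rules in file order rather than Google's longest-match precedence:

```python
from urllib.robotparser import RobotFileParser

# Parse rules directly; in practice you would fetch /robots.txt first
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

print(rp.can_fetch("*", "https://example.com/private/report"))  # False
print(rp.can_fetch("*", "https://example.com/blog/post"))       # True
```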
HTTP Status
A 4xx or 5xx status means the page is unreachable — search engines cannot index what they cannot fetch.
Redirect chain
Tracks every redirect hop to the final URL. Long chains dilute PageRank and slow down crawling.
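The hop-tracking logic can be sketched as a loop that follows Location targets and guards against loops and excessive depth. The `resolver` callable stands in for a live HTTP request (it returns the redirect target, or None for a final response), so the traversal can be shown without network access:

```python
def follow_redirects(url, resolver, max_hops=10):
    """Walk a redirect chain. resolver(url) returns the Location target
    for a 3xx response, or None when the URL answers with a final status."""
    chain = [url]
    while (target := resolver(chain[-1])) is not None:
        if target in chain:
            raise ValueError(f"redirect loop at {target}")
        if len(chain) > max_hops:
            raise ValueError("too many redirect hops")
        chain.append(target)
    return chain

# Simulated responses: http -> https -> canonical www host
hops = {
    "http://example.com/": "https://example.com/",
    "https://example.com/": "https://www.example.com/",
}
print(follow_redirects("http://example.com/", hops.get))
# ['http://example.com/', 'https://example.com/', 'https://www.example.com/']
```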
Frequently Asked Questions
What makes a page not indexable?
A page may be blocked from indexing by a noindex meta tag, an X-Robots-Tag header, robots.txt Disallow rules, a 4xx HTTP status, or a canonical pointing to another URL.
What is meta robots?
A meta robots tag is placed in the HTML <head> to instruct crawlers how to handle the page. Common values: noindex, nofollow, index, follow.
robots.txt vs meta robots — what's the difference?
robots.txt controls access at the crawl level: if a path is disallowed, the page won't be fetched. Meta robots works at the indexing level: the page can be fetched but not indexed.
Does a canonical tag affect indexability?
A canonical pointing to a different URL tells Google to index the canonical destination instead. The current page may still be crawled but its content is attributed to the canonical URL.
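Detecting such a mismatch amounts to extracting `<link rel="canonical">` and comparing it to the scanned URL. A minimal sketch, with hypothetical `CanonicalParser` and `canonical_mismatch` names (trailing-slash normalization only; a production check would also normalize scheme, host case, and query strings):

```python
from html.parser import HTMLParser

class CanonicalParser(HTMLParser):
    """Captures the href of the first <link rel="canonical"> tag."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel", "").lower() == "canonical":
            self.canonical = a.get("href")

def canonical_mismatch(page_url, html):
    p = CanonicalParser()
    p.feed(html)
    if p.canonical is None:
        return False  # no canonical declared, nothing to mismatch
    return p.canonical.rstrip("/") != page_url.rstrip("/")

html = '<head><link rel="canonical" href="https://example.com/original"></head>'
print(canonical_mismatch("https://example.com/copy", html))  # True
```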
Also want to verify a backlink?
Check dofollow status, anchor text and link quality score.