
The honest llms.txt utility check 2026: 10% adoption, 0 confirmed AI engines using it

llms.txt adoption sits at ~10.13% of 300,000 sampled domains. No AI engine has confirmed using it. John Mueller: no AI system currently uses llms.txt. 8 of 9 sites in a Search Engine Land audit saw zero traffic change.

By Billy Reiner · Updated May 13, 2026 · 10 min read

llms.txt was proposed by Jeremy Howard in September 2024; the spec lives at llmstxt.org. Adoption sits at roughly 10.13% of 300,000 sampled domains in early 2026. No major AI engine has confirmed using it. John Mueller said publicly that no AI system currently uses llms.txt. Eight of nine sites in a Search Engine Land audit saw zero traffic change. Symbolic future-proofing, not a citation lever.

The most over-hyped technique in the GEO discipline right now is llms.txt. The most under-shipped one is server-rendered schema. The two are correlated. Agencies write a glowing blog post about adding llms.txt to their client’s site, the client checks the URL, sees the file is there, and assumes the AI-citation work is done. The site still does not get cited, because the work that actually moves citation in 2026 was never started.

This article is the honest 2026 utility check on llms.txt. Every number resolves to a primary source in the research register. The framing is the one we hold internally at ConnectEra: ship it, but do not oversell it.

What is llms.txt and what is it not?

llms.txt is a plain-text file at the root of a domain that lists canonical content URLs for AI language models, proposed by Jeremy Howard of Answer.AI on 2024-09-03. The spec lives at llmstxt.org and the AnswerDotAI/llms-txt repo on GitHub; no revision has shipped in 2026. It is not a confirmed retrieval signal — no major AI engine has stated their crawler consumes it — and it is not a substitute for schema, server-rendered HTML, or the answer-capsule format.

llms.txt was proposed as an analog to robots.txt or sitemap.xml: a discoverable, machine-readable file at a fixed location that tells AI engines which content on the site is worth retrieving. The Ahrefs explainer and the GetPublii guide both lay out the same shape — a markdown-ish file with a site description, a curated list of canonical URLs, and optional grouping into sections. The spec is simple and the implementation is cheap. The problem is not the spec.
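For concreteness, here is a minimal file in the shape the spec describes: an H1 project name, a blockquote summary, and H2-grouped link lists, with an Optional section for secondary content. The site name and URLs below are placeholders, not a real deployment.

```markdown
# Example Co

> Example Co builds scheduling software for dental practices. The docs,
> pricing, and comparison pages below are the canonical versions.

## Docs

- [Quickstart](https://example.com/docs/quickstart): install and first booking in five minutes
- [API reference](https://example.com/docs/api): REST endpoints and auth

## Company

- [Pricing](https://example.com/pricing): current plans and tiers

## Optional

- [Blog archive](https://example.com/blog): secondary content, skippable
```

The file lives at the domain root as /llms.txt, the same fixed-location convention as robots.txt and sitemap.xml.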

This article sits inside the technical-depth pillar on getting cited by AI. The pillar covers the four layers that compound — schema, entity graph, freshness, answer capsules — and treats llms.txt as the symbolic layer above all of them. The framing matters because the GEO community keeps mixing structural and symbolic techniques into the same recommendation list. They are not the same.

The 10.13% adoption number and what it doesn’t include

What is the actual adoption rate of llms.txt in 2026?

SE Ranking’s early-2026 sample of 300,000 domains measured a 10.13% adoption rate for llms.txt. The skew is unusual: adoption is higher among medium- and low-traffic sites and lower among authoritative sites. The high-DR domains that produce the citation supply AI engines actually retrieve from are not publishing llms.txt at meaningful rates. The 10.13% headline number is real. What it doesn’t tell you is that adoption concentrates in the bottom 90% of the traffic distribution, while the sites most worth citing sit at the top and largely abstain.

The skew is the part most write-ups miss. A flat adoption rate of 10% sounds like a slow-but-real diffusion curve. The composition tells a different story. SE Ranking’s 2026 breakdown shows medium- and low-traffic sites driving most of the published files, with authoritative high-DR domains conspicuously absent. The pattern is what you see when a community of early-adopter agencies and SEO tooling vendors push a feature their own clients ship, but the editorial sites and platforms whose content actually populates AI Overview answers do not bother.

Compare this to the diffusion shape of meta-descriptions in 2010 or schema.org in 2014. Both of those started in the high-DR end and worked downward as tooling caught up. llms.txt is doing the opposite, and that shape is consistent with a feature whose perceived value is mostly signalling rather than measurable retrieval lift.

The number to anchor on is not 10.13% in isolation. It is “10.13% of a 300,000-domain sample, weighted toward the bottom 90% of the traffic distribution.” If a site you would actually want cited — a major editorial outlet, a Wikipedia-tier reference, a category-leader brand — does not publish llms.txt, the file is doing less work than the adoption headline implies.

Why John Mueller’s stance matters

What did John Mueller actually say about llms.txt?

John Mueller’s public position is straightforward: no AI system currently uses llms.txt as a retrieval signal. Webflow’s own llms.txt explainer quotes the position alongside their own honest framing of the file’s limits. The Lumentir 2026 analysis (“LLMS.txt: Dead, or Never Existed at All”) reaches the same conclusion from the engine-confirmation angle. Google explicitly does not endorse the file; the spec appearing on some Google-owned properties is a CMS-default artifact, not an endorsement.

Mueller’s stance matters not because Google is the only voice that matters in AI search — it isn’t, and Bing/ChatGPT and Perplexity are independent retrieval pipelines — but because Mueller is the senior public-facing search engineer who has historically called these things straight. When he confirms a signal works, the SEO community cites him for years. When he says no AI system currently uses llms.txt, that is the closest thing to engine confirmation the spec is going to get for 2026.

The corroborating evidence sits in the Search Engine Land audit. The audit ran a controlled implementation of llms.txt across nine sites and measured AI-source traffic, citation count, and AI Overview presence before and after. Eight of the nine sites saw zero traffic change post-implementation. The ninth showed a small uptick that the auditor attributed to coincident schema work, not to llms.txt itself. A 1-of-9 ambiguous result is not a citation lever.

This is the divergence point between the GEO community’s hype curve and the engine reality. Most “we shipped llms.txt and saw results” posts are conflating llms.txt with the full GEO retainer that shipped alongside it — schema, freshness, answer capsules, server-side rendering. The Search Engine Land audit isolated llms.txt as the only change. The result was nothing.

Why we still ship it (the symbolic-future-proofing argument)

If llms.txt does nothing measurable, why ship it?

Three reasons, all narrow. The implementation cost is near zero — a few minutes to author the file and place it at the root. The spec is stable and may activate later as a retrieval signal. Anthropic and OpenAI both publish llms.txt for their own API documentation, which signals vendor familiarity even without consumption commitment. We ship it on every ConnectEra build under one condition: it never displaces the structurally load-bearing work. If a client is choosing between llms.txt and schema completeness, schema wins every time.

The honest case for llms.txt is the cheapness case. A static llms.txt at the root of the domain costs about ten minutes of editorial authoring, persists across deploys, and produces no maintenance burden. If the spec ever activates as a retrieval signal — and it might — the site is already compliant. If it never activates, no harm done. That is a reasonable trade for ten minutes of work.
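To make the cheapness case concrete, the whole authoring job can be sketched as a few lines of Python run once in the deploy step. The site name, description, and URLs here are hypothetical, and this is a minimal sketch, not a prescribed tool.

```python
# Minimal sketch: render an llms.txt body from a curated URL list.
# All names and URLs are placeholder examples.

def build_llms_txt(site_name, description, sections):
    """Render the spec's shape: H1 title, blockquote summary,
    then H2 sections of markdown link lists."""
    lines = [f"# {site_name}", "", f"> {description}", ""]
    for heading, links in sections.items():
        lines.append(f"## {heading}")
        for title, url, note in links:
            lines.append(f"- [{title}]({url}): {note}")
        lines.append("")
    return "\n".join(lines)

content = build_llms_txt(
    "Example Co",
    "B2B SaaS documentation and pricing pages.",
    {
        "Docs": [("Quickstart", "https://example.com/docs/quickstart",
                  "setup in five minutes")],
        "Company": [("Pricing", "https://example.com/pricing",
                     "plans and tiers")],
    },
)
print(content)
```

Write the output to the web root once and the static file persists across deploys with no maintenance burden, which is the entire cost side of the trade.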

The dishonest case is treating it as a citation lever in May 2026. The four layers that have measured 2026 citation lift are well-documented. The 40-60 word answer-capsule format earns the citation at the passage level. FAQPage schema wraps the capsule in a machine-readable Q&A block. The entity graph chained through sameAs and knowsAbout anchors the page to a verifiable identity. The 458-day content-freshness premium keeps the page in the retrieval pool. None of those four are symbolic. All four have primary 2026 measurement behind them.
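A hedged sketch of two of those layers in JSON-LD: a FAQPage block wrapping a capsule-length answer, and an Organization node chained through sameAs and knowsAbout. The organization name, URLs, and topics are illustrative placeholders, not a reference implementation.

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "FAQPage",
      "mainEntity": [
        {
          "@type": "Question",
          "name": "Does llms.txt drive AI citations in 2026?",
          "acceptedAnswer": {
            "@type": "Answer",
            "text": "No major AI engine has confirmed consuming llms.txt as a retrieval signal. Ship it because it costs minutes, but route real effort to server-rendered schema, entity markup, answer capsules, and freshness."
          }
        }
      ]
    },
    {
      "@type": "Organization",
      "@id": "https://example.com/#org",
      "name": "Example Co",
      "sameAs": [
        "https://www.linkedin.com/company/example-co",
        "https://www.crunchbase.com/organization/example-co"
      ],
      "knowsAbout": ["generative engine optimization", "structured data"]
    }
  ]
}
```

Embedded in a script tag of type application/ld+json and rendered server-side, so the crawler sees it without executing JavaScript.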

llms.txt sits above this stack as the symbolic future-proofing layer. The phrasing matters: above, not parallel to. A site with all four structural layers and no llms.txt gets cited in 2026. A site with llms.txt and none of the four structural layers does not. The order of work follows the order of measurable lift.

The Anthropic and OpenAI case is interesting precisely because it cuts in two directions. Anthropic publishes llms.txt at docs.claude.com (8,364 tokens) and llms-full.txt (481,349 tokens) for their own API documentation; OpenAI publishes at platform.openai.com/docs/llms.txt. Both vendors understand the spec well enough to author non-trivial files for their own products. Neither has stated their crawlers consume llms.txt from third-party sites as a retrieval signal. Vendor publishing is not vendor consumption. The pattern is consistent with “we like the spec for our own docs and have not committed to using it on the retrieval side.”

Who’s shipping llms.txt correctly: Anthropic, OpenAI, Duda, Framer, Webflow

Which platforms support llms.txt natively in 2026?

The reference points, in brief. Anthropic and OpenAI publish llms.txt for their own API docs. Webflow added manual llms.txt upload via Site settings on 2025-04-08; auto-generation is still on the wishlist (idea WEBFLOW-I-32953). Framer ships native well-known files support including llms.txt on Pro tier and above, and auto-serves a markdown version of every page to AI crawlers. Duda auto-generates llms.txt on every publish, unrestricted by tier. Wix gates llms.txt management to premium eCommerce. WordPress relies on Rank Math (2025) and Yoast (2026) plugins.

The platform layer is where most of the live confusion sits, because llms.txt support varies dramatically by platform and the marketing claims do not always match the documentation.

Webflow added manual llms.txt upload in April 2025, not April 2026. The feature lives in Site settings for users on CMS or Business plans, and the Webflow Help Center upload article is the canonical reference. The shipped feature is upload-only — auto-generation is still a community wishlist item open at Webflow Wishlist idea WEBFLOW-I-32953 as of May 2026. The NextGen CMS general-availability launch on 2026-04-09 is a separate event; llms.txt is not a NextGen-CMS-specific feature, it is site-wide.

Framer ships the most complete implementation in 2026. The well-known files panel supports llms.txt, robots.txt, security.txt, and up to 30 well-known files per project on Pro ($30/mo) and above. The deeper feature is that Framer auto-serves a markdown version of every page at the same URL when the request comes from AI tools, and actively monitors crawl access for GoogleBot, GoogleOther, BingBot, GPTBot, AhrefsBot, and PerplexityBot by default. This is what a platform that took GEO seriously looks like. We cover the full Framer pivot in the Framer GEO 2026 piece.

Duda is the only agency builder that auto-generates llms.txt on every publish, unrestricted by tier. The file includes structured data, live URLs, meta descriptions, and skips drafts and noindex pages. Compared to Wix — which gates llms.txt management to premium eCommerce — Duda’s open-by-default posture is structurally more agency-friendly. The full comparison sits in the Duda llms.txt vs. Wix llms.txt write-up, which covers what shipping llms.txt correctly looks like even if the utility is debated.

WordPress relies on plugins. Rank Math added llms.txt generation and an AI-search traffic tracker in 2025; Yoast added its own generator in 2026. Both are mature implementations; neither moves the WordPress citation ceiling on its own — that ceiling is set by the page-builder stack underneath, which we cover in the WordPress Bricks vs. Elementor GEO breakdown.

The pattern across these reference points is the same. Platforms that ship llms.txt correctly — Framer, Duda, Anthropic’s and OpenAI’s docs sites — also ship the structural layers that actually move citation. The platforms that lead with llms.txt as the marketing headline (Wix’s premium-eCommerce gating, the more breathless agency posts about Webflow’s April 2025 upload) tend to be papering over weaker structural support underneath.

The honest framing for 2026

If you are deciding whether to ship llms.txt on your site this quarter, the honest framing is short.

Ship it if the cost is near zero. A static file at the root, ten minutes to author, no ongoing maintenance burden. There is no reason not to.

Do not displace structural work to ship it. If shipping llms.txt costs a developer two hours that would otherwise go to schema completeness, the developer should ship the schema. The schema has measured 2026 citation lift; llms.txt does not.

Do not market it as a citation lever. Most agency posts that lead with “we added llms.txt and traffic increased” are conflating the file with the full GEO retainer that shipped alongside it. The Search Engine Land audit isolated llms.txt and saw zero change on eight of nine sites. That is the cleanest evidence available, and it points one direction.

Track the spec. If OpenAI, Anthropic, Google, or Perplexity ever publicly confirms their crawler consumes llms.txt as a retrieval signal, the calculus changes immediately. Until then, the file is symbolic future-proofing, and that framing is the one your retainer should use with clients.

The technique stack that compounds in 2026 is well-defined, and llms.txt is not load-bearing in it. The four layers — the answer capsule, the FAQ schema, the entity graph, the freshness mechanic — are where the citation lift comes from. llms.txt sits above them as a free option on a future signal. Treat it that way.

Run a ConnectEra GEO audit on your site — we ship llms.txt as a default on every build, and we tell clients exactly what it does and does not do. The structural work that moves citation in 2026 is what the retainer is for; the symbolic file is what we add for free on top.

Frequently asked questions

Does llms.txt actually drive any AI citations in 2026?
No major AI engine has confirmed using llms.txt as a retrieval signal as of May 2026. SE Ranking's early-2026 sample of 300,000 domains measured a 10.13% adoption rate. Search Engine Land audited nine sites that shipped llms.txt; eight saw zero traffic change post-implementation. Anthropic and OpenAI both publish llms.txt files for their own API documentation, but vendor publishing is not the same as the vendor's crawler consuming the file from your site. We still ship llms.txt on every ConnectEra build because the cost is near zero and the spec may activate later — but we frame it as symbolic future-proofing, not as a citation lever.
Why is John Mueller against llms.txt?
John Mueller's public stance is that no AI system currently uses llms.txt as a retrieval signal — not that the spec is bad in principle, but that no engine has confirmed consuming it. Webflow's own llms.txt explainer post quotes Mueller's position alongside their own honest framing of the file's limitations. The Lumentir 2026 analysis (“LLMS.txt: Dead, or Never Existed at All”) reaches the same conclusion from the engine-confirmation angle. Google explicitly does not endorse llms.txt; the file appearing on some Google-owned properties is a CMS-default artifact, not an endorsement.
Should I still ship llms.txt if engines aren't using it?
Yes — if the cost is near zero. The honest case for llms.txt in 2026 is symbolic future-proofing. The spec is stable, Anthropic and OpenAI are publishing for their own docs, and the file may become a retrieval signal at some future date. The dishonest case is treating it as a citation lever today. The structurally load-bearing techniques in 2026 are server-rendered schema, attribute-rich entity graphs, the 40-60 word answer-capsule format, and content freshness. llms.txt is the layer above all of them — present, harmless, possibly useful one day, demonstrably not moving traffic this quarter.
Will OpenAI or Anthropic ever confirm using llms.txt?
Possibly, but no public roadmap commits to it. Anthropic publishes llms.txt at docs.claude.com (8,364 tokens) and llms-full.txt (481,349 tokens) for their own API documentation; OpenAI publishes at platform.openai.com/docs/llms.txt. Both vendors clearly understand the spec well enough to publish it for themselves. Neither has stated their crawlers consume llms.txt as a retrieval signal from third-party sites. Until that confirmation arrives, treat it as a future option, not a current lever — and route the implementation effort to the layers that have measured 2026 lift today.

Written by

Billy Reiner · Founder · ConnectEra

Billy builds AI-citable sites for practices, advisors, and B2B SaaS. Over 80 migrations in the last 18 months — every one with a live audit, a fixed price, and a 7-day rebuild.

When you're ready

Ready to be the page ChatGPT cites?

Tell us where your site is at. You get back your free growth plan — your platform blocker, your industry's citation gap, and the next move. Yours to keep, whether you hire us or not.

Get my free growth plan
