
The AI citation entity graph 2026: sameAs, knowsAbout, hasCredential, areaServed

Person + hasCredential + knowsAbout + sameAs raises entity-confidence in AI citation. Schema App's case study added spatialCoverage + audience + sameAs and lifted impressions 46% and clicks 42% on non-branded queries.

By Billy Reiner · Updated May 13, 2026 · 11 min read

The AI citation entity graph chains Person, hasCredential, knowsAbout, and sameAs into an authoritative graph linked to Wikidata, LinkedIn, ORCID, and named credentialing bodies. Schema App documented 46% more impressions and 42% more clicks on non-branded queries from spatialCoverage, audience, and sameAs additions. Tier 1 schema completion lifts AI Overview appearance by up to 40% and carries a 2.5× higher citation chance.

The biggest shift in 2026 GEO is that AI engines stopped citing pages and started citing entities. A page is a URL the engine retrieved. An entity is a node in a graph the engine resolves through Wikidata, LinkedIn, ORCID, and the credentialing-body roster, then attaches to the page as the authoritative source on the topic. If your site has a great page but no entity behind it, the engine cites a competitor whose entity graph is denser even when their page is thinner.

This article is the technical-depth piece on the entity graph properties that actually move AI citation in 2026. Every number resolves to a primary research source. The four properties — sameAs, knowsAbout, hasCredential, areaServed — are the load-bearing ones, and the rich-result-targeted properties most agencies still optimize for are not.

What is the AI citation entity graph in 2026?

The AI citation entity graph is the chained set of schema.org properties that link a Person or Organization node on your site to authoritative external nodes — Wikidata, LinkedIn, ORCID, credentialing bodies, podcasts, GitHub. The four load-bearing properties in 2026 are sameAs (the link chain), knowsAbout (the topic vector), hasCredential (the verifiable claim), and areaServed (the spatial vector). Together they raise entity-confidence in AI Overview citation, knowledge-panel display, and author-attribution ranking.
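
Here is a minimal JSON-LD sketch of a Person node carrying all four properties. Every name, URL, and identifier below is a deliberate placeholder, not a recommendation; the real values are your own verified profiles:

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Example",
  "url": "https://example.com/team/jane-example",
  "sameAs": [
    "https://www.linkedin.com/in/jane-example",
    "https://orcid.org/0000-0000-0000-0000",
    "https://www.wikidata.org/wiki/Q00000000"
  ],
  "knowsAbout": ["Estate planning for physicians"],
  "hasCredential": {
    "@type": "EducationalOccupationalCredential",
    "credentialCategory": "Professional designation",
    "name": "CFP",
    "recognizedBy": {
      "@type": "Organization",
      "name": "CFP Board",
      "sameAs": "https://www.cfp.net/"
    }
  },
  "areaServed": {
    "@type": "AdministrativeArea",
    "name": "Texas"
  }
}
```

The recognizedBy sub-node is where most implementations break: it has to be an Organization with its own sameAs back to the issuing body, not a bare string.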

The shift this represents is mechanical, not philosophical. AI engines extract a passage from a page, then ask whether the entity behind the page is authoritative on the topic of the passage. The engine resolves that authority by walking the entity graph: does the source node link to credentialing bodies through sameAs? Does the knowsAbout property actually match the topic of the passage? Does the hasCredential survive the cross-check against the credentialing body’s own roster? When the answers come back yes, the citation lands. When they come back no, the engine selects a different source — usually one that ranks lower but resolves to a denser entity.

This compounds with everything else in the technical citation pillar. Schema scaffolds the entity graph; the entity graph scaffolds the answer capsule; the capsule scaffolds the citation. Each layer reinforces the next, and the entity graph is the layer that decides whether the page even enters the citation candidate pool.

The four properties that move citation

Which entity graph properties actually move AI citation in 2026?

Four. sameAs is the link chain — Wikidata, LinkedIn, ORCID, credentialing-body directory, podcast or GitHub. knowsAbout is the topic vector the engine maps the entity to for query-to-entity matching. hasCredential is the verifiable claim that survives an E-E-A-T pass against the issuing body’s roster. areaServed is the spatial vector that anchors local and regional citations. Lead Gen Economy’s 2026 E-E-A-T research identifies these as the chain that raises entity-confidence; Schema App’s case study isolated three of them and measured a 46% impression lift on non-branded queries.

Each property is doing different work, and the work is non-substitutable. Adding more sameAs links does not compensate for an empty knowsAbout. Adding hasCredential without a sameAs back to the issuing body fails the verification cross-check. The chain is the unit, not any single property.

sameAs — the link chain. sameAs takes an array of URLs that point at other surfaces representing the same entity. For a Person, that means LinkedIn, ORCID, the Wikidata Q-ID, the credentialing body’s directory, podcast host pages, and the GitHub or industry-specific profile (RealSelf for cosmetic providers, NAPFA roster for fee-only advisors). For an Organization, that means the LinkedIn company page, the trade association roster (AACD for cosmetic dentists, ASPS for plastic surgeons), and the platform-of-record review profile (G2 for B2B SaaS, Google Business Profile for local). The chain works because each target node references the source — LinkedIn lists your firm URL, the credentialing body lists your name — and the engine treats the bidirectional reference as verification.
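
In markup, the chain is nothing more than an array on the entity node; the weight comes from which targets link back. An Organization-side sketch, with every URL a placeholder:

```json
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Med Spa",
  "url": "https://example-medspa.com/",
  "sameAs": [
    "https://www.linkedin.com/company/example-med-spa",
    "https://www.wikidata.org/wiki/Q00000000",
    "https://directory.example-association.org/example-med-spa"
  ]
}
```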

knowsAbout — the topic vector. knowsAbout names the topics the entity is authoritative on. The mistake most agencies make is to populate it with marketing keywords (“financial planning”, “wealth management”). The engine cross-checks the values against the actual content of the pages on the site. A Person with knowsAbout=“estate planning for physicians” needs to have published material on it; otherwise the property reads as a claim without evidence. When the values resolve cleanly to topic clusters with on-site content, knowsAbout becomes the vector the engine uses to match the entity to a query.
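
One way to make the claim-plus-evidence pairing explicit is to mix bare strings with Thing objects whose url points at the on-site topic cluster (knowsAbout accepts Text, URL, or Thing). A sketch with placeholder values:

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Jane Example",
  "knowsAbout": [
    "Estate planning for physicians",
    {
      "@type": "Thing",
      "name": "Equity compensation planning",
      "url": "https://example.com/guides/equity-compensation"
    }
  ]
}
```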

hasCredential — the verifiable claim. hasCredential takes an EducationalOccupationalCredential object with credentialCategory and recognizedBy fields. recognizedBy points at the issuing body’s Organization node, and that node should include a sameAs to the body’s canonical URL. The engine resolves the credential by walking from the Person through hasCredential to the issuing body, then cross-checks whether the body’s directory lists the Person. The credentials that resolve cleanly in 2026 are board certifications (American Board of Plastic Surgery, American Board of Cosmetic Surgery) and professional designations (CFP, CFA, CPA). Marketing certifications and self-issued credentials fail the cross-check.
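
The shape, sketched with a placeholder provider (verify the issuing body's canonical URL against its live site before shipping):

```json
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Dr. Jane Example",
  "hasCredential": {
    "@type": "EducationalOccupationalCredential",
    "credentialCategory": "Board certification",
    "name": "Board Certified in Plastic Surgery",
    "recognizedBy": {
      "@type": "Organization",
      "name": "American Board of Plastic Surgery",
      "sameAs": "https://www.abplasticsurgery.org/"
    }
  }
}
```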

areaServed — the spatial vector. areaServed takes a Place, AdministrativeArea, or GeoShape value that names the geographic scope of the entity. For a med-spa, that is the metro plus surrounding counties. For a financial advisor, it is the SEC-registration footprint or the NAPFA/XYPN service area. For a B2B SaaS company, it is the markets explicitly served. Schema App’s case study added spatialCoverage and audience along with sameAs and measured 46% more impressions and 42% more clicks on non-branded queries. areaServed is the property AI engines reliably parse in 2026, and the one we ship.
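
A sketch of a metro-plus-counties footprint on a placeholder business, using the City and AdministrativeArea forms (GeoShape works too, but its coordinate-string format is the easiest of the three to get wrong):

```json
{
  "@context": "https://schema.org",
  "@type": "MedicalBusiness",
  "name": "Example Med Spa",
  "areaServed": [
    { "@type": "City", "name": "Austin" },
    { "@type": "AdministrativeArea", "name": "Travis County" },
    { "@type": "AdministrativeArea", "name": "Williamson County" }
  ]
}
```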

The four properties matter because each resolves a different verification step the engine runs. The rich-result-targeted properties most agencies optimize for — Article headline, datePublished, author name as a string instead of as a Person reference — do not enter the entity graph. They produce a Google rich result without raising entity-confidence in the AI citation pass.

The Schema App 46% impression lift

What did Schema App's 46% impression lift case study actually measure?

Schema App added spatialCoverage, audience, and sameAs properties to an Organization node and measured 46% more impressions and 42% more clicks on non-branded queries (Stackmatix Organization Schema Markup 2026). The lift is on non-branded queries specifically — the queries where AI engines have to choose between competing entities rather than retrieving by exact-name match. The three properties added together were the unit; isolating any one would not have produced the same effect because the entity graph compounds rather than adds.

The 46% number is the most quoted entity-graph lift in 2026 GEO research and the one to anchor on. The mechanism behind it is the same mechanism behind every entity-graph effect: AI engines run a query-to-entity match before they run a query-to-passage match. A query like “med-spa Botox in Austin” first resolves to candidate entities (which med-spas have areaServed=Austin and offer the procedure) before it resolves to candidate passages on those entities’ pages. The pages whose entities pass the first match enter the citation candidate pool; the pages whose entities don’t pass it never get evaluated for passage extraction.

This is why entity richness compounds with rank rather than replacing it. A page that ranks #4 with a dense entity graph beats a page that ranks #2 with a thin one on the citation pass — but a page that does not rank at all has no entity-graph density that matters.

The Tier 1 lift is the second number to anchor on. Stackmatix's 2026 structured-data analysis measured up to 40% more AI Overview appearances and a 2.5× higher chance of appearing in AI-generated answers for sites with complete Tier 1 schema (Organization, WebSite, BreadcrumbList, primary entity). Tier 1 is the floor; the entity graph (sameAs / knowsAbout / hasCredential / areaServed) is what we layer on top to convert AI Overview appearance into actual citation. Growth Marshal's February 2026 study (the same study that controls for rank position) measured a 61.7% citation rate on attribute-rich schema versus 41.6% on generic schema, with the gap widening to 54.2% versus 31.8% on low-authority sites (DR ≤ 60). The 20-point gap is the entity-graph effect made measurable.
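
For reference, a skeletal Tier 1 graph using the common @graph-with-@id linking pattern; every URL below is a placeholder:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example.com/#org",
      "name": "Example Co",
      "url": "https://example.com/"
    },
    {
      "@type": "WebSite",
      "@id": "https://example.com/#website",
      "url": "https://example.com/",
      "publisher": { "@id": "https://example.com/#org" }
    },
    {
      "@type": "BreadcrumbList",
      "itemListElement": [
        { "@type": "ListItem", "position": 1, "name": "Home", "item": "https://example.com/" },
        { "@type": "ListItem", "position": 2, "name": "Guides", "item": "https://example.com/guides/" }
      ]
    }
  ]
}
```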

Why entity richness alone doesn’t beat rank position

Why does entity richness alone not predict AI citation?

Because Growth Marshal’s February 2026 model isolated the variables and measured rank position at OR=0.762 per position with p<.001 — strongly significant. Entity-richness score alone tested at OR=1.001 with p=0.833 — statistically null. Entity richness is necessary but not sufficient. Without organic rank, schema does not predict citation. Generic Article and BreadcrumbList schema actually tested with a negative odds ratio (OR=0.678, p=0.296) once rank was controlled. The entity graph is a multiplier on rank, not a substitute.

The OR=1.001 number is the corrective the 2026 GEO industry needs more than any other. The entity graph compounds with rank; without rank, the graph is mathematically inert in the citation model.

What this means operationally: the entity-graph layer is work you do in addition to organic rank work, not instead of it. A med-spa that ranks #1 on “Botox Austin” with a thin entity graph will still get cited some of the time. A med-spa that ranks #15 with a perfect entity graph will get cited approximately none of the time. A med-spa that ranks #4 with a perfect entity graph will get cited disproportionately often relative to the #2 and #3 results that ship generic schema.

We sequence the work accordingly: organic rank fundamentals first (server-rendered HTML, a platform whose schema cap doesn't put a ceiling on the entity graph, real content, working internal linking), then the entity-graph layer, then the answer capsule and FAQPage schema on top. Each layer needs the one below it to compound.

The vertical entity-graph templates ConnectEra ships

The four properties produce different chains depending on the vertical, and the difference is not cosmetic. A med-spa entity graph that is correct for a financial advisor would fail every cross-check the engine runs. We ship three vertical templates as the baseline, then customize.

Med-spas and plastic surgeons — Person + Physician + hasCredential. MedicalBusiness as the Organization node with areaServed (the metro), medicalSpecialty, and a sameAs chain to Allergan AlleAccess, RealSelf, and the state cosmetic-medicine board. One Physician node per provider with hasCredential pointing at the board-certification Organization — American Board of Cosmetic Surgery for cosmetic providers, American Board of Plastic Surgery for surgeons. knowsAbout lists the procedures the physician actually performs. sameAs chains to LinkedIn, the credentialing body’s roster (the American Society for Aesthetic Plastic Surgery directory is a high-weight target), and any peer-reviewed publication on ORCID. The Person + Physician + hasCredential layer is what makes the cited result name the doctor instead of the brand — and the named-source effect is real (pages with at least one named-source citation get cited 2.1× more, per Digital Applied’s 1,000 AI Overviews study).
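
Condensed into markup, the template looks roughly like this. The provider, URLs, and procedures are placeholders, and we dual-type the provider node as Person plus Physician (schema.org's Physician type descends from MedicalOrganization, so the dual type is a common workaround that keeps Person-only properties like worksFor valid; it is our convention, not a schema.org requirement):

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "MedicalBusiness",
      "@id": "https://example-medspa.com/#org",
      "name": "Example Med Spa",
      "areaServed": { "@type": "City", "name": "Austin" },
      "sameAs": ["https://www.linkedin.com/company/example-med-spa"]
    },
    {
      "@type": ["Person", "Physician"],
      "@id": "https://example-medspa.com/team/dr-example#person",
      "name": "Dr. Jane Example",
      "worksFor": { "@id": "https://example-medspa.com/#org" },
      "knowsAbout": ["Botox", "Dermal fillers"],
      "hasCredential": {
        "@type": "EducationalOccupationalCredential",
        "credentialCategory": "Board certification",
        "recognizedBy": {
          "@type": "Organization",
          "name": "American Board of Cosmetic Surgery"
        }
      },
      "sameAs": [
        "https://www.linkedin.com/in/dr-jane-example",
        "https://orcid.org/0000-0000-0000-0000"
      ]
    }
  ]
}
```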

B2B SaaS founders — Organization + sameAs LinkedIn / podcast / Twitter / GitHub. Organization node with areaServed (the markets explicitly served, with audience for the buyer ICP) and a sameAs chain to LinkedIn, Crunchbase, and G2 / Capterra. Person nodes for the founders with hasCredential pointing at degrees and notable past employers (modeled as alumniOf), knowsAbout listing the technical topics, and sameAs to LinkedIn, podcast appearances, Twitter/X, and GitHub. The podcast sameAs is the underrated one — podcast host pages with the founder name in the show notes feed the AI citation graph through a different surface than written content does, and they resolve to dense entities because every host has their own knowledge-panel pull.
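
A condensed placeholder sketch of the founder chain. Putting audience on the Organization node follows this template; some validators scope audience more narrowly, so treat it as optional:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": "https://example-saas.com/#org",
      "name": "ExampleSaaS",
      "areaServed": ["United States", "Canada"],
      "audience": { "@type": "BusinessAudience", "name": "Mid-market RevOps teams" },
      "sameAs": [
        "https://www.linkedin.com/company/example-saas",
        "https://www.crunchbase.com/organization/example-saas"
      ]
    },
    {
      "@type": "Person",
      "name": "Sam Example",
      "jobTitle": "Co-founder & CEO",
      "worksFor": { "@id": "https://example-saas.com/#org" },
      "alumniOf": { "@type": "Organization", "name": "Example University" },
      "knowsAbout": ["Revenue attribution"],
      "sameAs": [
        "https://www.linkedin.com/in/sam-example",
        "https://github.com/sam-example",
        "https://x.com/sam_example",
        "https://podcasts.example.com/episodes/sam-example"
      ]
    }
  ]
}
```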

Financial advisors — Person + areaServed. FinancialService as the Organization node with areaServed (the SEC-registration footprint or NAPFA/XYPN service area — areaServed is load-bearing here because AI engines parse advisor queries by geography first) and a sameAs chain to NAPFA, XYPN, FPA, Fee-Only Network, and the SEC IAPD lookup. One Person node per advisor with hasCredential pointing at CFP Board (and CFA Institute or equivalent), knowsAbout listing the actual planning specializations (not generic “wealth management”), and sameAs to LinkedIn, Wealthtender, and any FA Magazine byline. The hasCredential cross-check on advisors is uniquely strict because CFP Board and CFA Institute maintain public rosters that resolve cleanly. A claimed CFP that doesn’t appear in the CFP Board lookup fails verification. FINRA’s 2026 Regulatory Oversight Report explicitly calls out GenAI supervision, so the advisor entity graph must also survive a FINRA principal-pre-approval review — we engineer authority signals only, never performance claims.
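
The advisor chain, sketched with placeholders:

```json
{
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "FinancialService",
      "@id": "https://example-advisors.com/#org",
      "name": "Example Wealth Advisors",
      "areaServed": { "@type": "State", "name": "Texas" },
      "sameAs": ["https://www.linkedin.com/company/example-advisors"]
    },
    {
      "@type": "Person",
      "name": "Jane Example, CFP",
      "worksFor": { "@id": "https://example-advisors.com/#org" },
      "knowsAbout": [
        "Estate planning for physicians",
        "Equity compensation for tech employees"
      ],
      "hasCredential": {
        "@type": "EducationalOccupationalCredential",
        "credentialCategory": "Professional designation",
        "name": "CERTIFIED FINANCIAL PLANNER",
        "recognizedBy": {
          "@type": "Organization",
          "name": "CFP Board",
          "sameAs": "https://www.cfp.net/"
        }
      },
      "sameAs": ["https://www.linkedin.com/in/jane-example-cfp"]
    }
  ]
}
```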

The three templates share structure, but the cross-check surfaces are vertical-specific. The trade-association sameAs and the board-certification hasCredential are what the AI engines actually pull on; get the targets wrong and the chain looks dense in the source markup but resolves to nothing the engine can verify.

What this layer compounds with

The entity graph is not the only structural layer that decides AI citation. It is the one that decides whether the page enters the candidate pool, but three other layers decide what happens once it does.

The capsule. The 40-60 word answer capsule is the extracted unit; the entity graph is the verification scaffold around it. The capsule wins the lexical match against the query; the entity graph wins the authority match against the candidate sources. Both have to land for the citation to fire.

The FAQ. FAQPage schema wraps the buyer questions in a machine-readable Q&A block. Frase’s 2026 measurement put FAQPage citation rate at 67% on directly question-shaped queries. The mechanism is FAQ → Knowledge Graph entity strength → AIO citation, not direct schema lift — the FAQ is feeding the entity graph, not winning citation independently of it.
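
The wrapper itself is small; the work is the capsule text inside it. A skeletal FAQPage with placeholder content:

```json
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [
    {
      "@type": "Question",
      "name": "How much does Botox cost in Austin?",
      "acceptedAnswer": {
        "@type": "Answer",
        "text": "Placeholder: the 40-60 word answer capsule goes here."
      }
    }
  ]
}
```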

Freshness. The 458-day freshness premium is the multiplier on top of everything. ChatGPT cites pages 458 days newer than Google’s organic median, with 76.4% of most-cited pages updated within 30 days. The entity graph anchors the page; freshness keeps it in the pool of pages the engine considers in the first place.

The structurally load-bearing layers (capsule, schema, entity graph, freshness) ship together and compound. The symbolic layers — llms.txt for example — ship alongside but do not carry citation weight on their own. The honest split between symbolic and structural is the foundation of the technical pillar; the entity graph sits on the structural side.

Run a ConnectEra GEO audit on your site — we map your current entity graph against the four-property baseline, identify the cross-checks that fail, and ship the sameAs / knowsAbout / hasCredential / areaServed buildout alongside the schema and capsule layers in a single retainer cycle.

Frequently asked questions

Which schema properties matter most for AI citation in 2026?
Four properties carry disproportionate weight: sameAs (the chain that links your Organization or Person node to Wikidata, LinkedIn, ORCID, and credentialing bodies), knowsAbout (the topic vector the engine maps the entity to), hasCredential (the verifiable claim that survives an E-E-A-T pass), and areaServed (the spatial vector for local citations). Schema App's case study isolated spatialCoverage + audience + sameAs and measured 46% more impressions and 42% more clicks on non-branded queries. Generic Article and BreadcrumbList properties do not move citation rate once rank is controlled (Growth Marshal February 2026, OR=0.678 with p=0.296).
Does sameAs to LinkedIn really lift my entity graph?
Yes, but only as part of a chain. A single sameAs link reads as a one-way claim. A chain — sameAs to LinkedIn, ORCID, Wikidata, the credentialing body's directory, and a podcast or GitHub profile — reads as a verified entity because each target node also points back at the source through its own surface (a LinkedIn URL on the firm site, an ORCID author byline, a credential roster). Schema App documented 46% more impressions on non-branded queries when spatialCoverage, audience, and sameAs were added together. The Person schema with hasCredential plus knowsAbout plus sameAs is the chain Lead Gen Economy's 2026 E-E-A-T research identifies as raising entity-confidence in AI Overview citation.
Why doesn't entity richness alone beat rank position?
Because Growth Marshal's February 2026 model (n=1,006 pages, 730 citations across ChatGPT and Gemini) measured rank position at OR=0.762 per position with p<.001, while entity-richness score alone tested at OR=1.001 with p=0.833 — statistically null. Entity richness is necessary but not sufficient. Without organic rank, schema does not predict citation. The right read is that schema is a multiplier on rank, not a substitute. Pages that rank in the top 5 with attribute-rich schema get cited at 61.7%; pages that rank in the top 5 with generic schema get cited at 41.6%. Pages that don't rank get cited at neither rate.
What does the med-spa entity graph look like end-to-end?
MedicalBusiness as the Organization node with areaServed (the metro), medicalSpecialty (Aesthetic Medicine), and a sameAs chain to Allergan AlleAccess, RealSelf, and the state cosmetic-medicine board. One Physician node per provider with hasCredential pointing at the board-certification entity (American Board of Cosmetic Surgery for cosmetic providers, American Board of Plastic Surgery for surgeons), knowsAbout listing the procedures actually performed, and sameAs to LinkedIn, the credentialing body's roster, and any conference faculty page. One MedicalProcedure node per service with areaServed and audience. FAQPage wrapping the buyer questions. Tier 1 schema (Organization, WebSite, BreadcrumbList, primary entity) lifts AI Overview appearance up to 40% per Stackmatix's 2026 measurement; the Person + Physician + hasCredential layer is what makes the cited result name the doctor instead of the brand.

Written by

Billy Reiner
Founder · ConnectEra

Billy builds AI-citable sites for practices, advisors, and B2B SaaS. Over 80 migrations in the last 18 months — every one with a live audit, a fixed price, and a 7-day rebuild.

When you're ready

Ready to be the page ChatGPT cites?

Tell us where your site is at. You get back your free growth plan — your platform blocker, your industry's citation gap, and the next move. Yours to keep, whether you hire us or not.

Get my free growth plan
