The Haute MD/5WPR Medical Aesthetics AI Visibility Index, released April 25, 2026, was the first published audit of AI citation share in the aesthetics category. It ranked 25 brands across ChatGPT, Claude, Perplexity, and Google AI Overviews using more than 60 queries, with Botox, Juvéderm, CoolSculpting, SkinCeuticals, and Morpheus8 in the top five. The index covered a category with roughly $22 billion in global market scope, and it was a credible, citable benchmark.
It also did not name a single surgeon.
That gap is the reason this report exists. The buyer who types “best deep plane facelift surgeon over 50 patients” or “top breast augmentation surgeon under $10K Atlanta” is not searching for a brand. They are searching for a person, in a metro, with a constraint. The named-surgeon citation surface that answer engines draw from has never been published. The Citation Theft Report (Wednesday cadence, free, methodology open) is the first scan to publish it.
What is the Citation Theft Report?
A weekly scan of named-surgeon citation share across 15 US metros and 8 plastic surgery procedure families against ChatGPT, Claude, Perplexity, and Google AI Overviews. The output is a per-metro, per-procedure leaderboard of which surgeons AI engines actually name in response to high-intent buyer prompts. Approximately 12 named surgeons claim 70% of citations across the 25-prompt set (ConnectEra projected baseline, May 2026).
This is the cluster piece beneath the plastic surgeon AI visibility playbook, which frames the five authority surfaces, the regulatory hooks, and the schema package that decide whether a surgeon is even legible to an AI engine. The report below is the data layer that runs on top of that legibility.
Why brand-level citation studies miss the named-surgeon gap
The Haute MD index measured brands because brands were the sponsors of the audit. Allergan and AbbVie pay for visibility into how their product lines surface inside ChatGPT’s answer to “best wrinkle treatment for someone in their 40s.” That is a legitimate question and a useful answer. It is not the question a patient runs sixty days before a $14,000 breast augmentation.
The 25 plastic surgery prompts in the verified prompt set published in PRO Star SEO’s 2026 plastic-surgery search analysis, and in the cross-platform query data underneath the Aesthetic Surgery vertical, are explicit named-doctor queries. They name the procedure. They name the metro. They name a constraint: under $10K, takes CareCredit, low complication rate, ASPS member, over 50 patients. AI engines answer those prompts with named surgeons, not with techniques or brand names.
The volume sits underneath. The 2024 ASPS Statistics Report counted 306,196 breast augmentations in a single year (the most recent comprehensive ASPS report; the procedure-volume floor it sets has not contracted in 2026 reporting). ASPS reports an average breast augmentation surgeon fee of $4,875, with all-in patient cost of $6,000 to $14,000. Surgical procedures span $3,000 to $15,000 across the category. Noninvasive fat reduction is up 77% versus 2019; neuromodulator injections are up 73%; hyaluronic acid fillers are up 70%. None of that volume converts on a brand page. It converts on a page that names a specific surgeon.
The brand-level studies miss this by design. The named-surgeon scan exists to measure the surface that design leaves unmeasured.
The 25-prompt × 15-metro Citation Theft methodology
The methodology has six pieces. Every piece is open — a surgeon who cites the share number can run the same prompt themselves, on the same engine, on the same day, and verify or refute it.
The prompt set. Twenty-five prompts derived from the 15 verified buyer prompts in the plastic surgery playbook and expanded with metro substitution and procedure pairs. The core 15:
- “Who is the best rhinoplasty surgeon in Dallas?”
- “What should I look for in a facelift surgeon?”
- “Best board-certified plastic surgeon for tummy tuck Houston”
- “Top breast augmentation surgeon under $10K Atlanta”
- “Best mommy makeover surgeon Chicago”
- “Who specializes in revision rhinoplasty in NYC?”
- “Best ethnic rhinoplasty surgeon Los Angeles”
- “Top facelift surgeon for natural results Miami”
- “Best deep plane facelift surgeon over 50 patients”
- “Surgeon for BBL with low complication rate”
- “Best male plastic surgeon for gynecomastia”
- “Top eyelid surgery surgeon — blepharoplasty Seattle”
- “Plastic surgeon in San Diego who takes CareCredit”
- “Best surgeon for Ozempic-loss skin removal”
- “ASPS member surgeon for hand surgery, Boston”
The remaining ten are constraint-pair expansions: ASPS-only filters, price ceilings, technique specialties, and revision-versus-primary splits across the same procedure families.
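As a concrete illustration of how the constraint-pair expansion works, here is a minimal Python sketch. The procedure, metro, and constraint values are placeholders rather than the published set, and the prompt template is simplified; it is meant only to show the shape of the expansion, not the scan's actual tooling.

```python
# Illustrative sketch: expanding core buyer prompts with metro and constraint pairs.
# The procedures, metros, and constraints below are examples, not the published set.
from itertools import product

PROCEDURES = ["rhinoplasty", "deep plane facelift", "breast augmentation"]
METROS = ["Dallas", "Atlanta", "Miami"]
CONSTRAINTS = ["under $10K", "ASPS member", "revision specialist"]


def expand_prompts(procedures, metros, constraints, limit=25):
    """Build procedure + metro + constraint prompt variants, capped at the scan size."""
    prompts = []
    for proc, metro, constraint in product(procedures, metros, constraints):
        prompts.append(f"Best {proc} surgeon {constraint} {metro}")
        if len(prompts) == limit:
            break
    return prompts


if __name__ == "__main__":
    for prompt in expand_prompts(PROCEDURES, METROS, CONSTRAINTS):
        print(prompt)
```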
The metro set. Fifteen US metros where ASPS-member density is highest and the named-doctor surface is most contested: Dallas, Houston, Atlanta, Chicago, NYC, Los Angeles, Miami, San Diego, Seattle, Boston, Phoenix, Philadelphia, San Francisco, Denver, Charlotte.
The engine set. ChatGPT (free and Plus), Claude (free and Pro), Perplexity (free and Pro), and Google AI Overviews from a logged-out US session. Each engine returns its own answer surface; consolidating into one leaderboard requires running the same prompt against all four and tagging which engine cited which surgeon.
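A minimal sketch of that per-prompt loop, assuming a generic query client and a name-extraction step. Both `query_engine` and `extract_surgeons` are placeholders passed in by the caller, not real engine APIs.

```python
# Minimal sketch of the per-prompt scan loop. query_engine() and extract_surgeons()
# are placeholders for engine-specific clients and full-name extraction, respectively.
from typing import Callable

ENGINES = ["chatgpt", "claude", "perplexity", "google_ai_overviews"]


def scan_prompt(prompt: str,
                query_engine: Callable[[str, str], str],
                extract_surgeons: Callable[[str], set[str]]) -> dict[str, set[str]]:
    """Run one prompt against all four engines and tag which engine named which surgeon."""
    citations: dict[str, set[str]] = {}
    for engine in ENGINES:
        answer = query_engine(engine, prompt)      # engine-specific client, assumed
        for surgeon in extract_surgeons(answer):   # full-name extraction, assumed
            citations.setdefault(surgeon, set()).add(engine)
    return citations
```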
The scoring. A citation counts when an engine names a specific surgeon by full name in response to a procedure-plus-metro prompt. Brand mentions, practice-only mentions, and directory-only mentions (“see ASPS Find a Surgeon”) are tracked separately. The headline number — 70% of citations claimed by approximately 12 surgeons — is the union of named-surgeon citations across all four engines.
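Given per-prompt citation tags like the ones sketched above, the headline concentration number reduces to a simple top-N share over the union of named-surgeon citations. This is an illustrative sketch, not the published scoring code.

```python
# Sketch of the headline-share computation: what fraction of all named-surgeon
# citations the top-N surgeons claim, counting one citation per engine that named them.
from collections import Counter


def citation_share(scan_results: list[dict[str, set[str]]], top_n: int = 12) -> float:
    """scan_results: one {surgeon: {engines}} dict per prompt, as produced by scan_prompt()."""
    counts: Counter[str] = Counter()
    for per_prompt in scan_results:
        for surgeon, engines in per_prompt.items():
            counts[surgeon] += len(engines)
    total = sum(counts.values())
    top = sum(count for _, count in counts.most_common(top_n))
    return top / total if total else 0.0
```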
The cadence. Weekly scan, Wednesday publication, with a 12-week rolling window so movement is visible. The scan that runs Wednesday morning publishes Wednesday afternoon. Surgeons who appear, drop out, or move within the leaderboard see the change in the same week.
The transparency. The full prompt set, the full metro list, the engine versions and timestamps, and the per-surgeon citation log are published with the leaderboard. A surgeon who disagrees with a placement can cite the same engine, the same prompt, and the same day to dispute it.
The citation pool itself is small. ChatGPT averages roughly 6 unique citations per cited conversation and 4 unique sources per cited turn, with 66% of cited turns containing 1 to 4 unique sources (Profound, February 2026, drawn from approximately 730,000 ChatGPT.com conversations October to December 2025). For a single procedure-plus-metro prompt, the named-surgeon slots available across the four engines combined rarely exceed twelve to fifteen. Concentration is the default outcome of the citation surface, not an anomaly.
Which 12 surgeons capture 70% of the citations and how
The leaderboard publishes Wednesdays. The structural pattern underneath it is what a surgeon can act on now.
What separates the 12 most-cited surgeons from the rest?
Three traits cluster across the most-cited surgeons in the May 2026 baseline scan. They hold ABPS board certification verified through ASPS Find a Surgeon. They appear on PubMed-indexed publications in their specialty (deep plane facelift, revision rhinoplasty, ethnic rhinoplasty, gynecomastia). They have an active Reddit r/PlasticSurgery presence — patient-named, not self-promoted — across the 345,000-member subreddit’s recent threads.
The first trait — ABPS verification through ASPS Find a Surgeon — is binary. Either the surgeon’s certification is current and their ASPS profile is complete, or they fail the threshold AI engines apply to medical-credential queries. The American Society of Plastic Surgeons explicitly names Find a Surgeon as the authoritative source for board certification verification. The surgeons who score highest in the scan have profiles where every field — training program, certification number, hospital privileges, active member status — is populated and matches their website.
The second trait is publication footprint on PubMed and NIH. Perplexity in particular weights primary medical literature, which the 5W Citation Source Index 2026 confirms across the cross-engine view. A surgeon whose name appears on case series, technique papers, or outcome studies surfaces disproportionately on technique-specific prompts. The “best deep plane facelift surgeon over 50 patients” prompt, for instance, surfaces named surgeons whose deep-plane work appears in the literature and who have a documented case volume in the citation chain. The credentialing surfaces — ASPS, ABPS — confirm the surgeon’s right to perform the procedure; PubMed confirms their depth in it.
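One way to check a publication footprint programmatically is NCBI's public E-utilities esearch endpoint, which the sketch below uses to count PubMed records for an author-plus-technique query. The author name and topic in the example are placeholders, and this check is not part of the scan's published tooling.

```python
# Hedged sketch: counting a surgeon's PubMed records on a technique via NCBI E-utilities.
import json
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"


def pubmed_count(author: str, topic: str) -> int:
    """Return the number of PubMed records matching an author + topic query."""
    term = f"{author}[Author] AND {topic}"
    params = urllib.parse.urlencode({"db": "pubmed", "term": term, "retmode": "json"})
    with urllib.request.urlopen(f"{ESEARCH}?{params}") as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])


# Example with a placeholder name: pubmed_count("Doe J", "deep plane facelift")
```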
The third trait is Reddit presence. r/PlasticSurgery runs 345,000 members with what GummySearch’s 2026 cohort report describes as “huge size and crazy activity.” Patient-named recommendations and warnings compound across threads. AI engines that weight Reddit, which is most of them in 2026, pull named-surgeon mentions directly. The mechanic is asymmetric: positive named mentions accumulate over time and feed citation share, while a single high-engagement thread alleging a poor outcome can cap share for months even when other layers are clean.
The 12-surgeon concentration is not a quirk of plastic surgery. The legal vertical concentrates harder — the 5WPR/Haute Lawyer report from April 29, 2026 documents how seven canonical directories (Chambers, Legal 500, Super Lawyers, Best Lawyers, Martindale, Avvo, Justia) own the AI citation layer for legal queries. The lawyer 7-directory monopoly is the analogous concentrated-citation map. The medspa vertical concentrates differently — through brand share, with Allergan’s near-90% AI mention rate across the cited surfaces. Plastic surgery sits between the two: not as concentrated as legal, more concentrated than the brand-fragmented medspa surface.
The Person + Physician + hasCredential entity graph that lands a name
The 12 most-cited surgeons did not arrive at their share through copy. They arrived through schema. The mechanic that makes a surgeon’s name legible to an AI engine — and citable in the answer — is a five-entity JSON-LD package emitted server-side, in the initial HTML response, before any JavaScript runs.
The package starts with Person. Person nests Physician via additionalType or appears alongside it. Physician carries hasCredential populated for the ABPS certification — credential type, credential category, recognizing authority. Physician carries sameAs pointing to ASPS Find a Surgeon, ABMS verification, the state medical board license-lookup page, the RealSelf profile, and any hospital privileges page. MedicalBusiness wraps the practice. MedicalProcedure entries — one per service — nest under MedicalBusiness with the procedure name, code (CPT where applicable), and service area. FAQPage and Review aggregate at the practice level.
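A hedged sketch of that package, built as a Python dict and serialized to JSON-LD, is below. Every name, URL, and credential value is a placeholder, and the property used to nest the MedicalProcedure entries under MedicalBusiness (availableService) is one reasonable choice rather than a confirmed requirement.

```python
# Sketch of the five-entity package described above. All values are placeholders.
import json

entity_graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Person",
            "@id": "https://example-practice.com/#surgeon",
            "additionalType": "https://schema.org/Physician",
            "name": "Jane Doe, MD",
            "hasCredential": {
                "@type": "EducationalOccupationalCredential",
                "credentialCategory": "Board Certification",
                "name": "American Board of Plastic Surgery",
                "recognizedBy": {"@type": "Organization", "name": "ABPS"},
            },
            "sameAs": [
                "https://example.com/asps-find-a-surgeon-profile",   # ASPS profile (placeholder)
                "https://example.com/abms-verification",             # ABMS verification (placeholder)
                "https://example.com/state-board-license-lookup",    # state board (placeholder)
                "https://example.com/realself-profile",              # RealSelf (placeholder)
            ],
            "worksFor": {"@id": "https://example-practice.com/#practice"},
        },
        {
            "@type": "MedicalBusiness",
            "@id": "https://example-practice.com/#practice",
            "name": "Example Plastic Surgery",
            # Nesting property is an assumption; one MedicalProcedure entry per service.
            "availableService": [
                {
                    "@type": "MedicalProcedure",
                    "name": "Deep plane facelift",
                    "areaServed": "Dallas, TX",
                }
            ],
        },
    ],
}

print(json.dumps(entity_graph, indent=2))
```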
The cross-hub piece on the Person + Physician + hasCredential entity graph for surgeons walks through the JSON-LD shape end-to-end, including the sameAs order that AI engines weight most heavily and the knowsAbout array that ties the surgeon to the procedure entities. The package is the technical contract underneath the citation share number.
Two failure modes recur in the May 2026 baseline scan. The first is client-side schema injection — most ASPS-member WordPress sites emit schema via plugin-injected JavaScript that runs after page load. The AI crawlers (GPTBot, ClaudeBot, PerplexityBot) do not consistently execute JavaScript; they fetch the HTML, see no schema, and skip the surgeon as an entity. The second is plugin-stack collision — a typical plastic surgery WordPress build runs an SEO plugin (Yoast, RankMath, All-in-One SEO), a theme builder (Elementor, Divi, Bricks), a niche plastic surgery theme (Mosaic, Surgeons Advisor), and a provider directory plugin. Each layer can emit Physician or MedicalBusiness schema with conflicting sameAs arrays. AI engines that cannot resolve the conflict either drop the entity or pick a version arbitrarily, and the version they pick is rarely the most complete one.
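A quick way to diagnose the first failure mode is to fetch the page the way a non-JS crawler would and check whether any JSON-LD survives in the raw HTML. A minimal sketch, with the URL left as a placeholder:

```python
# Diagnostic sketch: fetch raw HTML (no JavaScript execution) and list any JSON-LD
# blocks present in the initial response, which is all a non-JS crawler would see.
import re
import urllib.request


def server_rendered_jsonld(url: str) -> list[str]:
    """Return the raw <script type="application/ld+json"> bodies found in the initial HTML."""
    req = urllib.request.Request(url, headers={"User-Agent": "schema-audit-sketch"})
    with urllib.request.urlopen(req) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return re.findall(
        r'<script[^>]*type=["\']application/ld\+json["\'][^>]*>(.*?)</script>',
        html,
        flags=re.IGNORECASE | re.DOTALL,
    )
```

An empty result on a page that shows schema in a browser usually means the schema is injected client-side after load, which is exactly the gap the failure mode describes.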
The cross-hub piece on WordPress Bricks vs Elementor GEO covers which combinations actually emit a clean entity graph and which don’t, because most ASPS-member sites are on a plugin-stack-dependent WordPress configuration where the answer is “neither, until the plugin stack is rationalized.” The directories underneath the named-doctor map — ASPS, RealSelf, ABPS, PubMed, Reddit — are documented in detail in the surgeon directory stack, which walks each surface through the schema each one expects and the cross-references each engine actually weights.
What the Wednesday data drop publishes weekly
The Citation Theft Report runs every Wednesday morning and publishes Wednesday afternoon. Five artifacts ship with each scan.
The metro leaderboards. Top 10 named surgeons per metro per procedure across rhinoplasty (primary and revision), facelift (deep plane and SMAS), BBL, breast augmentation, tummy tuck, mommy makeover, blepharoplasty, and gynecomastia. Each entry shows the surgeon’s name, the engines that cited them, the prompts they surfaced on, and the citation count for the week.
The procedure-level shares. Top 10 named surgeons per procedure across all 15 metros. This view answers the cross-metro question — who owns BBL citations nationally, not just in Miami — and it surfaces the surgeons whose entity graphs travel beyond their own local market.
The week-over-week movement. Surgeons who entered the leaderboard, dropped out, or moved by more than two positions. This is the diagnostic view for practices already on the leaderboard; movement signals an entity-graph break, a directory profile change, or a Reddit-thread shift.
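A minimal sketch of that week-over-week diff, assuming each weekly leaderboard is an ordered list of surgeon names with rank 1 first:

```python
# Sketch of the week-over-week movement check: entries, drop-outs, and moves of more
# than two positions between two ranked leaderboards.
def movement(last_week: list[str], this_week: list[str], threshold: int = 2) -> dict:
    """Each argument is an ordered list of surgeon names, rank 1 first."""
    prev = {name: i + 1 for i, name in enumerate(last_week)}
    curr = {name: i + 1 for i, name in enumerate(this_week)}
    entered = [n for n in curr if n not in prev]
    dropped = [n for n in prev if n not in curr]
    moved = {n: prev[n] - curr[n] for n in curr
             if n in prev and abs(prev[n] - curr[n]) > threshold}
    return {"entered": entered, "dropped": dropped, "moved": moved}
```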
The methodology log. The exact prompts run, the engines tested, the engine versions, the timestamps, and the raw citation log. Anyone who wants to dispute a placement can rerun the prompt against the same engine on the same day and check the result. Falsifiability is the core promise.
The freshness signal. Each scan is stamped with dateModified so AI engines pick up the update. Pages that demonstrate recurring fresh data outperform static pages on prompts that engines route through their freshness layer.
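A minimal sketch of the weekly stamp, assuming the page's schema is available as a dict before publish; the Article shape shown is illustrative, not the report's actual markup.

```python
# Minimal sketch of the weekly freshness stamp: rewriting dateModified before publish.
import datetime
import json


def stamp_date_modified(jsonld: dict) -> dict:
    """Set dateModified to today's date in ISO 8601 form."""
    jsonld["dateModified"] = datetime.date.today().isoformat()
    return jsonld


page_schema = {"@context": "https://schema.org", "@type": "Article",
               "headline": "Citation Theft Report"}
print(json.dumps(stamp_date_modified(page_schema), indent=2))
```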
The scan is free, published Wednesdays, and the methodology is permanent. The version of this article that publishes May 13, 2026 carries the first scan output; subsequent Wednesdays update the same surface with the new leaderboard and the week-over-week movement.
A surgeon who wants their own practice mapped against the leaderboard — with the entity-graph audit, the directory profile pass, and the schema migration that moves a name from “not cited” to “cited in 4 of 4 engines” — can request a baseline scan and an entity-graph audit at ConnectEra’s exploration call. The audit covers the schema package, the ASPS / RealSelf / ABMS / state-board chain, and the prompt-aligned procedure pages. The Wednesday report shows the market position; the audit shows what closes the gap.
The Haute MD index measured brands. The Wednesday scan measures surgeons. The space between the two is the market.