P2 · Industry Playbooks

Plastic surgeon AI visibility playbook: named-doctor citation share for $4-15K cases in 2026

ASPS Find a Surgeon dominates discovery; RealSelf and PubMed share authority. The named-doctor citation gap runs wider than the brand gap, and the 15 prompts that move $4-15K rhinoplasty, BBL, and facelift cases surface surgeons by name.

By Billy Reiner · Updated May 13, 2026 · 20 min read

ASPS reports an average breast augmentation surgeon fee of $4,875 with surgical procedures spanning $3,000 to $15,000. ASPS Find a Surgeon, RealSelf with 10,000 aesthetic providers across 115 countries, and PubMed are the citation authority surfaces. Plastic surgery's named-doctor citation gap runs wider than the medspa brand gap because the Haute MD 2026 index stops at brand level.

The first thing every plastic surgery website loses to AI engines is its identity. Not its design, not its before-and-after gallery, not its trust signals — its identity. The site loads, the photos render, the CTA fires. The structured data that tells ChatGPT, Claude, Perplexity, and Google AI Overviews who the surgeon is, what they are board-certified for, and what procedures they perform never makes it into the initial HTML response.

That gap is why brand-level citation studies miss plastic surgeons. The Haute MD/5WPR Medical Aesthetics AI Visibility Index released April 25, 2026 ranked Botox, Juvéderm, CoolSculpting, SkinCeuticals, and Morpheus8 in its top five across ChatGPT, Claude, Perplexity, and Google AI Overviews. It measured products. The buyer who types “best deep plane facelift surgeon over 50 patients” is not searching for a product. They are searching for a person.

This playbook covers the five authority surfaces AI engines actually use to name surgeons, the 15 prompts that move $4-15K cases, the entity graph that survives a plugin-stack WordPress build, and the regulatory hooks that turn compliance content into citation share.

Why brand-level citation studies miss plastic surgeons

Brand-level studies measure products. Plastic surgery procedures are delivered by a specific licensed individual whose name, board certification, and case volume are the buying signal. The Haute MD 2026 index stopped at Morpheus8, Botox, and Juvéderm. The named-doctor citation gap that opens underneath that ceiling is wider than the brand gap medspas face — because no comparable index has measured it yet.

The buyer-side data confirms the gap. ASPS reports an average breast augmentation surgeon fee of $4,875, with all-in patient pay ranging from $6,000 to $14,000. Surgical procedures span $3,000 to $15,000. Minimally invasive treatments run $530 to $1,800. Noninvasive fat reduction is up 77% versus 2019; neuromodulator injections are up 73%; hyaluronic acid fillers are up 70%. The 2024 ASPS Statistics Report counted 306,196 breast augmentations alone in a single year. None of that volume converts on a generic page about “what is rhinoplasty.” It converts on a page that names a specific surgeon, lists their case count, and emits a clean entity graph back to ASPS, ABMS, RealSelf, and the state medical board.

The fact that no published 2026 study has measured named-doctor citation share is the opportunity. Haute MD/5WPR ranked brands. Metricus ranked medspa chains. The 5W Citation Source Index 2026 ranked the 50 websites AI engines reach for. None of them ran the prompt set that asks for a specific human surgeon by metro and procedure. That data — published each Wednesday, with methodology and metro list intact — is what shifts the citation surface.

The 5 authority surfaces that name surgeons

AI engines do not pull surgeon names from thin air. They pull them from a small, durable set of authority surfaces that survive across ChatGPT, Claude, Perplexity, and Google AI Overviews. The 5W Citation Source Index 2026 confirmed Reddit as the #1 cross-LLM citation source overall, with PubMed and primary medical literature heavily weighted on Perplexity. For plastic surgery specifically, five surfaces matter.

What are the 5 authority surfaces AI uses to name surgeons?

ASPS Find a Surgeon, RealSelf, BoardCertifiedDocs/ABPS, PubMed/NIH, and Reddit r/PlasticSurgery. Each carries a distinct trust signal — board certification, peer-reviewed research, verified procedure reviews, candid patient discussion — and each surfaces in different prompt families. AI engines cross-reference at least two of these surfaces before naming a specific surgeon in a citation, which is why a single-directory profile rarely lifts share.

ASPS Find a Surgeon is the explicit gold standard. ASPS itself names the tool as the authoritative source for verifying board certification through The American Board of Plastic Surgery. When a prompt asks for a “board-certified plastic surgeon” — and the prompt set below shows how often it does — Find a Surgeon is the surface AI engines lean on first. The lift requires that your ASPS profile be complete: training history, board certifications, hospital privileges, and active member status all need to be filled in, not left as stubs.

RealSelf carries 10,000 aesthetic providers across 115 countries after its 2025 rebrand and the AI features added on its 2026 product roadmap. The platform is also where the synthetic-review problem lives — a peer-reviewed analysis of 9,000 RealSelf reviews found 64.3% authentic versus 35.7% AI-generated. AI engines that index RealSelf are aware of the dilution; the surgeons whose reviews verify cleanly carry disproportionate weight in citation outputs.

BoardCertifiedDocs and ABPS are the verification surfaces that confirm what ASPS asserts. They are not yet surfaced as named primary citations in published 2026 studies — flagged as a hypothesis in the underlying research base — but the entity-graph mechanic still works: AI engines that traverse a Physician schema with sameAs to both ASPS and ABMS treat the credential as confirmed at a level a single profile cannot match.

PubMed and NIH carry the medical authority Perplexity in particular rewards. A surgeon whose name appears on peer-reviewed publications — case series, technique papers, outcome studies — surfaces in citations on prompts where the patient is asking for technique-specific expertise: a prompt like “Best deep plane facelift surgeon over 50 patients” returns named surgeons whose deep-plane work is in the literature.

Reddit r/PlasticSurgery counts 345,000 members with what GummySearch’s 2026 cohort report describes as “huge size and crazy activity.” It is the candid layer — the place where patients post recovery photos, name surgeons they recommend or warn against, and debate techniques. AI engines that weight Reddit (which is most of them in 2026) pull named-surgeon mentions directly. The lift is asymmetric: positive named mentions compound; a single thread alleging a poor outcome can cap share for months.

The cluster piece on the directory mechanics — the surgeon directory stack — walks through how each surface emits its authority signal, what schema each one expects, and how the cross-references actually compute when an AI engine assembles a “best surgeon for X in Y” answer.

The 15 prompts that move $4-15K cases

Patient prompts in 2026 are specific. They name procedures, metros, and constraints. PRO Star SEO’s 2026 plastic-surgery search analysis and the cross-platform query data underneath the Aesthetic Surgery vertical converge on a tight prompt set — short, geo-bound, and constraint-driven. These are the 15 prompts that surface named surgeons in $4-15K decisions.

What prompts do plastic surgery patients actually type into AI in 2026?

Patients type metro-specific surgeon questions like “Who is the best rhinoplasty surgeon in Dallas,” “Top breast augmentation surgeon under $10K Atlanta,” and “Best deep plane facelift surgeon over 50 patients.” The prompt structure is procedure plus geography plus constraint — board certification, price ceiling, technique specialty, payment method, or case volume. AI engines answer with named surgeons, not with techniques or with marketing pages.

The verified prompt set:

  1. “Who is the best rhinoplasty surgeon in Dallas?”
  2. “What should I look for in a facelift surgeon?”
  3. “Best board-certified plastic surgeon for tummy tuck Houston”
  4. “Top breast augmentation surgeon under $10K Atlanta”
  5. “Best mommy makeover surgeon Chicago”
  6. “Who specializes in revision rhinoplasty in NYC?”
  7. “Best ethnic rhinoplasty surgeon Los Angeles”
  8. “Top facelift surgeon for natural results Miami”
  9. “Best deep plane facelift surgeon over 50 patients”
  10. “Surgeon for BBL with low complication rate”
  11. “Best male plastic surgeon for gynecomastia”
  12. “Top eyelid surgery surgeon — blepharoplasty Seattle”
  13. “Plastic surgeon in San Diego who takes CareCredit”
  14. “Best surgeon for Ozempic-loss skin removal”
  15. “ASPS member surgeon for hand surgery, Boston”

Three things sit underneath this list that the brand-level studies miss. First, the prompts are explicitly named-doctor queries. None of them ask “What is rhinoplasty?” — that is a content marketing prompt the patient ran two months ago. By the time they are typing prompt 1, they are within 60 days of booking a consultation. Second, the constraints — under $10K, takes CareCredit, low complication rate, over 50 patients, ASPS member — are conversion-stage filters. They tell the AI engine which entity-graph attributes to weight: pricing, credential, case volume, accepted payment. Third, the metros are not random. Dallas, Houston, Atlanta, Chicago, NYC, Los Angeles, Miami, Seattle, San Diego, Boston — these are the metros where ASPS-member density is highest and where named-doctor citation share is most contested.

The Wednesday data drop ConnectEra runs against this prompt set measures named-doctor citation share by metro across 15 markets. The Haute MD index covered 25 brands. Our scan covers approximately 300 named surgeons across 15 metros and 8 procedure families — rhinoplasty, facelift, BBL, breast augmentation, tummy tuck, mommy makeover, blepharoplasty, gynecomastia. The output is the first published estimate of how concentrated the named-doctor surface actually is — a piece the vertical citation playbooks hub frames against medspa, dental, legal, and real-estate analogs to show that the named-doctor citation gap is wider than the brand gap in every direction.

The methodology is documented in detail in the named-doctor citation map, which publishes the 15-metro scan, the per-procedure shares, and the named-doctor leaderboards each Wednesday during the data-drop cadence.

State board, ABMS, and what AI actually cites

Buyers verify board certification before they book. AI engines do too — at least the ones that crawl with the entity-graph awareness their 2026 product updates added. The verification chain that gets cited is not arbitrary.

How do AI engines verify a plastic surgeon's credentials?

AI engines cross-reference three layers: the surgeon’s site (Physician schema with hasCredential and sameAs), the ASPS Find a Surgeon profile (which asserts ABPS board certification), and the ABMS-administered ABPS verification page (which confirms it). State medical board listings sit underneath as a fourth layer for license status and disciplinary history. A break in any layer caps citation share; a clean chain compounds it.

The chain has four anchors, and each carries a different signal weight.

The American Board of Plastic Surgery, a member board of the American Board of Medical Specialties, is the credential that distinguishes a board-certified plastic surgeon from a physician with adjacent training. AI engines that recognize the credential — and most of them do, because ABMS verification is a primary signal in the medical-credential layer of every major engine’s training data — treat ABPS confirmation as load-bearing.

ASPS membership requires ABPS certification. The ASPS Find a Surgeon profile asserts the member’s board certification, training, and active status. When the profile and the surgeon’s website agree on every field — name spelling, degrees, training program, active practice address — the AI engine receives a confirmed identity it can cite. When they disagree, citation share drops because the engine cannot resolve the entity cleanly.

State medical board listings carry the license-status and disciplinary-history signal. Florida, Texas, California, and New York publish these through license-lookup APIs that get crawled. A surgeon under active investigation, or one whose license status is anything other than current and unrestricted, is invisible to the AI engine even if every other layer is clean.

Hospital privileges add the institutional anchor. A surgeon with active staff privileges at a recognized hospital — provided the privileges page on the hospital’s site emits the right schema — earns an additional Hospital sameAs reference that strengthens the entity graph.

The mechanism that ties these four anchors together is the entity graph in the page’s JSON-LD. A Physician schema with hasCredential populated for the ABPS certification, sameAs arrays pointing to ASPS, ABMS, the state board, and the hospital privileges page, and memberOf for ASPS — emitted server-side, in the initial HTML response, before any JavaScript runs — is the package that survives a crawl by ClaudeBot, PerplexityBot, or GPTBot. The same package, injected client-side after JavaScript executes, is invisible. This is the pattern the Person + Physician + hasCredential entity-graph cluster piece walks through end-to-end, including the sameAs order that AI engines weight most heavily.
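In a Next.js server component, that package might look like the sketch below. Every name, URL, and credential value is a hypothetical placeholder, and the property set should be validated against schema.org before it ships:

```tsx
// Minimal sketch: the Physician entity graph as a Next.js App Router server
// component, so the JSON-LD ships in the initial HTML response. Every name,
// URL, and credential value below is a hypothetical placeholder.
const physicianSchema = {
  "@context": "https://schema.org",
  "@type": "Physician",
  name: "Dr. Jane Example, MD",
  medicalSpecialty: "PlasticSurgery",
  hasCredential: {
    "@type": "EducationalOccupationalCredential",
    credentialCategory: "Board Certification",
    recognizedBy: {
      "@type": "Organization",
      name: "The American Board of Plastic Surgery",
    },
  },
  memberOf: {
    "@type": "Organization",
    name: "American Society of Plastic Surgeons",
  },
  sameAs: [
    "https://asps.example/profile/jane-example",    // ASPS Find a Surgeon
    "https://abms.example/verify/jane-example",     // ABMS/ABPS verification
    "https://medicalboard.example/license/ME12345", // state medical board
    "https://hospital.example/staff/jane-example",  // hospital privileges
  ],
};

export default function SurgeonPage() {
  // Serialized during server rendering -- no client-side JavaScript required
  // for a crawler to see the schema.
  return (
    <script
      type="application/ld+json"
      dangerouslySetInnerHTML={{ __html: JSON.stringify(physicianSchema) }}
    />
  );
}
```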

The state-board layer also doubles as a regulatory citation hook, which the next section covers.

Florida HB 1429, Texas SB 378, and FTC review rules as citation hooks

Compliance content is citation content. AI engines weight pages that anchor a regulatory claim to a primary source — a state law, an FTC guide, a peer-reviewed paper. Plastic surgery in 2026 has three live regulatory hooks that turn straightforward compliance writing into a Wednesday data drop.

Which 2026 regulations create plastic surgery AI citation hooks?

Florida HB 1429 grants registration revocation power and an expedited injunction route against non-compliant practices. Texas SB 378 closes the medspa loophole — injections are limited to qualifying medical license-holders, a rule that applies to surgeons performing them. The FTC’s Endorsement Guides, revised in 2023 and still active in 2026, cover AI-generated patient reviews. Each anchors a citable enforcement narrative; pages that quote and link the primary source rank for the regulatory prompts patients run before booking.

Florida HB 1429 is the broadest of the three. The law creates an expedited injunction route against medspas and plastic surgery practices that refuse inspection, with immediate registration revocation as the enforcement teeth. The Florida Healthcare Law Firm and AmSpa coverage from early 2026 walk through the injunction mechanism. For surgeons in Florida, an HB 1429 compliance posture — with the law cited and linked in a “patient-safety” page or FAQ — anchors regulatory queries on Google AI Overviews and Perplexity that practices in other states cannot answer.

Texas SB 378 closes the medspa loophole by limiting injections to qualifying medical license-holders regardless of practice tenure. The law applies to anyone doing injections, which includes plastic surgeons who deliver Botox, fillers, or neuromodulators in their practice. The American Academy of Aesthetics Texas walks through the qualification rules; pages that cite SB 378 by name and number, with the link to the bill text, carry the regulatory authority signal AI engines weight.

FTC Endorsement Guides were revised in 2023 and remain active in 2026. The relevant 2026 development, covered by Arnold & Porter and similar enforcement trackers, is the FTC’s clarification that virtual or AI-generated endorsers are held to the same standard as human ones. The peer-reviewed 64.3%-authentic finding from the 9,000-review RealSelf analysis becomes the citable evidence base for a patient-protection page that explains how an honest practice verifies its own reviews. The page is not defensive content; it is offensive content. It ranks because it answers a question — how do I know if a surgeon’s reviews are real? — that no AI-generated review farm will ever be willing to answer.

The pattern across all three is identical: link the primary source, quote the operative clause, walk through the practice-level implication. The page becomes part of the AI engine’s authority graph because it is the kind of page an authority graph is built to surface.

The Physician + MedicalProcedure + Hospital entity graph

The schema package that carries a plastic surgery practice into the citation surface has five entity types that need to nest correctly. On WordPress with a plugin stack, the nesting routinely fails. On Wix Studio, the cap on schema field length forces the package to truncate. On Squarespace, the editor emits some types and refuses others. On a static-rendered build — Astro, Next.js with full SSR, or a hand-rolled platform — the package emits cleanly and the citation graph compounds.

What schema does a plastic surgery practice need for AI citation?

Five JSON-LD entity types: MedicalBusiness for the practice, Physician (one per surgeon) with hasCredential and sameAs, MedicalProcedure (one per service) nested under the practice, Hospital for any surgery-center affiliation, and FAQPage for patient-question content. Plus Review entities aggregated under the practice. Wix and Squarespace cap or truncate this package; WordPress plugin stacks routinely emit conflicting versions. A clean static build emits all five server-side.

The five-entity package is what an AI engine reads to answer “best rhinoplasty surgeon in Dallas.” It needs to know the practice exists (MedicalBusiness), who the surgeon is (Physician with the right credentials), what they do (MedicalProcedure with the right name and service area), where they operate (Hospital affiliation, address, geo coordinates), and what patients say (Review aggregated, FAQPage answered).
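One plausible shape for the five-entity package is a single @graph whose nodes cross-reference each other by @id. The sketch below uses hypothetical values and one of several workable nesting conventions; validate against schema.org and a rich-results tester before deploying:

```ts
// Sketch: the five-entity package as a single @graph. All values are
// hypothetical placeholders; nesting conventions vary across validators.
const practiceGraph = {
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "MedicalBusiness",
      "@id": "https://practice.example/#practice",
      name: "Example Plastic Surgery",
      address: {
        "@type": "PostalAddress",
        addressLocality: "Dallas",
        addressRegion: "TX",
      },
      // Review layer, aggregated under the practice.
      aggregateRating: {
        "@type": "AggregateRating",
        ratingValue: 4.9,
        reviewCount: 120,
      },
    },
    {
      "@type": "Physician",
      "@id": "https://practice.example/#dr-example",
      name: "Dr. Jane Example, MD",
      // Ties the surgeon node to the practice node above.
      parentOrganization: { "@id": "https://practice.example/#practice" },
      hospitalAffiliation: {
        "@type": "Hospital",
        name: "Example Medical Center",
      },
      availableService: [
        { "@type": "MedicalProcedure", name: "Rhinoplasty" },
        { "@type": "MedicalProcedure", name: "Deep Plane Facelift" },
      ],
    },
    {
      "@type": "FAQPage",
      mainEntity: [
        {
          "@type": "Question",
          name: "Is Dr. Example board-certified?",
          acceptedAnswer: {
            "@type": "Answer",
            text: "Yes, by The American Board of Plastic Surgery.",
          },
        },
      ],
    },
  ],
};
```

The @id cross-references let an engine resolve the surgeon, the practice, and the hospital as one connected graph instead of three orphaned entities.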

On Wix Studio, the JSON-LD field caps at roughly 8,000 characters and the platform deduplicates repeated @type entries. A practice with two surgeons, twelve procedures, and a hospital affiliation routinely exceeds the cap before the FAQPage layer is added. The platform also injects JSON-LD client-side, so the AI crawler — which does not run JavaScript — fetches the HTML, sees no schema, and moves on. This is the same mechanic the 12-platform 2026 leaderboard maps in detail; plastic surgery practices on Wix Studio sit underneath the same ceiling every Wix Studio site sits under.
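A pre-flight check against that kind of cap is cheap to run before a package ever touches the platform. A minimal sketch, assuming the roughly-8,000-character figure above:

```ts
// Sketch: pre-flight a schema package against an assumed field cap before it
// touches the platform. The ~8,000-character figure is the observed Wix
// Studio limit cited above, not a documented constant.
const FIELD_CAP = 8000;

function fitsFieldCap(schemaPackage: object): boolean {
  const size = JSON.stringify(schemaPackage).length;
  console.log(`serialized package: ${size} characters`);
  return size <= FIELD_CAP;
}
```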

On Squarespace 7.1, the editor emits some types but caps custom JSON-LD at the page level and refuses canonical edits — the canonical trap that ships with every 7.1 site. The 34% Core Web Vitals pass rate and 3.6-second median mobile LCP add a second ceiling underneath the schema one.

WordPress is where most ASPS-member sites live, and it is also where the schema collisions concentrate. A typical plastic surgery WordPress build runs an SEO plugin (Yoast, RankMath, or All-in-One SEO), a theme builder (Elementor, Divi, Bricks), a niche plastic-surgery theme (Mosaic, Surgeons Advisor), and a provider directory plugin. Each of those layers can emit Physician or MedicalBusiness schema. The result is conflicting versions of the same entity, with different sameAs arrays, different credential fields, and different procedure lists. AI engines that cannot resolve the conflict either drop the entity or pick one version arbitrarily — and the version they pick is rarely the most complete one. The cluster piece on WordPress Bricks vs Elementor GEO walks through which combinations actually emit a clean entity graph and which don’t, because most ASPS-member sites are on a plugin-stack-dependent WordPress configuration where the answer is “neither, until the plugin stack is rationalized.”

The technique pillar that ties this together — the entity-graph stack — covers the JSON-LD output rules at the platform level and the structural reasons AI engines cite some sites and skip others. Plastic surgery is one of the cleanest case studies: the entity graph is rich (Person, Physician, MedicalBusiness, MedicalProcedure, Hospital, Review, FAQPage), the credentials are externally verifiable (ABPS, ASPS, state board), and the buyer queries are explicit (named-doctor, metro-specific, constraint-bound). When the schema package emits cleanly, citation share follows. When it doesn’t, no amount of content marketing closes the gap.

The Wednesday data drop: named-doctor citation share across 15 metros

The data layer ConnectEra runs against the plastic surgery vertical is the first published estimate of named-doctor citation share. The Haute MD/5WPR index measured brand share — Botox, Juvéderm, CoolSculpting, SkinCeuticals, Morpheus8 in the top five. The Metricus index measured medspa-chain share — Allergan/AbbVie at 90%-plus, RealSelf at 75%, Ideal Image at 65%. Neither index ran the prompt set that asks AI engines to name a specific surgeon.

What is the named-doctor citation share data drop?

A weekly scan of 15 metros and 8 procedure families against ChatGPT, Claude, Perplexity, and Google AI Overviews using the verified buyer prompt set. The output is a per-metro, per-procedure leaderboard of named surgeons by citation share. The 2024 ASPS baseline of 306,196 breast augmentations contextualizes the volume the named-doctor surface decides; the share itself has not been published before.

The methodology has six pieces:

  1. Prompt set: the 15 verified prompts above, expanded with metro substitution to cover Dallas, Houston, Atlanta, Chicago, NYC, Los Angeles, Miami, San Diego, Seattle, Boston, Phoenix, Philadelphia, San Francisco, Denver, and Charlotte.
  2. Engine set: ChatGPT (free and Plus), Claude (free and Pro), Perplexity (free and Pro), and Google AI Overviews from a logged-out US session.
  3. Procedure set: rhinoplasty (primary and revision), facelift (deep plane and SMAS), BBL, breast augmentation, tummy tuck, mommy makeover, blepharoplasty, gynecomastia.
  4. Scoring: a citation counts when an AI engine names a specific surgeon by full name in response to a procedure-plus-metro prompt. Brand mentions and practice-only mentions are tracked separately.
  5. Cadence: weekly scan, Wednesday publication, with a 12-week rolling window so movement is visible.
  6. Leaderboard: top 10 named surgeons per metro per procedure, plus the procedure-level share for the top 10 across all metros.
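The substitution and scoring mechanics (pieces one and four) reduce to a few lines. The sketch below uses illustrative templates, metros, and a placeholder surgeon roster; it is not the production scanner:

```ts
// Sketch: metro substitution (piece one) and the scoring rule (piece four).
// Templates, metros, and the surgeon roster are illustrative placeholders.
const metros = ["Dallas", "Houston", "Atlanta", "Chicago", "NYC"]; // ...of 15
const templates = [
  "Who is the best rhinoplasty surgeon in {metro}?",
  "Best board-certified plastic surgeon for tummy tuck {metro}",
];

const prompts = metros.flatMap((metro) =>
  templates.map((t) => t.replace("{metro}", metro))
);
console.log(`${prompts.length} prompts per engine`);

type Mention = "named_surgeon" | "practice_only" | "brand_or_other";

// A citation counts only when the engine names a surgeon by full name;
// practice-only and brand mentions land in separate buckets.
function classify(answer: string, roster: string[]): Mention {
  if (roster.some((fullName) => answer.includes(fullName))) return "named_surgeon";
  if (/clinic|center|institute|plastic surgery/i.test(answer)) return "practice_only";
  return "brand_or_other";
}

// Share of named-surgeon answers that cite this surgeon.
function citationShare(answers: string[], roster: string[], surgeon: string): number {
  const named = answers.filter((a) => classify(a, roster) === "named_surgeon");
  return named.length === 0
    ? 0
    : named.filter((a) => a.includes(surgeon)).length / named.length;
}
```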

The 2024 ASPS baseline of 306,196 breast augmentations gives the volume context. At an average all-in patient pay of $6,000 to $14,000, that’s a $1.8 billion to $4.3 billion procedure category in a single year. The named-doctor citation share decides which surgeons capture the AI-routed slice of that demand — and AI-routed traffic in 2026 is the only segment growing.

The data drop sits inside a hub system. The pillar piece — this article — frames the scan and the methodology. The cluster piece on the named-doctor map publishes the actual scan output each Wednesday. The cluster piece on the directory stack walks through how each authority surface contributes to a surgeon’s score. The cross-hub piece on the entity graph covers the schema layer that makes a surgeon’s site readable in the first place. The patient who runs the prompt sees the surgeon. The surgeon’s competitors see the leaderboard. The market sees the gap.

What lives in this hub: the 2 plastic-surgery clusters

The plastic surgery hub publishes two cluster pieces beneath this pillar. Each runs at the pillar’s specificity level — original data where it exists, primary-source citation where it doesn’t — and each carries the same entity-graph and citation-surface logic at the cluster level.

The named-doctor citation theft report is the data drop. It publishes the 15-metro, 8-procedure scan output, with named-surgeon leaderboards updated weekly, the methodology documented in line, and the per-metro citation-share concentration ratios that frame the market structure. The piece is the offensive document — the one that names which surgeons currently own which prompts and quantifies the gap to runners-up.

The RealSelf and ASPS citation stack is the directory mechanics. It walks through ASPS Find a Surgeon, RealSelf, BoardCertifiedDocs, ABPS, PubMed, and Reddit r/PlasticSurgery in detail, with the schema each surface expects, the cross-references each engine actually weights, and the order in which a clean entity graph populates sameAs to maximize citation lift. The piece is the defensive document — the one that walks a practice through its own authority surfaces and identifies the gaps before a competitor closes them.

Together, the three pieces — pillar, named-doctor data drop, directory stack — give a plastic surgery practice the offensive and defensive layers of a 2026 AI visibility posture. The Florida HB 1429 and Texas SB 378 hooks turn into Wednesday compliance content. The Person + Physician + hasCredential schema package turns into a deployment ticket. The 15-metro citation share turns into a quarterly market-position read.

What this looks like as work

The deployment is not a content sprint. It’s a five-layer build: the schema package, the directory profile audit, the regulatory citation pages, the prompt-aligned procedure pages, and the Wednesday data drop cadence. Two of those layers are static (schema, directory profiles); two are quarterly (regulatory citation, procedure pages); one is weekly (data drop).

The first move is the schema audit. Most ASPS-member WordPress sites emit four or more conflicting versions of Physician or MedicalBusiness schema. The audit identifies which plugins are emitting which versions, picks the one closest to canonical, deletes the others, and rebuilds the package server-side so it ships in the initial HTML response. On Wix Studio, the only honest move is migration; the cap and the client-side rendering can’t be configured around. On Squarespace, the canonical trap and the schema cap force the same conclusion. On WordPress with a clean stack — or on Webflow at the luxury tier — the configure-don’t-leave path works.
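The detection half of that audit is mechanical: count entity types across every JSON-LD block in a rendered page. A sketch (the regex extraction is deliberately naive; a production pass would use a DOM parser):

```ts
// Sketch: flag conflicting JSON-LD emissions in a rendered page. The regex
// extraction is deliberately naive; a production audit would use a DOM parser.
function auditSchema(html: string): Map<string, number> {
  const blocks = [...html.matchAll(
    /<script[^>]*type="application\/ld\+json"[^>]*>([\s\S]*?)<\/script>/g
  )];
  const typeCounts = new Map<string, number>();
  for (const [, body] of blocks) {
    try {
      const json = JSON.parse(body);
      const nodes = Array.isArray(json["@graph"]) ? json["@graph"] : [json];
      for (const node of nodes) {
        const t = String(node["@type"]);
        typeCounts.set(t, (typeCounts.get(t) ?? 0) + 1);
      }
    } catch {
      // Malformed block: itself a finding worth logging.
    }
  }
  return typeCounts; // Physician or MedicalBusiness counts above 1 mean a collision
}
```

Run it against the server-rendered HTML (fetched with curl, not a headless browser) so the count reflects what a non-JavaScript crawler actually sees.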

The second move is the directory profile pass. ASPS, RealSelf, ABMS/ABPS, the state medical board, and any hospital privileges pages each get audited for completeness and consistency with the practice’s site. Name spelling, training program, board certification numbers, active-status fields, address, and procedure list need to agree across every surface. The sameAs array in the practice’s Physician schema is rebuilt to match.
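The consistency half of the directory pass is a field-agreement diff across surfaces. A sketch with an illustrative field list; the real pass covers every field named above:

```ts
// Sketch: field-agreement check across directory surfaces. Surface names and
// the field list are illustrative; the real pass covers every field above.
type Profile = {
  surface: string; // e.g. "ASPS", "RealSelf", "state board", "practice site"
  name: string;
  boardCertNumber: string;
  address: string;
};

function findDisagreements(profiles: Profile[]): string[] {
  const issues: string[] = [];
  for (const field of ["name", "boardCertNumber", "address"] as const) {
    const values = new Set(profiles.map((p) => p[field].trim().toLowerCase()));
    if (values.size > 1) {
      issues.push(
        `${field} disagrees across ${profiles.map((p) => p.surface).join(", ")}`
      );
    }
  }
  return issues;
}
```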

The third move is the regulatory citation pages. Florida HB 1429, Texas SB 378, the FTC Endorsement Guides on synthetic reviews — each gets a page that quotes the operative clause, links the primary source, and walks through the practice-level implication. The pages do not target buyer queries; they target patient-protection queries that compound trust on the prompts that do.

The fourth move is the procedure pages. One page per high-volume procedure, structured as the answer to the procedure-plus-metro prompt. The H2 stack includes the answer capsule (40-60 words), the technique-specific FAQ (FAQPage schema), the entity-graph block (Physician + MedicalProcedure nested under MedicalBusiness), and the constraint coverage (price ranges, credential, payment, case volume) that the verified prompt set demands.
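The FAQPage layer of that stack pairs each patient question with its answer capsule. A sketch with illustrative question and answer text, drawing only on the ASPS figures cited above:

```ts
// Sketch: FAQPage block pairing a patient question with its 40-60 word answer
// capsule. Question and answer text are illustrative placeholders.
const faqSchema = {
  "@context": "https://schema.org",
  "@type": "FAQPage",
  mainEntity: [
    {
      "@type": "Question",
      name: "How much does breast augmentation cost in Dallas?",
      acceptedAnswer: {
        "@type": "Answer",
        text:
          "ASPS reports an average surgeon fee of $4,875 for breast " +
          "augmentation, with all-in patient pay typically running $6,000 " +
          "to $14,000 depending on implant type, facility fees, and " +
          "anesthesia. Confirm a written all-in estimate at consultation.",
      },
    },
  ],
};
```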

The fifth move is the data drop cadence. The Wednesday scan publishes against the same 15-metro, 8-procedure prompt set, with the practice’s own surgeons tracked as a control. The leaderboard becomes the market-position read; movement on it becomes the quarterly report. Practices that publish a credible 2026 named-doctor scan — methodology open, prompt set open, leaderboard open — earn citation share from the same engines they are measuring, because the data itself becomes the kind of primary source AI engines reach for.

The five layers compound. A practice with a clean schema package, complete directory profiles, primary-sourced regulatory pages, prompt-aligned procedure pages, and a published weekly data drop sits at a different citation surface than a practice with any one layer missing. The Haute MD index measured brands. The Wednesday scan measures surgeons. The gap between the two is the market.

Frequently asked questions

Why does Haute MD's index stop at brand level?
The Haute MD/5WPR Medical Aesthetics AI Visibility Index released April 25, 2026 ranked 25 brands across ChatGPT, Claude, Perplexity, and Google AI Overviews using 60-plus queries. It measured Botox, Juvéderm, CoolSculpting, SkinCeuticals, and Morpheus8 — products and product lines, not the surgeons who deliver them. The index was designed for the $22 billion global aesthetics market where brand sponsorship pays for the audit. Named-doctor share would require a different sponsor and a different prompt set, which is the gap an honest 2026 study can fill.
Does my ASPS profile help my AI citation share?
Yes, but only if your profile is complete and your website confirms the same identity. ASPS Find a Surgeon is the gold-standard authority surface AI engines reach for when prompts include phrases like board-certified or ASPS member. The lift compounds when your site emits Person and Physician schema with sameAs back to the ASPS profile, the ABMS verification page, your state board listing, and any RealSelf or hospital affiliation. Without the schema, the ASPS profile sits orphaned and the AI engine has no graph to traverse.
Can I cite ChatGPT-generated patient reviews?
No. A peer-reviewed analysis of 9,000 RealSelf reviews found 64.3% authentic versus 35.7% AI-generated, and the FTC's revised Endorsement Guides hold synthetic testimonials to the same standard as human ones. Buying GenAI testimonials creates a fake-review enforcement risk that compounds with state medical board outcome-claim rules. The patient-protection narrative — what authentic review verification actually looks like in 2026 — is publishable and citable; the synthetic reviews themselves are not.
Which platform are most ASPS-member sites on, and is that a problem?
There is no surgeon-specific BuiltWith report for 2026, but industry default is WordPress with niche plastic-surgery themes (Mosaic, Surgeons Advisor) for content sites, some Webflow at the luxury tier, and Squarespace for solo practitioners. The plugin-stack-dependent WordPress configuration is where the GEO friction concentrates: schema collisions between SEO plugins, theme builders, and provider-specific plugins routinely cap what can ship in the initial HTML response. ConnectEra's Top-100 ASPS site audit is the workstream that quantifies the blocker rate.

Written by

Founder · ConnectEra

Billy builds AI-citable sites for practices, advisors, and B2B SaaS. Over 80 migrations in the last 18 months — every one with a live audit, a fixed price, and a 7-day rebuild.

When you're ready

Ready to be the page ChatGPT cites?

Tell us where your site is at. You get back your free growth plan — your platform blocker, your industry's citation gap, and the next move. Yours to keep, whether you hire us or not.

Get my free growth plan
