
ASPS, RealSelf, PubMed: the plastic surgeon citation stack 2026

ASPS Find a Surgeon is the gold-standard authority. RealSelf covers 10,000 providers across 115 countries with AI features on the 2026 roadmap. PubMed is Perplexity's primary medical authority. r/PlasticSurgery has 345K members.

By Billy Reiner · Updated May 13, 2026 · 12 min read

ASPS Find a Surgeon is the gold-standard authority surface for plastic surgeons in 2026. RealSelf covers 10,000 aesthetic providers across 115 countries with a 2025 rebrand and AI features on the 2026 product roadmap. PubMed and NIH are Perplexity's primary medical authority. Reddit r/PlasticSurgery has 345,000 members. The Plan-listed BoardCertifiedDocs and ABPS surfaces remain hypothesis-only: neither is named as a source in any 2026 citation study.

The first thing every plastic surgery website does wrong on AI citation is treat directories like Yelp pages — populate the profile, claim the listing, move on. That model worked for Google in 2014. It does not work for ChatGPT, Claude, or Perplexity in 2026, because each of these engines retrieves from a different stack of authority surfaces and weights them differently for different prompt families.

There are four surfaces that matter for a board-certified plastic surgeon in 2026: ASPS Find a Surgeon, RealSelf, PubMed and NIH, and Reddit r/PlasticSurgery. A fifth — BoardCertifiedDocs and the ABPS verification page — was named in the original Plan but has not surfaced in any 2026 citation study. The honest 2026 stance treats it as a reinforcement target, not a primary citation surface.

This cluster sits underneath the plastic surgeon AI visibility playbook, which is the hub for the named-doctor citation strategy. It is laterally tied to the named-doctor citation theft report — the directories below quantified here are the surfaces that report measures against — and to two analogous concentration maps: the medspa Allergan citation monopoly and the seven-directory monopoly that owns every legal query. The schema architecture that makes these directory citations actually compound into your site lives at the named-credential entity graph that links to ASPS and RealSelf.

What is the 2026 plastic surgeon citation stack?

Four authority surfaces drive named-surgeon AI citation in 2026: ASPS Find a Surgeon (gold-standard, board-certified prompts), RealSelf (10,000 providers across 115 countries, procedure-level prompts), PubMed and NIH (Perplexity’s primary medical authority), and Reddit r/PlasticSurgery (345,000 members, candid-experience prompts). BoardCertifiedDocs and ABPS were Plan-listed but remain hypothesis-only: neither is named as a source in any 2026 citation study.

The numbers underneath this stack are the reason a procedure-page rebuild without a directory plan compounds nothing. ASPS reports an average breast augmentation surgeon fee of $4,875, with all-in patient pay $6,000 to $14,000. Surgical procedures span $3,000 to $15,000 across the ASPS membership (ASPS via Cape Cod Plastic Surgery 2026). The 2024 ASPS Statistics Report — still the most recent comprehensive procedural baseline — counted 306,196 breast augmentations in a single year. Noninvasive fat reduction is up 77% versus 2019. Neuromodulator injections are up 73%. Hyaluronic acid fillers are up 70% (WiFi Talents Plastic Surgery Data 2026). None of that volume converts on a generic “what is rhinoplasty” page. It converts on the prompt “best deep plane facelift surgeon over 50 patients” — and that prompt routes through the four directories below.

Why ASPS Find a Surgeon is the 2026 gold-standard surface

ASPS Find a Surgeon is the only plastic-surgery directory explicitly named as the gold-standard authoritative source in 2026. The American Society of Plastic Surgeons identifies its own Find a Surgeon tool as the verifying surface for board certification (ASPS 2026), and the prompt families that route through it are the ones with the highest commercial intent. From the verified 2026 prompt set: “Best board-certified plastic surgeon for tummy tuck Houston,” “ASPS member surgeon for hand surgery, Boston,” and “What should I look for in a facelift surgeon?” all surface ASPS-member practices first when the engine has any signal to traverse.

The mechanism is simple. ChatGPT, Claude, and Perplexity retrieve from a small, durable set of authority surfaces and cross-reference at least two before naming a specific surgeon in a citation. ASPS is the trust-signal surface — the one that disambiguates “board-certified” from the unverified self-claim that surrounds it on most cosmetic websites. The prompts that include “board-certified,” “ASPS member,” or specific procedural credentials route through the ASPS directory first.

The catch is that an ASPS profile alone does not lift citation share. The profile has to wire back into your site’s entity graph through Person and Physician schema with a sameAs chain pointing to the ASPS URL and a hasCredential block referencing ABMS or ABPS verification. The schema architecture that does that work lives at the named-credential entity graph; the ASPS profile is the citable anchor that graph traverses to.
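A minimal JSON-LD sketch of that wiring, assuming a hypothetical surgeon and placeholder URLs; only the schema.org types and property names (Person, Physician, sameAs, hasCredential) are real, and every value below must be replaced with the practice's own:

```json
{
  "@context": "https://schema.org",
  "@type": ["Person", "Physician"],
  "name": "Dr. Jane Example",
  "medicalSpecialty": "PlasticSurgery",
  "url": "https://www.example-practice.com/dr-jane-example",
  "sameAs": [
    "https://find-a-surgeon.plasticsurgery.org/profile/example",
    "https://www.realself.com/dr/jane-example"
  ],
  "hasCredential": {
    "@type": "EducationalOccupationalCredential",
    "credentialCategory": "Board Certification",
    "recognizedBy": {
      "@type": "Organization",
      "name": "American Board of Plastic Surgery"
    }
  }
}
```

The sameAs array is what lets an engine collapse the ASPS profile and the practice site into a single surgeon entity instead of two weak ones.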

The 2026 prompt families that route through ASPS — “best board-certified plastic surgeon for [procedure] [metro],” “ASPS member surgeon for [sub-specialty], [metro],” “what should I look for in a [procedure] surgeon,” “how do I verify a plastic surgeon’s credentials” — are the prompts with the cleanest commercial intent. A buyer searching for “board-certified” is past the awareness layer and inside the consideration set (PRO Star SEO 2026).

RealSelf’s 2025 rebrand and what 2026’s AI features change

RealSelf is the procedure-level citation surface. The platform covers 10,000 aesthetic providers across 115 countries, rebranded in 2025, and has AI features on its 2026 product roadmap per BeautyMatter’s 2026 omnichannel coverage. The 2025 rebrand reframed RealSelf around content velocity and provider-experience surfaces, which matters for AI citation because retrieval models reward freshness signals on profile updates and review velocity.

The procedure-level prompt families that route through RealSelf are distinct from the ASPS prompts. From the verified 2026 set: “Top RealSelf-rated injector Houston,” “Best ethnic rhinoplasty surgeon Los Angeles,” “Top facelift surgeon for natural results Miami,” and “Surgeon for BBL with low complication rate” all surface RealSelf as a primary or secondary citation. The signal RealSelf carries is review density and procedure-specific outcome reporting — different from the credential-verification signal ASPS carries.

The 64.3% authentic versus 35.7% AI-generated review classification rate from the peer-reviewed ScienceDirect analysis of 9,000 RealSelf reviews is the freshness signal Perplexity in particular respects. Perplexity ties every claim to a specific source in 78% of complex research questions versus ChatGPT’s 62% (Whitehat SEO Qwairy analysis 2026), and the source-binding behavior favors profiles with verifiable review density over profiles with cosmetic-volume density. A surgeon with a populated RealSelf profile, before-and-after evidence, and authentic Q&A activity ranks above a surgeon with an empty or stub profile, even if the second surgeon’s website is more polished.

The 2026 AI roadmap is the watch-this layer. RealSelf has not published a feature spec yet, but the BeautyMatter coverage frames the roadmap around AI-assisted patient matching and procedure-search refinement. When that ships, the directory becomes the kind of pre-vetted retrieval surface that AI engines treat the way they currently treat Wikipedia — and the surgeons who already have populated, schema-aligned profiles will compound the lift while the latecomers play catch-up.

The architectural play is straightforward. Populate the RealSelf profile to the highest tier the practice can sustain. Emit AggregateRating on your own site that cites the same review counts visible on RealSelf — not made-up numbers. Wire sameAs from your Physician schema to the RealSelf provider URL. The two surfaces reinforce each other. The engine retrieves both and treats them as the same entity. That is the lift.
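A hedged sketch of that pairing, again with a hypothetical surgeon; the rating value and review count below are placeholders and must mirror the counts actually visible on the live RealSelf profile:

```json
{
  "@context": "https://schema.org",
  "@type": ["Person", "Physician"],
  "name": "Dr. Jane Example",
  "sameAs": ["https://www.realself.com/dr/jane-example"],
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": 4.8,
    "reviewCount": 212,
    "bestRating": 5
  }
}
```

If the RealSelf count and the on-site AggregateRating diverge, the mismatch undercuts exactly the cross-reference the block exists to create.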

PubMed and NIH for Perplexity-specific authority

PubMed and NIH are the Perplexity-weighted surface, and the weighting differential is large enough to move strategy. Perplexity averages 21.87 citations per response — the highest of any major AI platform — and ties every claim to a specific source in 78% of complex research questions (Authority Tech 2026). On medical-procedure queries, primary medical literature carries disproportionate weight in that retrieval mix.

The 5W AI Platform Citation Source Index 2026 — the 50-website cross-LLM ranking covering 680 million-plus aggregate citations from August 2024 through April 2026 — confirmed PubMed and NIH as Perplexity-weighted authority surfaces (5W via PR Newswire 2026). The Index also placed Reddit as the #1 cross-LLM citation source overall, which routes back to the r/PlasticSurgery surface below.

The entity-graph value of a published clinician compounds across queries. A surgeon with peer-reviewed publications in the Aesthetic Surgery Journal or Plastic and Reconstructive Surgery — wired into their own site via sameAs to ORCID, PubMed Author Page, or the relevant journal URL — wins citation lift on procedural-mechanism queries that funnel into commercial intent downstream. A surgeon ranking on “how does deep plane facelift work” because they authored or co-authored a paper on the technique converts on “top deep plane facelift surgeon over 50 patients” within the same retrieval session, because Perplexity in particular maintains source-bound conversational state.

The PubMed surface is not democratic. A surgeon without published research cannot manufacture this surface in the short term. But a surgeon with even a single peer-reviewed publication who has not wired the publication back into their site schema is leaving the citation lift on the table — that is the gap most plastic surgery websites have today.
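For a published surgeon, the wiring is a sameAs extension on the same Physician block. The ORCID iD and PubMed author-search URL below are placeholders, not real identifiers:

```json
{
  "@context": "https://schema.org",
  "@type": ["Person", "Physician"],
  "name": "Dr. Jane Example",
  "sameAs": [
    "https://orcid.org/0000-0000-0000-0000",
    "https://pubmed.ncbi.nlm.nih.gov/?term=Example+J%5BAuthor%5D",
    "https://www.example-practice.com/dr-jane-example"
  ]
}
```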

Reddit r/PlasticSurgery as named-discussion citation surface

Reddit r/PlasticSurgery is the candid-experience surface. The subreddit has 345,000 members with what GummySearch’s 2026 community profile calls “huge size and crazy activity.” The cross-LLM citation weight for Reddit is the largest single shift the 5W Citation Source Index 2026 documented — Reddit appears as the #1 cross-LLM citation source across ChatGPT, Claude, Perplexity, Gemini, and Google AI Overviews aggregated.

The plastic-surgery-specific signal is the candid review and experience layer. Prompts like “Best ethnic rhinoplasty surgeon Los Angeles,” “Surgeon for BBL with low complication rate,” and “Best surgeon for Ozempic-loss skin removal” surface r/PlasticSurgery threads where named surgeons are recommended or critiqued by patients. The threads that name a specific surgeon — with metro and procedure context — are retrieval-rich for AI engines because they pair entity (surgeon name) with attribute (procedure, metro, outcome).

The complication is the FTC Endorsement Guides 2023 revision (still active in 2026), which holds virtual and AI endorsers to the same standard as human ones (FTC). The fake-review exposure is real: the ScienceDirect study classified 35.7% of 9,000 RealSelf reviews as AI-generated, and surgeons buying GenAI testimonials run an FTC enforcement risk and a state-medical-board outcome-claim risk at the same time.

The defensible Reddit play is patient-led, not surgeon-led. The architectural answer is to make sure the surgeon’s site reinforces the citations Reddit users are already making — clean Person and Physician schema with the surgeon’s name, sub-specialty, and metro disambiguated so the engine can match the Reddit thread to the right surgeon entity.

Why BoardCertifiedDocs is still hypothesis-only

The original Plan listed BoardCertifiedDocs and the ABPS verification page as dominant surgeon-citation directories. The 2026 research register flagged that as hypothesis only, and nothing in the published 2026 citation studies has changed the verdict. None of Profound’s ChatGPT citation analysis, the 5W Citation Source Index 2026, Metricus’s vertical audits, or the Haute MD and 5WPR Aesthetics Index named BoardCertifiedDocs or ABPS as a primary citation surface.

That does not make the surfaces useless. ABMS and ABPS verification are entity-graph reinforcement targets. A surgeon who emits Person and Physician schema with a hasCredential block referencing ABMS or ABPS verification — and a sameAs URL pointing to the relevant verification page — gives the engine a third signal to cross-reference against the ASPS profile and the surgeon’s site. The reinforcement layer makes the named-credential entity graph denser, which compounds across all four primary surfaces above.

The 2026 honest stance: treat ABMS and ABPS as schema-graph anchors, not as primary citation directories. Watch the 2026-2027 citation studies for any change in the named-source map. If a future Profound or 5W Index run starts surfacing BoardCertifiedDocs by name, upgrade it to a primary surface. Until then, the four primary surfaces above are the citation stack.

The directories above are citable on their own. They become a compounding citation engine when they are wired into your site’s entity graph through Person, Physician, MedicalProcedure, and hasCredential schema with sameAs chains to each directory profile.

The minimum viable entity graph for an ASPS-member plastic surgeon in 2026:

  • Person and Physician for the surgeon (medicalSpecialty, availableService, hasCredential)
  • hasCredential blocks for ABMS or ABPS board certification, state medical license, and ASPS membership
  • sameAs URLs to ASPS Find a Surgeon profile, RealSelf provider page, PubMed Author Page or ORCID (if published), state medical board verification, hospital affiliation, LinkedIn
  • MedicalProcedure for each surgical procedure with bodyLocation, procedureType, preparation, followup
  • FAQPage per procedure page; AggregateRating citing review counts that match the RealSelf profile
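A sketch of one MedicalProcedure block from that list. The field values are illustrative, and the free-text procedureType here is an assumption; a real implementation should follow schema.org's current guidance for that property:

```json
{
  "@context": "https://schema.org",
  "@type": "MedicalProcedure",
  "name": "Deep Plane Facelift",
  "procedureType": "Surgical",
  "bodyLocation": "Face",
  "preparation": "Pre-operative consultation, medical history review, imaging",
  "followup": "Suture removal at one week; activity restrictions for six weeks"
}
```

One block per surgical service, emitted on that service's own procedure page, is the pattern the list above describes.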

That graph runs to the edge of what any platform with a hard schema cap can deliver. Wix Studio’s 8,000-character total cap forces a choice between entity layers — the deeper analysis lives in the platform-vs-AI citation guide for 2026. The plugin-stack WordPress configuration that most ASPS-member sites run on can ship the full graph server-side without a cap, but it requires a developer to manage schema collisions between SEO plugins, theme builders, and provider-specific plugins.

The pattern matches the analogous concentration maps in adjacent verticals. Med-spas face Allergan’s 90%-plus citation share for branded queries — the Allergan citation monopoly cluster covers the brand-versus-named structure. Lawyers face seven directories that own 85%-plus of legal AI citation — the seven-directory monopoly cluster covers the productized-mastery response. Plastic surgeons face the four-directory stack above. The response is the same: build the entity graph that wires into the citation surfaces, populate the directory profiles to the highest tier the practice can sustain, and stop fighting the brand-citation layer the directories already own.

The implementation order

For an ASPS-member practice that takes 2026 AI citation seriously, the order is sequential. Reverse it and the spend compounds in the wrong direction.

  1. Audit the platform. Squarespace 7.1 and the Wix Studio 8,000-character schema cap are ceilings on every other fix. Plan migration before content.

  2. Populate ASPS Find a Surgeon to completion. Sub-specialty, hospital affiliations, board certifications, all listable procedures. The trust-signal anchor for the entire entity graph.

  3. Populate the RealSelf provider profile. Procedure list, state-board-compliant before-and-after evidence, Q&A cadence. Match the review counts in your AggregateRating schema exactly.

  4. Build the named-surgeon entity graph. Person plus Physician plus hasCredential plus a sameAs chain to ASPS, RealSelf, ABMS or ABPS verification, state medical board, hospital affiliation, LinkedIn.

  5. Wire PubMed if published. ORCID and PubMed Author Page in the sameAs chain. If unpublished, prioritize a co-authored case report before the next audit cycle.

  6. Procedure-page architecture. One MedicalProcedure schema block per surgical service with bodyLocation, procedureType, preparation, and followup populated. FAQPage on each procedure page.

  7. Measure citation share, not Google traffic. Run the verified 2026 prompt set monthly through ChatGPT, Claude, and Perplexity. The number that matters is named-surgeon citation per metro-procedure prompt.
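One way to operationalize step 7 is a small script that tallies named-surgeon mentions across saved engine responses. Everything here is an assumption for illustration: the surgeon name, the sample answers, and the idea of storing each month's answers as a prompt-to-text mapping are hypothetical, not any engine's API.

```python
def citation_share(responses, surgeon):
    """Fraction of prompts whose saved answer names the surgeon.

    `responses` maps each prompt in the monthly set to the raw answer
    text captured from one engine (ChatGPT, Claude, or Perplexity).
    """
    if not responses:
        return 0.0
    hits = sum(1 for answer in responses.values()
               if surgeon.lower() in answer.lower())
    return hits / len(responses)


# Hypothetical sample: two prompts from the verified set, one answer
# naming the surgeon and one not.
sample = {
    "best board-certified plastic surgeon for tummy tuck Houston":
        "Patients frequently mention Dr. Jane Example, an ASPS member.",
    "top facelift surgeon for natural results Miami":
        "Several board-certified surgeons are recommended in Miami.",
}
print(citation_share(sample, "Dr. Jane Example"))  # 0.5
```

Running the same prompt set against each engine monthly and charting this ratio per metro-procedure prompt gives the named-citation number the step calls for.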

The compounding math runs against ASPS-reported pricing — $4,875 average breast augmentation surgeon fee, $3,000 to $15,000 across surgical procedures. One AI-cited query that surfaces the surgeon by name on a board-certified prompt converts at four to five figures. The four surfaces — ASPS Find a Surgeon for credential trust, RealSelf for procedure-level citation, PubMed and NIH for Perplexity authority, Reddit r/PlasticSurgery for candid-experience prompts — are the 2026 citation stack a serious practice owns. BoardCertifiedDocs and ABPS are the reinforcement layer when the schema graph traverses them via hasCredential and sameAs. ConnectEra’s plastic surgery audit runs the four-surface scan and the entity-graph completeness report at exploration-call.html.

Frequently asked questions

Is ASPS Find a Surgeon the only directory ChatGPT actually cites?
No, but it is the gold-standard authority surface and the one that disambiguates board-certified prompts. ASPS is explicitly named as the gold-standard source for plastic-surgeon authority in 2026, and prompts like “board-certified plastic surgeon for tummy tuck Houston” or “ASPS member surgeon for hand surgery, Boston” route through it first. RealSelf takes the procedure-level prompts. PubMed and NIH carry the Perplexity-specific weight on procedural-mechanism queries. Reddit r/PlasticSurgery surfaces in candid-experience prompts. AI engines cross-reference at least two of these surfaces before naming a specific surgeon, so a single-directory profile rarely lifts share.
Does RealSelf's 2025 rebrand change my AI citation strategy?
It changes the surface, not the floor. RealSelf rebranded in 2025 and has AI features on the 2026 product roadmap per BeautyMatter. The platform now covers 10,000 aesthetic providers across 115 countries. Surgeons should treat the RealSelf profile as a procedure-level citation asset — the layer ChatGPT and Perplexity reach for on prompts like “top RealSelf-rated injector Houston” or “top facelift surgeon for natural results Miami.” Pair the populated profile with a Person plus Physician schema block on your own site, sameAs back to the RealSelf URL, and the same review counts emitted in your AggregateRating. The 64.3% authentic versus 35.7% AI-generated review classification from the peer-reviewed ScienceDirect analysis of 9,000 RealSelf reviews is the freshness signal Perplexity respects most.
Why is BoardCertifiedDocs hypothesis-only in 2026?
Because the 2026 citation studies that named the directories AI engines actually cite — Profound, the 5W AI Platform Citation Source Index 2026, Metricus, the Haute MD and 5WPR Aesthetics Index — did not surface BoardCertifiedDocs or the ABPS verification page as a named source for surgeon recommendations. The original Plan listed both as dominant; the research register flagged that as hypothesis only. The defensible 2026 stance is to treat ABMS or ABPS verification as an entity-graph reinforcement target via Person plus hasCredential schema, not as a primary citation surface. The ASPS profile remains the gold standard. The credential-verification function is a second-tier reinforcement layer.
How much does PubMed citation matter outside of Perplexity?
Less, but not zero. Perplexity ties every claim to a specific source in 78% of complex research questions versus ChatGPT's 62%, and Perplexity averages 21.87 citations per response — the highest of any major AI platform. PubMed and NIH are weighted heavily in that retrieval mix on medical-procedure prompts. ChatGPT and Claude surface PubMed less often than Perplexity does, but the entity-graph value of a published clinician — sameAs to ORCID, PubMed Author Page, or the relevant journal — compounds on procedural-mechanism queries that funnel into commercial intent downstream. A surgeon with peer-reviewed publications wins citation lift on “how does deep plane facelift work” that converts on “top deep plane facelift surgeon over 50 patients.”

Written by

Founder · ConnectEra

Billy builds AI-citable sites for practices, advisors, and B2B SaaS. Over 80 migrations in the last 18 months — every one with a live audit, a fixed price, and a 7-day rebuild.

When you're ready

Ready to be the page ChatGPT cites?

Tell us where your site is at. You get back your free growth plan — your platform blocker, your industry's citation gap, and the next move. Yours to keep, whether you hire us or not.

Get my free growth plan
