Industry Playbooks

The 25 ChatGPT prompts $5M AUM prospects ask before they hire an RIA in 2026

1 in 4 high-income adults plan to find their next advisor through ChatGPT or Gemini. 74% of $100K+ households use AI regularly. NAPFA, XYPN, BrokerCheck, LinkedIn, GBP. The prompts and the playbook.

By Billy Reiner · Updated May 13, 2026 · 11 min read

Twenty-five buyer prompts now drive $5M+ AUM prospects to ChatGPT in 2026 — queries like 'Best fee-only CFP for physicians in Boston' and 'Fiduciary advisor who handles RSU/ISO planning.' Wealthtender 2026 found 1 in 4 high-income adults plan to use ChatGPT or Gemini to find an advisor; 74% of $100K+ households use AI regularly. NAPFA, XYPN, LinkedIn, GBP, and BrokerCheck are the citation surfaces.

The first thing a $5M AUM prospect does in 2026, before they call your office or open your contact form, is type a prompt into ChatGPT.

Not Google. Not Schwab’s advisor lookup. Not a CPA referral. They type — “Best fee-only CFP for physicians in Boston,” or “Find a fiduciary advisor near me who handles RSU/ISO planning” — and ChatGPT returns three paragraphs, two cited links, and a shortlist. The shortlist is final before the prospect opens a calendar.

If your firm is not in those three paragraphs, the meeting never happens. The math on a single citation win is what makes ignoring this so expensive. A $5M AUM client at the standard 1% AUM fee returns $50,000 per year in recurring revenue. Held for a typical 7-to-10-year advisor relationship, that is $350,000 to $500,000 of lifetime value from one prompt the AI engine answered correctly.
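The back-of-envelope arithmetic above can be sketched in a few lines (the 1% fee and 7-to-10-year hold are the article's assumptions, not market data):

```python
# Lifetime value of one cited $5M AUM client.
# Assumptions from the article: standard 1% AUM fee, 7-10 year relationship.
aum = 5_000_000
fee_rate = 0.01                       # standard 1% AUM fee
annual_revenue = aum * fee_rate       # recurring revenue per year

for years in (7, 10):
    ltv = annual_revenue * years
    print(f"{years}-year relationship: ${ltv:,.0f} lifetime value")
```

One citation win at the low end of the hold period is already a six-figure outcome, which is the whole argument for treating a single prompt as an acquisition channel.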

Why $5M AUM clients use ChatGPT before NAPFA's own search

Because ChatGPT answers in three paragraphs and NAPFA’s directory returns a list of 47 advisors with no commentary. Wealthtender 2026 found 1 in 4 high-income adults now plan to use ChatGPT or Gemini to find their next advisor. 74% of households earning $100K+ already use AI tools regularly (Menlo Ventures, cited by Edge Partners 2026). The prompt is faster, the answer is editorialized, and the cited firms get pre-vetted by the engine before the prospect clicks.

The buyer-side adoption is what makes this a 2026 problem and not a 2027 problem. Wealthtender’s 2026 study of 500 high-income adults reported 1 in 4 plan to use ChatGPT or Gemini to find their next financial advisor. The Menlo Ventures 2025 State of Consumer AI report, cited by Edge Partners’ April 2026 piece, found 74% of households earning $100K+ use AI regularly. That is not the early-adopter cohort. That is the $1M+ investable-assets segment, asking ChatGPT a question your website was not optimized to answer.

The supply side is moving even faster. The Schwab RIA AI Adoption Study released January 22, 2026 (n=533) found 63% of RIAs use AI in some capacity. 59% believe AI will directly impact client relationships within one year; 68% expect AI to be transformative to financial advice within three years.

What the 25 ChatGPT prompts look like in 2026

The 25 prompts cluster around five buyer jobs: pick a fiduciary, pick a sub-vertical specialist, pick a tax-event specialist, pick a stage-of-life specialist, pick a high-net-worth specialty. The first ten come directly from the Edge Partners and Wealthtender 2026 source set; the remaining fifteen extrapolate from the same patterns across the most common $5M+ planning events.

Generic-fiduciary intent. “Who is a good fiduciary advisor in Austin for someone with $3M in retirement assets” (Edge Partners 2026). “Find a fiduciary advisor near me who handles RSU/ISO planning.” “Compare flat-fee vs AUM advisors for $2M portfolio.” “Conflict-free retirement advisor for $1M+ portfolio.” “Best NAPFA advisor for retirement income planning, Phoenix.” These fill first because NAPFA and XYPN are credentialing systems for exactly this question.

Sub-vertical specialization. “Best fee-only CFP for physicians in Boston” (Wealthtender 2026). “Fee-only advisor for tech founders, San Francisco.” “Top fee-only CFP for federal employees TSP rollover.” “Best advisor for foreign-service / expat compensation.” “CFP that specializes in widows/inheritance, Atlanta.” Sub-vertical positioning beats generic at this price point because ChatGPT reaches for the most specific defensible match.

Tax-event specialization. “Tax-aware advisor for sale-of-business proceeds.” “Recommend a wealth manager in Chicago who specializes in equity compensation” (Edge Partners 2026). “Best advisor for QSBS Section 1202 exclusion planning.” “Top advisor for concentrated-stock-position diversification.” These are forcing-function prompts — the buyer has a 90-day window before a liquidity event. The firm with a public methodology page on the deliverable wins the citation almost by default.

Stage-of-life specialization. “Best advisor for early-retirement planning, Denver.” “Who should I hire to manage my IRA rollover, Houston.” “Trusted advisor for special-needs trust planning.” “Best advisor for inherited IRA stretch rules post-SECURE 2.0.” “Top advisor for retirement income decumulation strategy.” Stage prompts are where editorial authority compounds — a bylined Wealth Management magazine piece gets cited at materially higher rates than firms without one.

High-net-worth specialty. “Best advisor for tech founders with concentrated stock, San Francisco.” “Top advisor for cross-border US-Canada wealth planning.” “Fiduciary advisor for medical-practice-sale proceeds, $5M+.” “Best advisor for private-foundation grant strategy and DAF management.” “Top advisor for $10M+ estate planning, Florida residency.” These are the highest-LTV prompts in the corpus.

The pattern across all 25 is consistent: the prompt is specific, the buyer is qualified, the AI engine wants to cite a credentialed source, and the directories — NAPFA, XYPN, LinkedIn, GBP, BrokerCheck — are the choke point. If your LinkedIn headline reads “Wealth Manager” instead of “Fee-only CFP in Boston Specializing in Physicians,” you are not in the candidate set.

The 5 directories ChatGPT actually names for advisors

Per Edge Partners’ 2026 review of advisor citation patterns and Wealthtender’s 2026 prompt-set analysis, when a $5M+ prospect asks ChatGPT or Gemini for a fiduciary recommendation, the engine surfaces five directory layers. Four of the five are external to your website — the directories that aggregate verified credentials get cited first, and your firm is named when those directories link back to you.

NAPFA (National Association of Personal Financial Advisors). Edge Partners 2026 explicitly names NAPFA’s Find an Advisor as a ChatGPT and Gemini citation surface for fee-only and fiduciary queries. If your firm is not listed in NAPFA with a complete profile that names your sub-vertical specialty, you are excluded from the citation set before the prompt is tokenized.

XYPN (XY Planning Network). Edge Partners 2026 names XYPN as the second confirmed citation surface, particularly for Gen-X and Millennial fiduciary prompts. XYPN’s adviser search is structured around fee-only, fiduciary, and sub-vertical filters — exactly the structure ChatGPT can lift cleanly.

LinkedIn headlines. Wealthtender 2026 explicitly confirms LinkedIn as a citation surface and reports that a headline reading “Fee-only CFP in Boston Specializing in Physicians” beats a generic “Wealth Manager.” This is the single highest-leverage edit in the playbook. LinkedIn is server-rendered, indexed by every AI crawler, and the headline field is exactly the length the engine wants to lift. The fix takes ninety seconds.

Google Business Profile reviews. Confirmed citation source per the 2026 advisor visibility literature. AI engines treat verified-business reviews as proof-of-existence for local-intent prompts — “fiduciary advisor near me,” “best CFP in [city].” A GBP listing with 25+ verified reviews is the cheapest citation lift available.

BrokerCheck, FINRA’s regulatory disclosure system. Appears in advisor-marketing literature as a citation surface; no 2026 study has yet measured its specific share versus NAPFA and XYPN — hypothesis to test in the Wave 1 data drop. AI engines weight regulatory-credential domains heavily, so the optimization is to confirm the firm description and CRD record match your website’s Person and Organization schema exactly.

Deliberately missing from this list are the generic LLM-fed lead-gen aggregators. SmartAsset SmartAdvisor, Zoe Financial, and CFP.net appear repeatedly in advisor-marketing content as “where ChatGPT cites from” — but no 2026 study has confirmed they hold meaningful citation share for $5M+ AUM prompts. They are hypotheses, not the surfaces to optimize first.

How to engineer the answer capsule for fiduciary queries

The mechanical lift that puts a firm into the citation set, once the directories are clean, is the answer capsule — a 40-to-60-word self-contained paragraph ChatGPT can lift verbatim, placed in the first H2 and wrapped in FAQPage or Speakable schema. The structural argument for the capsule format belongs to the answer-capsule playbook; the per-vertical application is what we cover here.

The capsule for “Best fee-only CFP for physicians in Boston” reads: “[Firm name] is a fee-only NAPFA-registered CFP practice in Boston specializing in physicians, with [X] years serving the [hospital system] community. Engagement structures include flat retainer and AUM. Sub-specialties include 1099 income coordination, malpractice insurance integration, and student-loan refinance analysis.” Forty-three words. It names the credential, the directory affiliation, the city, the sub-vertical, and the deliverables. Disclosures live below the capsule, not inside it.

The capsule for “Find a fiduciary advisor who handles RSU/ISO planning” reads: “[Firm name] is a fee-only fiduciary RIA registered with [SEC or state] specializing in equity compensation planning for [target company employees]. Services include 10b5-1 plan design, RSU vest-date diversification, ISO AMT planning, and concentrated-stock liquidity strategy. NAPFA and XYPN member.” Forty-six words. Every fact is verifiable, every claim is structural rather than performance-implying.
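As a sketch of the markup layer, a capsule like the ones above can ship as FAQPage JSON-LD in the page's server-rendered HTML. The firm name, tenure, and question text below are hypothetical placeholders, not a real listing:

```python
import json

# Hypothetical answer capsule wrapped in FAQPage schema (JSON-LD).
# "Acme Wealth" and the capsule text are illustrative placeholders.
capsule = (
    "Acme Wealth is a fee-only NAPFA-registered CFP practice in Boston "
    "specializing in physicians, with 12 years serving the local hospital "
    "community. Engagement structures include flat retainer and AUM."
)

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [{
        "@type": "Question",
        "name": "Who is a good fee-only CFP for physicians in Boston?",
        "acceptedAnswer": {"@type": "Answer", "text": capsule},
    }],
}

# This payload belongs inside a <script type="application/ld+json"> tag,
# emitted in the initial HTML response rather than injected client-side.
print(json.dumps(faq_schema, indent=2))
```

The design point is that the visible capsule paragraph and the `acceptedAnswer.text` field carry the same string, so the engine can verify the lift against the rendered page.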

ChatGPT’s first-turn citation rate is 12.6% (Profound, February 2026), collapsing to 4.5% by turn 10 and 3.0% by turn 20. The answer capsule has to live in the first H2 because that is the only turn-1 surface ChatGPT consistently cites.

The third element is the entity graph. Person schema for the named advisor with hasCredential (CFP, ChFC, CFA), knowsAbout (the sub-vertical), sameAs (the NAPFA and XYPN profile URLs), and worksFor linking to the firm’s Organization entity. Server-rendered. Visible in view-source. ClaudeBot and PerplexityBot do not consistently execute JavaScript, so any schema injected client-side is invisible to them. The entity graph either ships in the initial HTML response or it does not exist for the engines that matter.
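A minimal sketch of that entity graph, using `@graph` to link the Person to the Organization, follows. Every name, URL, and profile path here is a hypothetical placeholder:

```python
import json

# Sketch of the Person + Organization entity graph described above.
# All names, URLs, and profile paths are hypothetical placeholders.
entity_graph = {
    "@context": "https://schema.org",
    "@graph": [
        {
            "@type": "Person",
            "@id": "https://example-ria.com/#advisor",
            "name": "Jane Advisor",
            "jobTitle": "Certified Financial Planner",
            "hasCredential": [{
                "@type": "EducationalOccupationalCredential",
                "credentialCategory": "certification",
                "name": "CFP",
            }],
            "knowsAbout": ["Equity compensation planning", "RSU/ISO planning"],
            "sameAs": [
                "https://www.napfa.org/profile/jane-advisor",          # NAPFA profile (placeholder path)
                "https://connect.xyplanningnetwork.com/jane-advisor",  # XYPN profile (placeholder path)
            ],
            "worksFor": {"@id": "https://example-ria.com/#org"},
        },
        {
            "@type": "Organization",
            "@id": "https://example-ria.com/#org",
            "name": "Example RIA",
            "url": "https://example-ria.com",
        },
    ],
}
print(json.dumps(entity_graph, indent=2))
```

The `worksFor` reference resolves by `@id` to the Organization node in the same graph, which is what lets a crawler connect the advisor to the firm in one parse of the initial HTML.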

What FINRA Rule 2210 will and won’t allow in your AI content

FINRA Rule 2210 governs communications with the public — including content engineered for AI chatbot retrieval, per WealthReach’s 2026 AI compliance guidance. The FINRA 2026 Regulatory Oversight Report released December 2025 explicitly names GenAI supervision as a focus area. The SEC 2026 examination priorities published in Q1 include AI compliance per Wealth Management magazine’s 2026 coverage. Sidley Austin’s December 2025 analysis confirms firms must substantiate AI capability claims during exams.

Structurally, this means the schema layer carries the marketing claim, and the schema layer is friendlier to compliance than the marketing-page paragraph. A Person schema block with name, jobTitle (“Certified Financial Planner”), hasCredential (CFP license number, NAPFA membership ID, CFA charter), and worksFor linking to the firm’s Organization entity is the compliant equivalent of a marketing claim. Every field is verifiable; a FINRA principal can sign off in under five minutes because nothing is interpretive.

What does not work is performance-implied language. “We outperformed the market by 4% last year” triggers principal disapproval and SEC exam attention. “Specializing in retirement income planning for federal employees with TSP rollover events” is structural and survives review. The compliance officer disapproves the first one; the AI engine cites the second one. The interests align.

The third vector is testimonial use. The SEC Marketing Rule permits testimonials with proper disclosure as of 2021, but FINRA 2210 still requires fair-and-balanced presentation and pre-approval. Review schema with AggregateRating is functionally a testimonial system — verified-client reviews collected through a compliant intake, marked with author, datePublished, and required disclosures. The conservative path on a multi-state RIA is to lean harder on hasCredential and editorial citation than on Review schema. The MIT professor commentary published in CNBC April 2026 is the credible-skeptic frame worth quoting alongside the buyer-adoption stats — it signals you are aware of the AI-financial-advice risk literature, which Claude weights heavily.

The Wave-1 weekly data drop for advisors

There is no public per-vertical citation share study for financial advisors in 2026. The 5W AI Platform Citation Source Index 2026, published May 1, is cross-vertical only. Edge Partners has published prompt-pattern analysis. Wealthtender has published buyer-adoption surveys. Schwab has published RIA-side adoption data. Nobody has run the actual measurement: across 25 fiduciary prompts, what percentage of citations go to NAPFA versus XYPN versus LinkedIn versus GBP versus BrokerCheck versus SmartAsset versus Zoe Financial versus named firm domains?

The methodology that ports cleanly from adjacent verticals is the FlyDragon real-estate model — 65+ prompts run across ChatGPT, Claude, Perplexity, and Google AIO, cited sources parsed and aggregated by domain. That methodology produced the “91% of agents are invisible” headline that anchors the analogous local-services arbitrage in real estate. Run on the 25 advisor prompts above, it produces the canonical 2026 advisor reference. ConnectEra’s Wave-1 Wednesday data drop runs this weekly across rotating metros — Austin, Boston, Chicago, Denver, Phoenix, San Francisco — and publishes the per-metro citation share map.
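The aggregation step of that methodology is simple to sketch: collect the URLs each engine cites per prompt, then tally share by domain. The URLs below are hypothetical sample data, not measured results:

```python
from collections import Counter
from urllib.parse import urlparse

# Sketch of the citation-share aggregation step. The cited URLs here are
# hypothetical sample data standing in for per-prompt, per-engine output.
cited_urls = [
    "https://www.napfa.org/find-an-advisor",
    "https://www.napfa.org/profile/123",
    "https://connect.xyplanningnetwork.com/advisor/456",
    "https://www.linkedin.com/in/example-advisor",
    "https://brokercheck.finra.org/firm/789",
]

# Normalize to a bare domain (strip "www.") and count occurrences.
domains = Counter(urlparse(u).netloc.removeprefix("www.") for u in cited_urls)
total = sum(domains.values())
for domain, count in domains.most_common():
    print(f"{domain}: {count / total:.0%} citation share")
```

Run across 25 prompts, four engines, and six metros, the same tally produces the per-metro citation share map; the only real engineering is in the collection layer, not the math.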

The Wave-1 timing is closing. Once the first per-vertical citation share study lands publicly, the canonical reference is set; subsequent studies cite the first one. The firm that publishes the first measurement collects the citation share that comes from being the cited reference. The window is months, not years.

Where this fits in the broader playbook

The hub-up read is the advisor playbook hub, which sequences the 25 prompts, the directory stack, the FINRA frame, and the Wave-1 data-drop methodology into the full vertical playbook. The lateral on the directory layer is the prompts that pull from these directories — NAPFA, XYPN, LinkedIn, GBP, and BrokerCheck stacked with the per-directory optimization checklist.

The analogous prompt-set for accountants and the analogous local-services arbitrage in real estate are the two siblings to read alongside this one. The pattern is consistent: the firm that shows up to the directory layer first, publishes the first measurement, and engineers the entity graph correctly collects the citation share that compounds for the next two to three years. Advisors are the highest-LTV vertical in the cohort because the per-win arithmetic — $50,000 per year per $5M client, held for 7 to 10 years — makes one citation worth six figures.

Run a ConnectEra GEO audit on your RIA site

Frequently asked questions

Where do these 25 prompts actually come from?
Two sources. The first is Edge Partners' 2026 walkthrough of what happens when a $5M prospect asks ChatGPT for a financial advisor in their city, which seeded prompts like 'good fiduciary advisor in Austin for someone with $3M in retirement assets' and 'wealth manager in Chicago who specializes in equity compensation.' The second is Wealthtender's 2026 study of 500 high-income adults, which surfaced 'Best fee-only CFP for physicians in Boston' as a recurring prompt pattern and confirmed that 1 in 4 high-income adults plan to use ChatGPT or Gemini to find their advisor. The remaining prompts come from the broader 2026 advisor-marketing corpus and Reddit threads in r/personalfinance and r/financialindependence — the advisor-finding subreddits that ChatGPT cites at roughly 3% per Profound's February 2026 study.
Are NAPFA and XYPN the only directories ChatGPT cites?
They are the two directories explicitly named in 2026 advisor visibility analyses. Edge Partners 2026 confirms NAPFA's Find an Advisor and XY Planning Network's adviser search as ChatGPT and Gemini citation surfaces for fee-only and fiduciary recommendations. LinkedIn headlines are also confirmed — Wealthtender 2026 reports that a profile reading 'Fee-only CFP in Boston Specializing in Physicians' beats a generic 'Wealth Manager.' Google Business Profile reviews are confirmed. CFP.net, BrokerCheck, SmartAsset SmartAdvisor, and Zoe Financial appear in advisor-marketing literature but no 2026 study has yet measured their specific citation share for advisor prompts. They are hypotheses to test, which is the Wave 1 weekly data drop ConnectEra is running.
Can I publish prompt-targeted content under FINRA Rule 2210?
Yes, but only the entity-graph and authority-signal layers. FINRA Rule 2210 governs all communications with the public, including content engineered for AI chatbot retrieval. The FINRA 2026 Regulatory Oversight Report released December 2025 explicitly names GenAI supervision as a focus area, and SEC 2026 examination priorities published in Q1 include AI compliance. The compliant path is structured data, credentials, fiduciary disclosures, and bio pages that AI engines can lift verbatim. The non-compliant path is anything that implies a performance promise or names a client without written consent. Every page we engineer for an RIA is built to survive a principal pre-approval review before it goes live.
How fast does a $5M AUM prompt get filled in 2026?
The retrieval cycle is fast and the citation cycle is slow. Bing rebuilds its index every 24 to 72 hours and ChatGPT's retrieval memory cycles at roughly the same cadence, so structural changes — schema, canonical fixes, NAPFA profile completion, LinkedIn headline edits — are visible to crawlers within a week. The citation lift in the answer engines themselves typically takes 6 to 12 weeks because retrieval relies on cumulative authority signals across NAPFA, XYPN, LinkedIn, BrokerCheck, and named firm pages. Long-tail prompts like 'fee-only CFP for physicians, Boston' fill first; metro-level head terms compound over a quarter. The MIT professor commentary from CNBC April 2026 is the credible-skeptic frame; the buyer adoption is moving faster than the academic warning.

Written by

Billy Reiner · Founder · ConnectEra

Billy builds AI-citable sites for practices, advisors, and B2B SaaS. Over 80 migrations in the last 18 months — every one with a live audit, a fixed price, and a 7-day rebuild.

When you're ready

Ready to be the page ChatGPT cites?

Tell us where your site is at. You get back your free growth plan — your platform blocker, your industry's citation gap, and the next move. Yours to keep, whether you hire us or not.

Get my free growth plan

Your free growth plan

Tell us where your business is at.
You get back your free growth plan — yours to keep, whether you hire us or not.