The dashboard says the channel does not exist. The pipeline says deals are closing from sources nobody can name. Sales asks marketing where the leads are coming from; marketing asks sales how they keep arriving pre-qualified. Neither has an answer. The buyer is two steps ahead of the funnel that was built for them.
This is what the AI buyer journey looks like in 2026. The first half of the journey moved into the chat window — out of Google, out of the SEO playbook, out of the analytics suite that was supposed to track it — and the page that finally gets the click is the page built for the visitor who never read the chat window first.
What is the AI buyer journey in 2026?
The AI buyer journey is the path B2B buyers now take through ChatGPT, Claude, Perplexity, or Gemini before contacting a vendor. Per 6sense 2026, 70% of the decision journey is completed before first form-fill. Per PR Newswire March 2026, 73% of B2B buyers use AI in research, 69% switch vendor based on AI guidance, and 33% buy from a vendor they did not know existed before AI surfaced it. The buyer arrives pre-qualified or does not arrive.
The structural difference is not preference; it is sequence. The Google buyer types a keyword and lands on a comparison page to begin research. The AI buyer reads the recommendation, the reasoning, and the alternatives in the chat window, then clicks through to confirm. Same vendor, same page, two completely different cohorts. The Google buyer arrives at stage one of the funnel. The AI buyer arrives at stage four.
Why 70% of B2B decisions are pre-form-fill in 2026
The 70%-pre-form-fill number comes from 6sense’s 2026 buyer-behaviour study, surfaced in Deep Marketing’s March 2026 analysis of multi-source B2B research. It is the load-bearing number for the rest of this article. If only 50% of the decision happened pre-form-fill, the Google playbook would hold with a few tweaks. At 70%, it breaks.
Three datasets independently corroborate the shift. G2’s Answer Economy Report, published April 15, 2026 from a sample of 1,076 B2B software buyers, found that 51% now start research with AI chatbots more often than with Google — up from 29% just eleven months prior. 71% rely on AI chatbots somewhere in the software research process. 93% say AI chatbots have fundamentally changed how they research. 86% increased their AI chatbot usage in the past year alone. Forrester’s 2026 Buyer Insights raises the headline number further: 94% of B2B buyers use ChatGPT, Perplexity, or Gemini somewhere in the path to a vendor shortlist. Gartner reports that 68% of enterprise deals closed in 2025 had at least one generative-search touchpoint in the buyer journey.
The slope of those numbers matters as much as the levels. G2’s 51% is up from 29% in May 2025 — a 22-point gain in eleven months. Whatever the exact percentage on the day a strategy is reviewed, it will be higher by the next quarter. The window for a marketing motion that ignores AI as a discovery channel closed sometime in late 2025.
The implication for measurement is not subtle. If 70% of the decision is finished before any analytics property records a touch, then GA4 sees the wrong session as the “first touch” by definition — it sees the verification visit, not the discovery moment that happened in the chat window. This is the same gap the dashboard blind spot covers in detail, and it is why most CMOs in 2026 are looking at attribution that under-reports AI by 60-70%.
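The misattribution described above can be sketched as a toy model. The journeys, channel names, and counts below are illustrative placeholders, not figures from any of the cited datasets; the point is only that a session-based first-touch report structurally assigns zero credit to a discovery touch that never produces a recorded session.

```python
# Toy model (hypothetical journeys): each buyer has a hidden discovery
# touch in a chat window and a visible on-site session that a tool like
# GA4 records as the "first touch".
journeys = [
    {"discovery": "chatgpt",    "recorded_session": "direct"},
    {"discovery": "perplexity", "recorded_session": "organic"},
    {"discovery": "chatgpt",    "recorded_session": "direct"},
    {"discovery": "google",     "recorded_session": "organic"},
]

def first_touch_report(journeys):
    """What session-based analytics credits: the first *recorded* channel."""
    report = {}
    for j in journeys:
        channel = j["recorded_session"]  # the chat-window touch never hits analytics
        report[channel] = report.get(channel, 0) + 1
    return report

def ground_truth(journeys):
    """Where discovery actually happened, per the full (unobservable) journey."""
    truth = {}
    for j in journeys:
        truth[j["discovery"]] = truth.get(j["discovery"], 0) + 1
    return truth

print(first_touch_report(journeys))  # AI channels receive zero credit
print(ground_truth(journeys))
```

In this sketch, three of four journeys start in an AI chat window, yet the report credits only "direct" and "organic" because those are the sessions the analytics property can see.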
Why does AI traffic convert bimodally on the same page?
A page built only for Google organic traffic typically converts at 3-5%. The same page hit by AI traffic converts at either 11%-plus or 1%-2% — there is no middle. The high-converting AI cohort arrives pre-qualified and finds the answer they expected; the low-converting cohort arrives pre-qualified, finds a discovery-stage page, and bounces immediately. Variance is bimodal because the buyer already has an answer when they click.
This is the central operational point. The 31% blended conversion premium that Search Engine Land verified in 2026 — 1.81% for ChatGPT-referred ecommerce traffic vs 1.39% for non-branded organic — is an average across two very different distributions. Pages that match the AI buyer’s stage-four intent see AI cohorts converting at 11.4% globally per The Stacc 2026, and per-engine session value of $4.56 (Claude) / $3.12 (Perplexity) / $2.34 (ChatGPT) per Metricus’s 2026 Shopify data. Pages that mismatch the intent see AI cohorts dropping into the floor of the distribution, sometimes below the Google baseline. The blended average is the artifact, not the truth.
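The blended-average artifact is easy to see in the arithmetic. Using the article’s two modes (roughly 11.4% for matched pages and 1-2% for mismatched, taking 1.5% as a midpoint) and an assumed cohort split — the split percentages below are hypothetical — the blended rate moves across a wide range without any individual page ever converting at the blended number:

```python
def blended_rate(high_rate, low_rate, high_share):
    """Conversion rate of a traffic mix of two cohorts (weighted average)."""
    return high_rate * high_share + low_rate * (1 - high_share)

# Modes from the article; the share of traffic hitting matched pages is assumed.
high, low = 0.114, 0.015
for share in (0.05, 0.25, 0.50):
    print(f"{share:.0%} matched -> blended {blended_rate(high, low, share):.2%}")
```

With 5% of traffic on matched pages the blend sits near 2%; at 50% it sits near 6.5%. No page in the model converts at either blended figure, which is the sense in which the average is the artifact.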
For the conversion math defended end-to-end, with the per-engine session-value spread and the methodology critique, see the dedicated piece. For the page rebuild that flips a Google-shaped page into a page that captures both cohorts, see the nine-element AI landing-page rebuild.
The 33% who pick a vendor they did not know before AI surfaced it
This is the number that should change quarterly planning. Per PR Newswire’s March 2026 release, 33% of B2B buyers purchase from a vendor they were not familiar with before the AI surfaced it. One-third of pipeline now originates from a chat-window introduction the prospect had never heard of in any other channel. No newsletter touched them. No retargeting reached them. No salesperson dialed them. The AI named the vendor; the buyer trusted the recommendation; the deal closed.
The 33% pairs with the 69% number from the same dataset: 69% of B2B buyers choose a different vendor than they initially planned, based on AI chatbot guidance. Read together, the two numbers describe a market where chat-window citation is now the single largest pre-pipeline determinant of vendor selection. A decade of brand spend gets reweighted at the moment a ChatGPT answer lists three vendors and recommends one.
This is also the mechanism behind the verticals where 87% to 91% of practitioners are functionally invisible. Per FlyDragon’s 2026 real-estate benchmark, 91% of US agents are effectively invisible in AI search; the top 1% capture 47% of citation share. Per the 5W HVAC & Plumbing Visibility Index Q1 2026, 87% of independent HVAC and plumbing contractors have zero AI citation share in their own metro. Per Common Mind’s 2026 State of AI Visibility report, 44% of B2B SaaS companies are functionally invisible to AI buyers. The 33% number is what flows toward the cited 1%.
The competitive frame is unforgiving. If a vendor is not in the AI’s answer set, the 33% never see them — and the 69% who switch are switching toward the cited shortlist, not toward the absent. A site that does not get cited is not under-performing on AI traffic; it is structurally absent from one third of the available pipeline.
Why the Google buyer journey playbook fails AI buyers
The Google buyer journey playbook was built around a buyer who arrives early. The discovery-stage hero, the pain-point framing, the gated lead magnet, the multi-touch nurture — every component assumes the visitor needs to be educated into the category before being asked to convert. The page is patient. It teaches before it sells.
The AI buyer is the opposite. They arrive at the page already taught. ChatGPT presented the category, named the alternatives, summarised the trade-offs, and recommended a vendor. The buyer clicks to verify the recommendation, not to begin learning. A discovery-stage hero reads as a category they already know. A pain-point opener reads as a problem they already accepted. A gated lead magnet reads as a friction the AI did not warn them about. The page that converted Google traffic at 4% loses the AI cohort to the bounce.
The mismatch shows up in three load-bearing places.
Page-zero intent. Google-shaped pages open with the question; AI-shaped pages have to open with the answer. The buyer arrived because the AI answered a question — putting the question back on the page restarts a journey the buyer already completed. The above-the-fold answer capsule that confirms the AI’s recommendation in the buyer’s own language is what closes the verification loop.
Query specificity. Google traffic clusters around short, generic, head-term queries because that is what Google rewards. AI traffic clusters around long, narrative, conversational queries because that is what people actually ask in chat. The same site receives “best CRM” from Google and “best CRM for solo financial advisors managing $50M-$200M AUM with light compliance overhead” from Perplexity. The page that ranks for the head term loses the long-tail buyer to the next site that mirrors the conversational query.
Conversion timeline. The Google buyer journey runs over weeks — multi-touch, nurture-driven, lead-magnet-gated. The AI buyer journey runs in a single session. They saw the answer ten minutes ago, clicked through, scanned for confirmation, and either booked a call or left. There is no second visit. There is no nurture sequence. The conversion has to happen in the verification window or it does not happen at all. This is also the dynamic underneath the Loamly RFC 9421 attribution layer — the cryptographic signature is what proves the chat-window-to-conversion path inside one session, since GA4 cannot see the discovery half.
What changes about the page when the buyer is pre-qualified?
Three above-the-fold changes. The hero opens with the answer the AI already gave (verification, not discovery). Proof moves above the fold (logos, named results, hasCredential entities) so the verification check resolves in the first scroll. The form matches stage four (booking, pricing-direct, free-trial-direct) instead of stage one (newsletter, ebook). The page below the fold can still serve the Google buyer — most of the work is in the first viewport.
What changes about the landing page when the buyer is pre-qualified
The page that converts both cohorts on one URL has a clear shape. The above-the-fold is rebuilt for the AI buyer; the below-the-fold continues to serve the Google buyer. Most of the structural work happens in the first viewport.
The opener is the answer capsule that earned the citation in the first place. A 40-60 word block that confirms the AI’s recommendation in compatible language. The format compounds: the same capsule that lifts citation rate per Search Engine Land’s 2026 finding is the block that resolves verification fastest for the buyer who arrived from the citation. One unit of work, two outcomes.
Proof moves up. Named clients, named credentials, named results. The AI buyer is verifying that the recommendation was correct; the proof block has to satisfy that check inside the first scroll. The Google buyer reads the same proof later, after the educational frame, and the page still works for them. Hoisting proof up does not break the discovery cohort; it unlocks the verification cohort.
The form matches the stage. A newsletter signup or ebook download is a stage-one conversion designed for the Google cohort. A pre-qualified AI buyer reads it as a step backward and bounces. The form that pairs with stage-four intent is a booking form, a pricing-direct page, or a free-trial-direct path that does not gate the next step behind a content exchange. For high-LTV B2B verticals, the booking-form variant typically lifts AI conversion from sub-2% into the 8-15% range without measurable damage to Google conversion, because Google buyers who get to the bottom of the page convert on the same form anyway.
The trust signals match the AI’s citation criteria. Schema completeness, named entities, hasCredential, organization-level sameAs links. Per Growth Marshal’s February 2026 study, pages with full attribute population get cited at 54.2% vs 31.8% for sparse schema — a 22-point gap inside a DR-≤-60 sample. The same entities that lift citation rate are the ones the buyer scans for trust. The page does double duty: it earns the next AI citation while it converts the buyer who arrived from the last one.
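The entity markup described above can be sketched as JSON-LD. Every name, URL, and credential below is a placeholder for illustration — the shape (an `Organization` carrying `sameAs` entity links and a `hasCredential` entry) is the part that corresponds to the attribute-population pattern the Growth Marshal study measured:

```python
import json

# Hypothetical vendor entity; all values below are placeholders.
org = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Vendor",
    "url": "https://www.example.com",
    "sameAs": [  # organization-level entity links to external profiles
        "https://en.wikipedia.org/wiki/Example_Vendor",
        "https://www.linkedin.com/company/example-vendor",
        "https://www.g2.com/products/example-vendor",
    ],
    "hasCredential": {  # a named credential the buyer (and the AI) can verify
        "@type": "EducationalOccupationalCredential",
        "credentialCategory": "certification",
        "name": "SOC 2 Type II",
    },
}

# Emit the block that goes inside a <script type="application/ld+json"> tag.
print(json.dumps(org, indent=2))
```

The same block serves both audiences the article names: the crawler parsing attributes for citation, and the buyer scanning the rendered page for the named entities behind them.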
The vendor-shortlist citation pipeline
The shortlist is the operational unit of the AI buyer journey. ChatGPT, Claude, and Perplexity build it inside the chat window — three to seven named vendors, ranked, with reasoning — and the buyer treats it as the working set. Per G2’s 2026 Answer Economy Report, the top citation sources influencing the shortlist are AI chatbots themselves at 54%, software review sites at 43%, market research firms at 36%, vendor sites at 36%, peers at 33%, and independent forums at 30%. 45% of B2B buyers say review-site citations are the most confidence-inspiring AI signal.
The pipeline is the chain that decides whether a vendor lands on the shortlist or stays out. Three components carry most of the weight. Wikipedia continues to be the #1 ChatGPT citation source — Profound’s February 2026 update revised it to ~5% per-citation and ~18% per-conversation; the 5W AI Citation Source Index May 2026 puts the cross-vertical share for top-15 domains at 68% of all citations. Review sites moved up after G2 acquired Capterra, Software Advice, and GetApp from Gartner on February 5, 2026, consolidating the citation surface; the combined entity now reaches over 200 million annual buyers and over 6 million verified reviews. Per SE Ranking’s late-2025 study, G2/Capterra carries roughly a 3× citation lift in AI Overviews.
For B2B SaaS specifically, the citation pipeline that decides the shortlist operates at vertical-niche resolution: health-tech, legal-tech, fintech each have their own shortlist mechanics. The generic SaaS layer is heavily contested by 2026 — Discovered Labs, AuthorityTech, Foundation Inc., Powered by Search, and Metricus all run AI-visibility plays at the generic level. The vertical sub-niches are still open arenas where no GEO agency leads.
The pipeline shape — Wikipedia entity, review-site presence, named-credential entity graph, FAQPage schema, freshness-marked own-domain content — is the same shape that earns citation in every vertical. What changes is the directory mix and the regulatory layer. The AI buyer’s shortlist is downstream of those pieces. If they are missing, the vendor never appears in the chat window’s named set, and the 33%-net-new and 69%-switching pools route around them.
The AI buyer journey is not a refinement of the Google buyer journey. It is a different journey, with different stages, a different page zero, different conversion timelines, and a different infrastructure dependency. 70% of the decision is over before the first form-fill. 73% of buyers use AI to do that decisioning. 69% switch vendor based on what the AI says. 33% buy from a vendor they had not heard of before the AI named them.
The Google playbook was built for a buyer who needed to be educated into a category before being asked to convert. The AI buyer arrives educated. The page that wins them is the page that matches stage four — answer-capsule-led, proof-first, booking-direct, schema-complete — on the same URL that still serves the Google cohort below the fold.
If you want to see the conversion math, the dashboard blind-spot, the page rebuild, and the attribution layer end-to-end, the conversion pillar pulls the full stack together. If you want a 30-minute audit of where your site sits in the AI buyer’s shortlist for your own vertical — and where the citation pipeline is broken — book a call.