Citation Labs partnered with Eric Van Buskirk and the team at Clickstream Solutions to study how people build shortlists close to a buying decision, and how that process differs between Google AI Mode and Google’s standard Search experience.
The study observed 48 U.S.-based participants completing 185 major-purchase tasks across four categories: televisions, washer/dryers, teen-driver insurance, and high-end laptops for small businesses. Participants used either Google AI Mode or Google’s standard Search experience to narrow options to a shortlist — not to make a final purchase decision.
The findings point to a meaningful shift in how buyers compare options, form trust, and narrow the field, with clear implications for which brands get seen, cited, and considered.
Teams responsible for brand visibility face a tougher environment as AI answers absorb more of the buying journey and search traffic gets harder to win.
If Google now shapes the shortlist, trust, and brand inclusion earlier in the process, visibility work has to start before the click.
Note: For SEOs, this standard Search condition reflects Google’s blended results environment, where AI Overviews and other SERP features may still play a role in the path.
AI Mode Now Shapes The Shortlist Before The Click
AI Mode changes shortlist formation by compressing much of the comparison process into the answer itself.
In the study, 74% of AI Mode shortlists came straight from the AI output, and 64% of participants clicked nothing before naming finalists.
| Task | No-Click Rate | n |
|---|---|---|
| Insurance | 85% | 27 |
| Television | 67% | 46 |
| Laptop | 69% | 29 |
| Washer/Dryer | 47% | 47 |
| All AI Mode | 64% | 149 |
The shortlist often forms inside Google before a buyer ever reaches a brand site.
In AI Mode, participants often read the response, checked an inline card or snippet if needed, and moved forward with the set Google had already framed. In standard Search, participants more often clicked out, compared options, and assembled the shortlist across multiple sources (even though AI Overviews could still shape part of the experience).
AI Mode sped up research and reduced the amount of independent comparison that many buyers felt they needed to do.
For marketers, this shifts the competitive moment upstream.
The old assumption was that buyers clicked through sources, compared options, and then decided. The data shows a different path in AI Mode.
Google often supplies the candidate set, shapes the framing, and keeps the buyer inside that set even when they do click.
This raises the importance of appearing in AI-generated shortlists for bottom-of-funnel (BOFU) queries, because brands that don’t appear may never reach the comparison stage at all. It also weakens the idea that standard rankings and downstream click opportunities tell the whole story.
In AI Mode, inclusion on the shortlist matters more than traffic.
AI Answers Frame The Decision
Trust forms differently in standard Search and AI Mode because buyers rely on different signals to determine what feels credible.
In AI Mode, Google’s framing plays a much larger role in the decision. The study found that the way AI described and structured the options shaped the choice in 48% of AI Mode tasks, compared with 6% in Search.
In standard Search, trust more often came from corroboration across sources. Multi-source convergence showed up in 37% of Search tasks and only 4% of AI Mode tasks.
Standard Search users build confidence by checking across sources. AI Mode users place more weight on how Google assembles and explains the options in front of them.
That shift changes where buyers build shortlists.
In standard Search, the buyer still does more of the validation work by opening pages, comparing claims, and looking for consistency. In AI Mode, much of that shortlist formation happens before the click. If Google frames an offer clearly, aligns it to the buyer’s constraints, and reinforces familiar signals, it can carry the shortlist forward without much outside triangulation.
For marketers, that raises the value of clearer product language, stronger positioning, and stronger reinforcement across the sources AI is likely to retrieve and synthesize.
It also sharpens why citation optimization matters.
If AI systems build answers from recurring third-party signals, then editorial presence, citation strength, and consistent framing across crawlable sources become part of BOFU performance.
They influence whether a brand gets described clearly, trusted quickly, and considered before a buyer ever reaches the site.
AI Visibility And Rank Now Shape Brand Choice
AI Mode narrows the field before many buyers begin a true comparison.
The study found that brands missing from the AI output often didn’t enter consideration at all. No participant named a finalist that was absent from the AI shortlist, and participants didn’t exhibit behavior that would meaningfully broaden the set beyond that shortlist.
Buyers didn’t break out of the shortlist to standard Search for additional options, and even brands that did appear could get dismissed quickly if the name felt unfamiliar.
In practice, that means many brands don’t lose because buyers compare them and reject them. They lose because buyers never meaningfully see them in the first place.
Rank still matters inside that shortlist, but it works differently from rank in standard Search.
In AI Mode, participants chose the top-ranked item in 74% of tasks, and chose something ranked third or lower in only 10%.
| Measure | AI Mode (n=137) |
|---|---|
| Chose rank 1 | 74% |
| Mean rank of first choice | 1.54 |
| Chose rank 3+ | 10% |
| Rank override rate | 26% |
| Of overrides: still chose an AI-recommended option | 81% |
But when people did override rank, they usually stayed inside the AI’s set. In 81% of those override cases, participants still chose a different AI-recommended option rather than looking outside the shortlist.
That makes inclusion the first threshold, with position and brand recognition shaping who wins inside the set.
That creates a measurement problem for marketers.
If absence from the AI shortlist means absence from consideration, teams need to track which brands appear for BOFU prompts, where they appear, and how Google frames them before traffic ever reaches the site.
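Tracking that kind of presence can be reduced to two simple metrics per brand: how often it appears in AI shortlists for a sampled set of BOFU prompts, and where it ranks when it does appear. Below is a minimal sketch of that calculation; the brand names, sample shortlists, and `shortlist_visibility` function are illustrative assumptions, not part of the study's tooling. Real input would come from logging the ordered brand lists that AI Mode returns for your prompt set.

```python
from collections import defaultdict

# Hypothetical sample: each record is the ordered brand shortlist an AI
# answer returned for one bottom-of-funnel (BOFU) prompt.
shortlists = [
    ["LG", "Samsung", "Whirlpool"],
    ["LG", "Electrolux", "Samsung"],
    ["Samsung", "LG", "GE"],
    ["LG", "Whirlpool", "Electrolux"],
]

def shortlist_visibility(shortlists):
    """Per-brand inclusion rate and mean rank across sampled AI shortlists."""
    ranks = defaultdict(list)
    for shortlist in shortlists:
        for position, brand in enumerate(shortlist, start=1):
            ranks[brand].append(position)
    total = len(shortlists)
    return {
        brand: {
            # Share of sampled prompts where the brand appears at all.
            "inclusion_rate": len(r) / total,
            # Average position in the shortlist when it does appear.
            "mean_rank": sum(r) / len(r),
        }
        for brand, r in ranks.items()
    }

stats = shortlist_visibility(shortlists)
print(stats["LG"])       # appears in all 4 shortlists, mostly at rank 1
print(stats["Samsung"])  # appears in 3 of 4, at a lower average position
```

Inclusion rate is the threshold metric here: a brand with a rate near zero never enters the comparison, regardless of how well its site would convert the click.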
Xofu visibility data points in the same direction in several categories (with strong alignment in television and washer/dryer tasks, moderate alignment in laptops, and a weaker fit in driver insurance).
| Brand | Rank (Our Results) | Count (Our Results) | Rank (Xofu Results) |
|---|---|---|---|
| LG | 1 | 32 | 1 |
| Samsung | 2 | 13 | 2 |
| Electrolux | 3 | 10 | 6 |
| Whirlpool | 4 | 8 | 4 |
| Amana | 5 | 1 | NA |
| GE | 5 | 1 | 3 |
In plain terms, the brands buyers chose were often the brands already most visible in the recommendation environment.
We explain what the above findings mean in more detail here.
This shifts the job from measuring clicks after consideration to measuring whether your brand gets considered at all.
The Click Now Serves Verification More Than Discovery
The click still matters, but it plays a different role in AI Mode.
Across all AI Mode tasks, only 23% involved at least one visit to an external site. In standard Search, that figure was 67% (for the laptop and insurance shortlisting tasks).
When people clicked out of AI Mode, they usually did so to verify candidates already under consideration. They visited retailer sites, manufacturer pages, and price-comparison tools mainly to confirm details rather than widen the field.
In standard Search, external visits more often supported discovery and comparison.
That changes what marketers looking to improve visibility need to prioritize.
Standard Search still supports discovery, and strong product pages, comparison pages, and external reviews still matter. But the click no longer marks the start of consideration as often as it used to.
Now, more of that work happens before the visit—inside Google’s recommendation layer.
This increases the value of off-site visibility.
Editorial coverage, review presence, and comparison listings still matter, but more as reinforcement than introduction. They help confirm what AI has already surfaced, close visibility gaps, and strengthen inclusion over time rather than simply driving top-of-funnel awareness.
The supporting sources buyers encounter after the shortlist forms still shape whether the AI’s recommendation holds up.
Brand Visibility Starts Before The Click
The click is no longer the start of consideration.
The data shows that consideration often forms earlier, within Google’s recommendation layer, where shortlist inclusion, framing, and third-party reinforcement shape who gets seen and trusted in the first place.
Standard Search still matters, but it no longer controls the full path to consideration.
That changes what marketers need to measure.
Teams need to know whether they appear in AI shortlists for the BOFU prompts that matter, where they appear, how Google frames them, and which outside signals reinforce or weaken that presence.
That makes two jobs more important: tracking shortlist visibility across BOFU prompts, and improving the third-party signals that help AI systems surface and reinforce a brand.
Read the full report for the complete methodology, data, and category-level findings.


