Impact of AI on Search Click-Through Rates in 2026

AI Writing · ai overviews, brand demand, query intent, serp volatility, zero click search
Ivaylo

March 18, 2026

Our analytics stopped making sense the week Google started answering questions on the SERP.

We were looking at the same keywords we’d held for years, the same rankings, the same content, and yet the clicks were just… gone. Not down because we “lost position.” Down because the page never got a chance. That’s the real impact of AI on search click-through rates in 2026: the answer is increasingly on Google’s page, not yours, and your blue link is now a “maybe, if they still care.”

A few numbers are doing a lot of work here. Across a sample of 300,000 searches, Ahrefs measured an average 34.5% drop in CTR for organic links when an AI Overview is present. Pew found traditional result clicks dropping from 15% without an AI summary to 8% with one, and that people click links inside the summary itself only about 1% of the time. Pew also found sessions end more often after AI-summary pages: 26% vs 16% on pages without summaries.

That’s not a tweak. That’s the click economy changing shape.

The new click economy in 2026: when the answer sits on the SERP

If you need a mental model that works under pressure, use this one: informational queries are slowly becoming “impression-first,” not “visit-first.” The SERP is doing the work your page used to do.

The easy mistake is to assume AI Overviews are still rare and therefore not worth restructuring reporting or content plans around. Pew put AI summaries at 18% of Google searches by March 2025. Semrush data reported by Digiday (US desktop) showed growth from 6.49% in January 2025 to roughly 20% by April/May 2025, with a plateau around that level. seoClarity (also reported by Digiday) had AI Overviews appearing in about 19% of tracked US keywords in June 2025.

Those numbers vary because the datasets differ. The direction doesn’t.

One sentence on the main friction here: citations inside the overview do not “give the traffic back” if users rarely click those citations.

Measuring the impact of AI Overviews on CTR without lying to yourself

This is where teams fail. We’ve watched smart people pull a single Google Search Console export, see CTR down, and decide it’s “content quality” or “seasonality.” Then they change titles, rewrite intros, and blame the writers. Meanwhile the real variable is that Google started showing an AI Overview on half the query set they’re looking at, but only on certain days, devices, and phrasing variants.

What trips people up is that AI Overviews are not a stable “SERP feature on/off switch.” They appear, disappear, and sometimes shift position. The same keyword can behave differently across:

  • device (desktop vs mobile)
  • location
  • query wording (singular vs plural, question vs fragment)
  • time (Google tests layouts constantly)

If you blend impressions where AI Overviews were present with impressions where they weren’t, your CTR becomes an average of two different worlds. Then everyone argues about the wrong thing.

The measurement design we trust: query-level holdouts, not sitewide vibes

When we want to know if AI Overviews are hurting a set of pages, we do not start with “pages.” We start with queries, because AI Overviews trigger on queries.

Here’s the operational framework that has survived real audits for us.

First, pull queries from GSC for the pages or directories you care about, ideally covering 8 to 12 weeks. You need enough volume to avoid getting tricked by noise.
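
Here’s roughly what that pull looks like through the Search Console API with google-api-python-client. Treat it as a sketch, not our exact script: the site URL, date range, and page filter are placeholders, and `creds` is assumed to be an already-authorized OAuth credential object.

```python
# Minimal sketch: pull query-by-date rows from the Search Console API.
# Assumes google-api-python-client is installed and `creds` holds
# authorized OAuth credentials. All values below are placeholders.
from googleapiclient.discovery import build

def pull_gsc_queries(creds, site_url, start_date, end_date, page_filter):
    service = build("searchconsole", "v1", credentials=creds)
    request = {
        "startDate": start_date,          # e.g. "2026-01-01"
        "endDate": end_date,              # e.g. "2026-03-15" (~10 weeks)
        "dimensions": ["query", "date"],  # query-day granularity
        "dimensionFilterGroups": [{
            "filters": [{
                "dimension": "page",
                "operator": "contains",
                "expression": page_filter,  # e.g. "/blog/"
            }]
        }],
        "rowLimit": 25000,
    }
    response = service.searchanalytics().query(
        siteUrl=site_url, body=request
    ).execute()
    return response.get("rows", [])
```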

Then, annotate each query-day (or query-week if you have to) with whether an AI Overview appeared. This is the annoying part because GSC does not give you a clean “AIO present” dimension. You either:

  • buy SERP feature data from a provider that tracks AIO presence and position, or
  • run your own lightweight SERP capture for a sample (we’ve done this with headless Chrome, but you pay in maintenance), or
  • do a smaller manual audit for the top revenue queries to validate the direction before investing

We’ve blown this step before. On one project, our SERP capture looked “fine” until we realized the script was hitting a cached layout that hid the overview until a scroll event. Two days of data, worthless. Petty stuff.
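
For the roll-your-own option, here’s the shape of such a capture script in Playwright, which is what we’d reach for today rather than raw headless Chrome. Everything fragile is flagged: the selector is a placeholder you’d have to discover and maintain yourself, the scroll exists because of the exact bug above, and check Google’s terms before running anything like this at volume.

```python
# Hedged sketch of a SERP capture check using Playwright (our swap-in
# for the headless Chrome setup described above; install with
# `pip install playwright && playwright install chromium`).
# AIO_SELECTOR is a PLACEHOLDER -- Google's markup changes constantly,
# so you must find and maintain the real selector yourself.
from urllib.parse import quote_plus
from playwright.sync_api import sync_playwright

AIO_SELECTOR = "[data-aio]"  # placeholder, NOT a real stable selector

def aio_present(query: str) -> bool:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(f"https://www.google.com/search?q={quote_plus(query)}")
        # Scroll before checking: some layouts only render the overview
        # after a scroll event (the bug that cost us two days of data).
        page.mouse.wheel(0, 600)
        page.wait_for_timeout(1500)  # crude settle time; tune for your setup
        found = page.locator(AIO_SELECTOR).count() > 0
        browser.close()
        return found
```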

Now the key part: you create a holdout comparison using the same query when it appears under both conditions.

You’re not comparing Query A vs Query B. You’re comparing Query A on days when an AI Overview is present vs Query A on days when it isn’t.

That removes a ton of confounders, because the query intent and baseline demand are mostly stable. Not perfectly stable, but stable enough to be useful.
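
In pandas, the whole comparison fits in a few lines. This is a minimal sketch assuming a DataFrame with one row per query-day and an `aio_present` flag from whichever annotation source you chose above; the column names are ours, not a standard.

```python
# Query-level holdout sketch. Expects columns:
# query, date, impressions, clicks, aio_present (bool annotation).
import pandas as pd

def holdout_ctr(df: pd.DataFrame) -> pd.DataFrame:
    agg = (
        df.groupby(["query", "aio_present"])[["impressions", "clicks"]]
          .sum()
          .reset_index()
    )
    agg["ctr"] = agg["clicks"] / agg["impressions"]
    # One row per query, with-AIO and without-AIO CTR side by side.
    wide = agg.pivot(index="query", columns="aio_present", values="ctr")
    # False sorts before True; assumes both conditions exist in the data.
    wide.columns = ["ctr_no_aio", "ctr_aio"]
    wide["ctr_delta"] = wide["ctr_aio"] - wide["ctr_no_aio"]
    # Keep only queries observed under BOTH conditions.
    return wide.dropna()
```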

Minimum sample rules (so you stop overreacting)

Most teams don’t set any thresholds. They look at a 12-impression query and make a slide.

We require:

  • Both conditions must exist: the query must have impressions in an “AIO present” window and an “AIO absent” window.
  • Each condition needs enough impressions to make CTR meaningful. We usually start with a floor like 100 impressions per condition, then relax it only for high-value keywords we’re willing to inspect manually.
  • We don’t trust a query if it only shows AIO in one isolated week. That’s often a test bucket, not a durable SERP state.

You can pick different thresholds. The point is to pick them before you look at results.
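
Here’s how those rules look as code against the same query-day table. The 100-impression floor and the two-week minimum are our defaults, not universal constants; swap in whatever you pre-registered.

```python
# Pre-registered threshold checks for ONE query's rows
# (columns: date, impressions, aio_present). Thresholds are our
# defaults, not gospel -- pick yours before looking at results.
import pandas as pd

MIN_IMPRESSIONS = 100
MIN_AIO_WEEKS = 2

def passes_thresholds(query_days: pd.DataFrame) -> bool:
    with_aio = query_days[query_days["aio_present"]]
    without = query_days[~query_days["aio_present"]]
    if with_aio.empty or without.empty:
        return False  # both conditions must exist
    if (with_aio["impressions"].sum() < MIN_IMPRESSIONS
            or without["impressions"].sum() < MIN_IMPRESSIONS):
        return False  # enough volume per condition for CTR to mean anything
    # AIO seen in only one isolated week is often a test bucket,
    # not a durable SERP state.
    aio_weeks = (
        pd.to_datetime(with_aio["date"]).dt.isocalendar().week.nunique()
    )
    return aio_weeks >= MIN_AIO_WEEKS
```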

The KPI map: stop asking one metric to tell the whole story

Teams keep asking CTR to answer questions CTR can’t answer. CTR is a SERP behavior metric. Revenue isn’t.

We keep four buckets, because AI Overviews create “visibility without visits,” and that breaks single-metric reporting.

1) GSC CTR and clicks, segmented by AIO present vs absent. This tells you the raw cannibalization pattern.

2) On-SERP zero-click proxy. You can’t directly see “zero-click” in GSC. What you can see is: impressions stay flat or rise, average position stays similar, and clicks drop disproportionately when AIO is present. That’s your practical proxy (made explicit in the sketch after this list).

3) Session exits and shallow sessions from analytics, segmented by landing pages tied to high-AIO queries. Pew’s session abandonment numbers (26% vs 16%) are a reminder: even when users do click, they may be more “done” than they used to be.

4) Assisted conversions and brand search lift. This is the part stakeholders forget. If AI Overviews reduce clicks, you may still see downstream brand demand. But you only see it if you track it: branded impressions, branded clicks, direct traffic changes, and assisted conversion paths.
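
Bucket 2 is the one people get hand-wavy about, so here it is as an explicit check. A sketch, assuming you’ve widened the holdout aggregate to carry impressions and average position alongside CTR; all three tolerance values are assumptions you should tune against queries you’ve inspected manually.

```python
# Zero-click proxy sketch: flags queries where impressions and position
# hold steady but CTR falls when AIO is present. Expects one row per
# query with columns: impressions_aio, impressions_no_aio,
# position_aio, position_no_aio, ctr_aio, ctr_no_aio.
# The tolerances are assumptions to tune, not measured constants.
import pandas as pd

def zero_click_proxy(agg: pd.DataFrame,
                     impression_tol: float = 0.10,
                     position_tol: float = 1.0,
                     click_drop: float = 0.20) -> pd.Series:
    impressions_stable = (
        agg["impressions_aio"]
        >= agg["impressions_no_aio"] * (1 - impression_tol)
    )
    position_stable = (
        (agg["position_aio"] - agg["position_no_aio"]).abs() <= position_tol
    )
    clicks_down = agg["ctr_aio"] <= agg["ctr_no_aio"] * (1 - click_drop)
    return impressions_stable & position_stable & clicks_down
```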

Where this falls apart: attribution lag and messy user paths. A user reads an AI Overview, doesn’t click, and later types your brand. Your SEO dashboard calls that “brand,” not “search.” That is technically correct and strategically misleading.

AIO trigger risk forecasting: which queries are most likely to get squeezed

Once measurement is clean, the next question is selfish and practical: which queries are going to hurt?

Pew’s patterns are useful because they map to how people write “ask a question” searches.

  • Queries starting with who/when/why generated AI summaries about 60% of the time.
  • Searches with 10+ words produced AI summaries 53% of the time.
  • Short queries (1 to 2 words) produced summaries 8% of the time.

The common bad advice is “go after long-tail.” Long-tail still matters, but long, question-shaped long-tail is exactly the shape that triggers AI Overviews.

We triage queries into three buckets:

High-risk: question-form, full-sentence, 10+ words, definitional, “best way to,” “why does,” “what is,” anything that can be answered cleanly without fresh data.

Medium-risk: comparison queries where the user might still want a table, a calculator, specs, or a real workflow. AI Overviews show up, but they don’t always satisfy.

Lower-risk: branded, navigational, local inventory, product-level, and anything where the user needs to do something interactive or transaction-bound.

Treating all informational queries the same is how you end up spending months rewriting “what is X” pages that are structurally doomed to be summarized.
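
If you want to pre-sort a big keyword set before any manual review, a crude heuristic gets you surprisingly far. This is a sketch built from the Pew trigger patterns above; the keyword lists and the 10-word threshold are assumptions to validate against your own annotated SERPs, not a classifier we’re vouching for.

```python
# Rough three-bucket triage heuristic. Pew measured who/when/why starts;
# the other question words, the regexes, and the thresholds are our
# assumptions -- validate against annotated SERP data before trusting.
import re

QUESTION_STARTS = ("who", "when", "why", "what", "how",
                   "where", "can", "does", "is")
DEFINITIONAL = re.compile(r"\b(what is|definition|meaning of|best way to|why does)\b")
TRANSACTIONAL = re.compile(r"\b(buy|price|near me|coupon|login|download)\b")

def triage(query: str, brand_terms: set[str]) -> str:
    q = query.lower().strip()
    if any(brand in q for brand in brand_terms) or TRANSACTIONAL.search(q):
        return "lower-risk"   # branded, navigational, transaction-bound
    if (q.startswith(QUESTION_STARTS)
            or len(q.split()) >= 10
            or DEFINITIONAL.search(q)):
        return "high-risk"    # question-form, long, definitional
    return "medium-risk"      # comparisons, workflows, everything else
```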

Visibility without visits: citations in AI Overviews are not a traffic plan

We’ve sat in enough stakeholder meetings to know the emotional arc. Someone says, “Good news, we’re cited in the AI Overview.” Someone else says, “So we’re safe.” Then next month the traffic is still down.

Pew found users click links inside the AI summary about 1% of the time. That’s the reality check. If your business model needs visits, a citation is not a visit.

What citations are actually good for in 2026:

They are brand reinforcement. They are credibility signals. They can be the first touch in a user’s memory, even if they don’t click. They can also be a defensive play: if your competitor is cited and you are not, you’re giving away mindshare even before the user sees the classic results.

But if your reporting celebrates “AIO inclusion” while sessions and revenue fall, you’re telling a comforting story, not a true one.

SERP real estate strategy for the impact of AI on search click-through rates

The second hard part is tactical: what do you do when CTR collapses because the SERP changed?

A lot of commentary gets stuck at “try to be included in the overview.” That’s not enough. AI Overviews are often placed at the top, but that dominance is not absolute.

seoClarity data reported by Digiday showed AI Overviews appearing first 98% of the time in May 2025, dropping to 87.6% in June 2025. And the share appearing below position #1 increased from 0.4% to 2.4% over the same period, a big relative jump even if the absolute number is still small.

Small windows matter. That’s where you can still win clicks.

A decision tree we actually use: keyed to AIO placement

If the AI Overview is pinned at the very top and dominates the viewport, we assume CTR will be structurally worse. We focus on formats the overview can’t fully replace.

If the AI Overview is present but not first, or it’s visually compressed, we treat it like a new competitor snippet. Then we fight for the top organic slot, featured snippet alternatives, and other SERP features that still pull the eye.

If the AI Overview is absent, it’s “classic SEO day.” We still care about snippets and titles, but we don’t redesign the whole content strategy around an absent feature.

You need to track “AIO demotion events” explicitly: queries where the AIO drops below #1 or disappears. We flag them and move fast, because those windows don’t stay open forever.
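
Flagging those events is mechanical once you have daily AIO tracking. A sketch, assuming a table with per-query, per-day flags for whether the AIO appeared and whether it sat first; the column names and table shape are ours.

```python
# Flag "AIO demotion events": days where the AIO disappeared or dropped
# below position #1 versus the previous tracked day for that query.
# Expects columns: query, date, aio_present (bool), aio_is_first (bool).
import pandas as pd

def demotion_events(track: pd.DataFrame) -> pd.DataFrame:
    track = track.sort_values(["query", "date"])
    # Previous day's state for the same query (False for the first day).
    prev = (
        track.groupby("query")[["aio_present", "aio_is_first"]]
             .shift(1)
             .fillna(False)
             .astype(bool)
    )
    disappeared = prev["aio_present"] & ~track["aio_present"]
    demoted = (prev["aio_is_first"] & ~track["aio_is_first"]
               & track["aio_present"])
    return track[disappeared | demoted]
```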

Page-type mapping: pick battles the SERP can’t instantly summarize

This is the part that feels unfair. The pages that used to print money were often the most summarizable.

We map queries to page types that resist summarization:

  • Definition pages are easy for AI to compress. You can still rank, but the click ceiling is lower.
  • Comparisons can still earn clicks if you have a real rubric, real testing, or data the user can interact with.
  • Tools, calculators, templates, and checklists often keep clicks because users need an artifact, not an explanation.
  • Fresh data and original reporting still pull users off the SERP because the SERP cannot safely invent the latest numbers without risk.
  • Local or product inventory queries often require specifics that change, and users want to verify.

The catch: people try to “tool-ify” everything. A useless calculator does not save you. We’ve seen teams ship a template page that’s basically a blog post with a PDF gate, then wonder why nobody uses it. The format has to genuinely reduce effort.

Click-capture formats that AI summaries don’t satisfy well

We keep one short checklist on the wall because it stops unproductive debates. If we can offer one of these, we usually have a fighting chance even with AI Overviews present:

  • An interactive tool that produces a personalized result, not just static guidance.
  • A downloadable template people can apply immediately, ideally without a form wall.
  • A dataset with a timestamp, methodology, and a clear update cadence.
  • A workflow with screenshots from the current UI, because interfaces change and AI summaries get vague.
  • A local or inventory-specific view that depends on the user’s context.

That’s it. If your page is “a clean explanation,” assume the SERP will eat it.

Rapid snippet testing when AIO appears below #1

When we see AIO below the top slot, we treat that query like it’s temporarily back in 2019 and we get aggressive.

We rewrite titles to be mechanically specific, not clever. We tighten the first paragraph into a direct answer that matches the query language. We add structured markup where it actually changes eligibility. Then we watch the SERP.

Honestly, this step took us three tries to get right on one project because Google kept flipping between two title variants it was generating on its own. We stopped trying to “win the title” and started testing for the title Google would actually display.

The branded-query exception: brand demand as a CTR hedge

Most SEO teams treat brand as “someone else’s job.” That separation is a luxury you don’t have in a lower-click SERP.

Amsive data cited by Search Engine Land found that AI Overviews triggered on branded keywords 4.79% of the time, and that when present, branded queries saw an 18.68% CTR boost.

Two takeaways:

First, branded queries are less likely to get an AI Overview. That alone makes brand demand a defensive asset.

Second, when an overview does appear on a branded query, users may still click because they already know what they want. The query is navigational in disguise.

So we bake brand into the search plan in unglamorous ways: tighten knowledge panels and sitelinks by cleaning up site architecture, publish comparison pages that explicitly include the brand name in the framing users adopt, and create repeat-visit loops so people come back by typing us in, not rediscovering us.

We’ve watched this work. It’s slower than “rank for a keyword,” but it doesn’t disappear when a SERP feature changes.

Business adaptation when clicks are the scarce resource

If you assume you can “SEO harder” your way out, you’ll burn a year.

Even in the best case, some query classes will send fewer visits because users end sessions on the SERP. Pew’s session-ending delta (26% vs 16%) is the warning label.

We shift effort into extracting more value from each visit: fix the pages that bleed conversions, reduce steps, make the next action obvious, and stop hiding the good stuff behind “request a demo” when the user is still figuring out what the product is.

Retention becomes a search strategy. Email, app, bookmarks, community, anything that turns one visit into five. Not glamorous. Profitable.

A quick aside: we once spent an entire sprint debating a hero headline while the checkout page was timing out on mobile. That’s the kind of self-inflicted wound that feels like “AI took our traffic” until you look closely.

Risk, controls, and second-order effects nobody wants to own

A few things are now intertwined, whether you like it or not.

Google has started putting AI-generated summaries into the Discover feed in its mobile app in the US, replacing headlines with summaries and small publisher logos, plus an AI disclaimer. That changes the click incentive in a place publishers used to rely on for spikes.

Some publishers are turning to crawler controls, including Cloudflare-style controls, to manage bot access. Google also talks about “open web protocols” and site controls for inclusion in Search AI features. The annoying reality: blocking can protect content, but it can also shrink discovery. You don’t want to find out after the fact that your “defense” killed your inclusion in places that still drove meaningful demand.

Licensing is the other branch. Google has been reported to be working with about 20 national outlets on licensing pilots, and deals like Google’s reported $60M agreement with Reddit set expectations. Most sites will not get a deal. That’s not pessimism, it’s math.

One final caution: month-to-month charts are seductive. Digiday and Similarweb both note the environment is shifting fast, and Google will always have plausible alternative explanations like seasonality or algorithm updates. Sometimes they’re right. We’ve seen teams overreact to a single-month drop, rewrite everything, and then realize the AIO trigger rate simply changed back.

If you take anything from our bruises, take this: segment by AI Overview presence, forecast risk by query shape, and build assets that deserve a click even when the SERP tries to end the session early.

FAQ

Why are our rankings stable but clicks and CTR dropping?

AI Overviews can satisfy informational intent directly on the SERP, so users do not need to click blue links. Your average position can stay similar while CTR drops because the click opportunity is smaller.

How do we measure the impact of AI Overviews on CTR in Google Search Console?

GSC does not provide an AI Overview filter, so you need to annotate queries with AI Overview presence using a SERP feature dataset or your own SERP capture. Then compare the same query when AI Overviews are present versus absent.

Do citations in AI Overviews drive meaningful traffic?

Usually not. External links inside AI summaries tend to get clicked about 1% of the time, so citations are better treated as visibility and credibility signals than a reliable traffic source.

Which keywords are most likely to lose clicks because of AI Overviews?

Question-shaped and long informational queries are the highest risk, especially definitional searches that can be answered without fresh data. Branded, navigational, local, and transaction-oriented queries are generally lower risk.