A page gains 200 new keyword rankings. Average position drops. CTR collapses. Impressions spike. Every one of those signals looks like a problem, and every one of them is actually fine. Google Search Console is doing exactly what it was designed to do. The trouble is that the SERP it was designed for barely exists anymore.
Most SEOs check GSC daily and misread it weekly. Not because the data is wrong, but because the aggregation logic, the metric definitions, and the benchmarks we've all internalized were built for a cleaner, simpler search results page. In 2026, with AI Overviews reshaping click behavior and multi-URL rankings complicating impression counts, the gap between what GSC shows and what's actually happening has become a real operational problem.
Two dashboards, two realities
Here's a detail that trips up even experienced practitioners: GSC runs two completely different aggregation systems depending on which tab you're looking at.
Property-level aggregation (used in the main Performance charts) treats your entire domain as one entity. If three of your URLs appear in the results for a single "buy flowers" search, that counts as 1 impression. URL-level aggregation (used in the Pages tab and Search Appearance) counts each URL separately, so the same scenario produces 3 impressions.
That's not a rounding difference. It's a 200% discrepancy in impressions that completely changes your CTR calculation. Under property aggregation, one click on any of those three URLs gives you 100% CTR: 1 click against 1 impression. Under URL aggregation, the same click is spread across 3 impressions, so the aggregate CTR is 33%.
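To make the arithmetic concrete, here's a minimal sketch of the two aggregation modes applied to the same underlying event. The rows, URLs, and column names are invented for illustration and mimic a per-query, per-page breakdown rather than any official GSC export schema.

```python
import pandas as pd

# One search for "buy flowers" in which three of our URLs appeared and one earned the click.
rows = pd.DataFrame([
    {"query": "buy flowers", "page": "/roses",  "clicks": 1, "impressions": 1},
    {"query": "buy flowers", "page": "/tulips", "clicks": 0, "impressions": 1},
    {"query": "buy flowers", "page": "/lilies", "clicks": 0, "impressions": 1},
])

# Property-level logic: the domain counts once for this SERP appearance,
# no matter how many of its URLs showed up.
prop_impressions = 1
prop_clicks = rows["clicks"].sum()            # 1
prop_ctr = prop_clicks / prop_impressions     # 1.00 -> 100% CTR

# URL-level logic: every URL that appeared counts its own impression.
url_impressions = rows["impressions"].sum()   # 3
url_clicks = rows["clicks"].sum()             # 1
url_ctr = url_clicks / url_impressions        # 0.33 -> 33% CTR

print(f"Property CTR: {prop_ctr:.0%} | URL CTR: {url_ctr:.0%}")
```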
If you've ever pulled CTR data from the Performance overview, then cross-referenced it against the Pages tab and wondered why the numbers didn't match — this is why. The Performance charts use property-level logic; the Pages tab uses URL-level. They're answering different questions with different math, and GSC doesn't flag the switch.
The practical implication, as Anash Chenkov explains: property aggregation tells you how your domain performs against a query. URL aggregation tells you which specific pages are underperforming. Both are useful. Confusing them gives you nonsense.
This discrepancy compounds when you're tracking performance across multiple report views or exporting data for analysis. The aggregation method determines not just what you see, but what conclusions you can reasonably draw from it.
Average position is the most dangerous metric in your dashboard
"Average position" sounds straightforward.
It isn't.
Analytics Edge calls it a potentially "meaningless metric," and that's not hyperbole. The problem is mathematical. Average position aggregates across every impression where your page appeared, including all the SERP variations, device types, and personalization differences that produce wildly different rankings for the same query. A page ranking #2 for 100 searches and #47 for 10 searches reports an average position of ~6. That number describes neither reality accurately.
Average position almost always gets worse when a page starts performing better. When you begin ranking for new, more competitive queries where you sit at position 15 or 20, those weaker positions push the average toward a higher (worse) number even though the underlying performance is positive. A content piece attracting new search interest looks, in GSC's average position metric, like it's losing ground.
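Here's a small worked example of how GSC's impression-weighted average blends those numbers, and what happens when the same page later picks up new queries around position 15 to 19. All figures are invented for illustration.

```python
import pandas as pd

def avg_position(df):
    """Impression-weighted average position, the way GSC blends it."""
    return (df["position"] * df["impressions"]).sum() / df["impressions"].sum()

# The #2-for-100-searches, #47-for-10-searches page from above.
before = pd.DataFrame([
    {"query": "core term",  "position": 2,  "impressions": 100, "clicks": 28},
    {"query": "niche term", "position": 47, "impressions": 10,  "clicks": 0},
])

# The page then starts ranking for new, more competitive queries around position 15-19.
after = pd.concat([before, pd.DataFrame([
    {"query": "new term a", "position": 15, "impressions": 200, "clicks": 6},
    {"query": "new term b", "position": 19, "impressions": 150, "clicks": 3},
])], ignore_index=True)

print(f"Before: avg position {avg_position(before):.1f}, clicks {before['clicks'].sum()}")
print(f"After:  avg position {avg_position(after):.1f}, clicks {after['clicks'].sum()}")
# Average position gets numerically worse (~6.1 -> ~14.2) while total clicks rise (28 -> 37).
```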
This is the metric inversion that catches teams off guard during reporting. Stakeholders see position going up (numerically, meaning worse) and assume something broke. What actually happened is the page expanded its query footprint — which is exactly what you want.
The CTR collapse nobody satisfactorily explained
The 2026 SERP has introduced a new layer of measurement chaos. According to Dataslayer's analysis, organic CTR dropped 61% for queries where AI Overviews appear (from 1.76% to 0.61%). Even queries without AI Overviews saw a 41% CTR reduction, suggesting broader behavioral shifts.
The measurement problem goes deeper than falling click rates. AI Overviews generate their own impression events, which inflate your impression counts by 27–49% without corresponding clicks. One documented case showed impressions up 27.56% year-over-year while clicks fell 36.18%.
This is the "impressions up, clicks down" paradox that's been confusing reporting across the industry. The numbers aren't wrong. Google is counting an AIO appearance as an impression (which, by their definition, it is). But GSC provides no filter to separate AI Overview impressions from traditional organic impressions, making it impossible to attribute performance changes to actual ranking shifts versus AIO expansion.
Your historical CTR benchmarks assumed a SERP that barely exists anymore. Position #1 could mean 46.9% CTR with sitelinks or 0.64% with an AI Overview sitting above it. That's a 7,200% variance for the same reported position.
The standard "#1 = 27% CTR, #2 = 12%" figures that still circulate in pitch decks assumed clean, ten-blue-links SERPs. Those numbers were always averages; now they're actively misleading.
Seotistics documents a detail that should change how you segment: branded keywords produce CTRs 9–10x higher than non-branded queries. If your reporting mixes both, your "average CTR" is a fiction. A site with heavy brand search volume will show CTRs that make non-branded performance look strong by association. Strip out branded queries and the picture usually changes significantly.
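If you work from a GSC query export, the segmentation is a few lines of pandas. This is a minimal sketch: the file name, column names, and brand terms are placeholders for your own export and brand vocabulary.

```python
import re
import pandas as pd

BRAND_TERMS = ["acme", "acme flowers"]   # hypothetical brand variants
brand_pattern = re.compile("|".join(map(re.escape, BRAND_TERMS)), re.IGNORECASE)

df = pd.read_csv("gsc_queries.csv")      # hypothetical export with query/clicks/impressions columns
df["branded"] = df["query"].str.contains(brand_pattern)

segments = df.groupby("branded")[["clicks", "impressions"]].sum()
segments["ctr"] = segments["clicks"] / segments["impressions"]
print(segments)
# Expect the branded segment's CTR to sit far above non-brand; report the two separately.
```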
Our read: CTR as a standalone KPI is dead for most reporting purposes. It needs to be segmented by brand vs. non-brand, filtered by SERP feature presence, and interpreted alongside click and impression trends rather than against static benchmarks.
Reading the combinations
The individual metrics mislead. The combinations still tell you something.
Seotistics' framework offers a useful starting point: low clicks against high impressions points to a title and meta description problem (you're showing up but not getting picked), while growing clicks against declining impressions is actually a positive signal, because you're converting a higher share of a potentially smaller but more qualified impression pool.
The instinct to optimize each metric independently is the core mistake. Impressions without click context are vanity. Position without query-count context is noise. CTR without SERP-feature context is a guess.
What works: build reporting that tracks these metrics as ratios and trends over time, segmented by query type. Watch for the divergence patterns — impressions up / clicks down, position declining / traffic growing — and investigate them as signals rather than treating them as inherently good or bad.
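One way to operationalize that, sketched here under the assumption of two GSC query exports covering comparable periods: compute per-query click and impression deltas, then bucket each query into one of the combination patterns above. The thresholds, file names, and column names are illustrative, not prescriptive.

```python
import pandas as pd

def classify(row, threshold=0.10):
    """Bucket a query's period-over-period movement into a divergence pattern."""
    impr_up, impr_down = row["impr_delta"] > threshold, row["impr_delta"] < -threshold
    clicks_up, clicks_down = row["clicks_delta"] > threshold, row["clicks_delta"] < -threshold
    if impr_up and clicks_down:
        return "showing up, not getting picked (check titles, snippets, SERP features)"
    if impr_down and clicks_up:
        return "smaller but more qualified impression pool (positive)"
    if impr_up and clicks_up:
        return "growing"
    if impr_down and clicks_down:
        return "shrinking"
    return "stable / mixed"

def load(path):
    # Expects columns like "query", "clicks", "impressions" in the export.
    return pd.read_csv(path).groupby("query")[["clicks", "impressions"]].sum()

now, prev = load("gsc_queries_now.csv"), load("gsc_queries_prev.csv")
trend = now.join(prev, lsuffix="_now", rsuffix="_prev", how="inner")
trend = trend[(trend["clicks_prev"] > 0) & (trend["impressions_prev"] > 0)]
trend["clicks_delta"] = trend["clicks_now"] / trend["clicks_prev"] - 1
trend["impr_delta"] = trend["impressions_now"] / trend["impressions_prev"] - 1
trend["pattern"] = trend.apply(classify, axis=1)

print(trend["pattern"].value_counts())
```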
GSC remains the best first-party search data most sites will ever get. But the interpretive layer between the dashboard and your decisions needs a full rebuild for 2026's reality. The tool hasn't broken. The assumptions baked into how we read it have.