INP Explained: What Core Web Vitals Measure Now

Two years after replacing FID, Interaction to Next Paint has settled in. Here's what actually matters for your rankings, your users, and your optimization workflow.

Technical SEO · Core Web Vitals · INP · Web Performance

Core Web Vitals still include three metrics: LCP for loading, CLS for visual stability, and INP for responsiveness. INP replaced First Input Delay in March 2024, and nothing about that swap has changed since. If you've been waiting for Google to tighten thresholds or increase ranking weight, stop. Google is shipping tooling improvements, not policy changes.

What has changed is how well we can actually find and fix the problems INP surfaces.

Every interaction counts, not just the first tap

FID measured how long the browser took to start processing your page's very first interaction. Click a button on load, get a number. INP measures the latency of every click, tap, and keyboard press across the entire session, then reports the worst one—with a small outlier adjustment of 1 interaction per 50. Scrolling and hovering don't count.

Each interaction has three phases: input delay (the time before your event handlers start running, typically because the main thread is busy when the user acts), processing time (your event handlers running, plus any rendering work they trigger), and presentation delay (the gap until the browser paints the next frame). The "good" threshold is 200ms total across all three; anything between 200ms and 500ms is "needs improvement," and above 500ms is flagged as poor.

Most teams don't realize INP uses two different percentiles simultaneously. The metric is reported at the 75th percentile of your users for pass/fail assessment in Search Console, but it's measured at roughly the 98th percentile of interactions within each session. Google picks your near-worst interaction per visit, then checks whether 75% of your users still come in under 200ms for that interaction. It's deliberately unforgiving. You can't pass by having fast buttons if your nav menu locks up.
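To make that two-percentile structure concrete, here's a minimal sketch of the selection logic in JavaScript. It follows the published definition (worst interaction per page view, discarding one outlier per 50 interactions, then a 75th-percentile assessment across sessions) but uses a simple nearest-rank percentile, so it is an illustration, not CrUX's exact implementation; all numbers are hypothetical.

```javascript
// Per-session INP: worst interaction latency, minus one outlier
// for every 50 interactions in the session.
function sessionInp(durations) {
  const sorted = [...durations].sort((a, b) => b - a); // worst first
  const outliers = Math.floor(sorted.length / 50);     // 1 per 50
  return sorted[Math.min(outliers, sorted.length - 1)];
}

// Site-level assessment: pass/fail is judged at the 75th percentile
// of per-session INP values (nearest-rank approximation here).
function siteAssessment(sessionInps) {
  const sorted = [...sessionInps].sort((a, b) => a - b);
  const p75 = sorted[Math.floor(sorted.length * 0.75)];
  if (p75 <= 200) return 'good';
  if (p75 <= 500) return 'needs improvement';
  return 'poor';
}
```

Note how a session with 60 interactions gets one outlier forgiven: a single 999ms spike is dropped and the second-worst interaction becomes that session's INP.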

INP's ranking impact is smaller than you think

Here's where the conversation usually goes sideways. Teams either panic about Core Web Vitals or dismiss them entirely. The data says something more boring and more useful than either camp admits.

Core Web Vitals function as a tie-breaker. DebugBear's analysis found position 1 results show only about 10% higher pass rates than position 9.

That's statistically significant but not dominant. Google's own documentation describes them as "components of broader page experience factors" that align with what core ranking systems reward. Content relevance still runs the show.

Two nuances matter here: ranking benefits accrue gradually before you reach the "good" thresholds, so partial improvement isn't wasted. But there's no bonus for going beyond "good." Hitting 150ms on INP doesn't rank better than 195ms.

Passing requires 75% of your users experiencing "good" performance across all three Core Web Vitals simultaneously.

That 75% number trips people up. A page can have a median INP of 120ms and still fail if a quarter of visitors on slower devices push past 200ms.

Third-party scripts are the problem; two new APIs are the fix

The pattern showing up in production is consistent. H&M's menu interaction clocking 303ms and Wales Online's consent button hitting 382ms point to the same root cause: third-party scripts and JavaScript-heavy client-side architectures monopolizing the main thread. Most INP failures aren't from your code. They're from your analytics stack, your consent manager, and your ad scripts all fighting for the same thread.

The two APIs Chrome shipped matter more than the metric itself.

The Long Animation Frames API (LoAF, available since Chrome 123) gives you what the Long Tasks API never could: script attribution. When an interaction is slow, LoAF tells you which script caused it. Before LoAF, you knew something was blocking; now you know it was your Hotjar snippet or your A/B testing framework. That specificity turns a vague "fix your INP" into a concrete conversation with your third-party vendors.
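A minimal sketch of what that attribution looks like in practice: a helper that ranks the scripts inside a LoAF entry by their contribution, wired to a PerformanceObserver in browsers that support the entry type. The entry shape (entry.scripts with duration, invoker, and sourceURL) follows the LoAF spec; the CDN URL in the comment is a made-up example.

```javascript
// Rank the scripts that contributed to one long animation frame,
// worst offender first. Inline scripts have no sourceURL.
function attributeLoAF(entry) {
  return (entry.scripts || [])
    .map(s => ({
      source: s.sourceURL || '(inline)',
      invoker: s.invoker,            // e.g. "IMG.onload", "BUTTON.onclick"
      ms: Math.round(s.duration),
    }))
    .sort((a, b) => b.ms - a.ms);
}

// Browser wiring (Chrome 123+); the guard keeps this harmless elsewhere.
if (typeof PerformanceObserver !== 'undefined' &&
    PerformanceObserver.supportedEntryTypes?.includes('long-animation-frame')) {
  new PerformanceObserver(list => {
    for (const entry of list.getEntries()) {
      console.log(`LoAF ${Math.round(entry.duration)}ms`, attributeLoAF(entry));
    }
  }).observe({ type: 'long-animation-frame', buffered: true });
}
```

Feeding a real entry through attributeLoAF is what turns "something blocked for 180ms" into "the analytics tag on this specific CDN blocked for 180ms."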

scheduler.yield() solves the other half. Long tasks block the main thread because JavaScript runs to completion. The old workaround was setTimeout, which shoves your remaining work to the back of the task queue. scheduler.yield() breaks a long task into chunks while preserving priority, so the browser can handle the pending interaction and resume your code where it left off. It's a single line: await scheduler.yield() between work segments.
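The chunking pattern looks like this. It's a sketch, not a drop-in: the chunk size of 50 is arbitrary, and the setTimeout fallback for browsers without scheduler.yield() loses the priority-preserving resume that makes the real API attractive.

```javascript
// Yield to the main thread between chunks of work so the browser can
// service pending interactions. Falls back to setTimeout(0) where
// scheduler.yield() isn't available (the fallback goes to the back of
// the task queue instead of resuming with priority).
const yieldToMain = () =>
  (typeof scheduler !== 'undefined' && scheduler.yield)
    ? scheduler.yield()
    : new Promise(resolve => setTimeout(resolve, 0));

async function processAll(items, handleItem, chunkSize = 50) {
  const results = [];
  for (let i = 0; i < items.length; i++) {
    results.push(handleItem(items[i]));
    // Break the long task after each chunk so a queued click or
    // keypress can be handled before we resume.
    if ((i + 1) % chunkSize === 0) await yieldToMain();
  }
  return results;
}
```

The win over a blocking loop isn't throughput; total work is the same. It's that no single task holds the thread long enough to push an interaction past the 200ms budget.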

For React apps specifically, unnecessary re-renders are a major processing-time contributor. React.memo() and proper memoization hygiene matter, but the architectural question is bigger: are you rendering this component on the server or constructing it client-side? Server-rendered markup that hydrates selectively will almost always have better INP than a fully client-constructed equivalent.

Our read: the optimization story for INP in 2026 isn't about micro-tuning event handlers. It's about two architectural decisions. First, how much JavaScript are you shipping to the main thread? Second, which third-party scripts have you actually audited for interaction cost? Most sites failing INP haven't done either exercise. The tools to diagnose are finally good enough; the discipline to act on the findings is the bottleneck.

Frequently Asked Questions