Your winning A/B test has an expiry date

Author: Omar Lovert | Published on: 21 April 2026 | Updated on: 21 April 2026

You ran a test, it won, you shipped it, and conversion went up. Everyone was happy.

Six months later, someone pulls up the numbers and asks: "Why isn't this working as well anymore?"

It's not a fluke. It's one of the least-talked-about realities in CRO: winning tests normalize over time. Not all of them, and not at the same rate, but enough that any serious optimization program needs a plan for it.

If your CRO strategy is "find a winner and move on," you're leaving compounding gains on the table.


The win isn't permanent. Here's why.

There's a natural assumption that once a test wins, the improvement is locked in. But conversion lifts aren't static. They're tied to how users respond to a specific change in a specific context, and both of those things shift over time.

Think about urgency messaging. A countdown timer or a low-stock indicator can produce a meaningful uplift when it's first introduced. Users respond to it because it's new, it creates pressure, and it accelerates decisions. But returning customers start to tune it out. They've seen the "only 3 left" message before. The psychological trigger weakens with repeated exposure.

This isn't limited to urgency tactics. Seasonal relevance changes. Your audience mix shifts as marketing spend moves between channels. Competitors adopt similar approaches. The context that made a test win in February may not exist in August.

Not every test decays at the same rate. A change that reorganizes product information into a clearer, more scannable format tends to hold its value far longer than a change built around a momentary psychological trigger.

The reason is straightforward: structural improvements help users do what they were already trying to do. That need doesn't go away. Tactical nudges, on the other hand, rely on novelty, and novelty fades.


Catching decay early (and what to do about it)

Most teams don't track past winners because they don't think they need to. The test won, it was shipped, and the backlog moved on. But if you want to stay ahead of decay, you need two things: a simple monitoring habit and a willingness to reverse test.

On the monitoring side: keep a log of every test you've shipped as a permanent change. Include the date, the metric it moved, and the size of the uplift. Every quarter, check whether those metrics are still trending where you'd expect. Factor in seasonality and traffic mix shifts, but a consistent downward drift is a signal worth investigating.
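To make that log concrete, here's a minimal sketch in Python of what a shipped-test log and quarterly drift check could look like. Every name, number, and threshold in it is a made-up placeholder; your own test archive and analytics are the real source of truth.

```python
from datetime import date

# A minimal shipped-test log. All fields here (names, dates,
# metrics, uplifts) are illustrative placeholders.
shipped_tests = [
    {"name": "PDP urgency badge", "shipped": date(2025, 11, 3),
     "metric": "add_to_cart_rate", "baseline": 0.082, "uplift": 0.10},
    {"name": "Homepage category grid", "shipped": date(2025, 9, 14),
     "metric": "category_click_rate", "baseline": 0.21, "uplift": 0.15},
]

# Current quarterly readings for the same metrics (hypothetical).
current_readings = {
    "add_to_cart_rate": 0.084,
    "category_click_rate": 0.245,
}

DRIFT_THRESHOLD = 0.5  # flag if less than half the original uplift remains

for test in shipped_tests:
    expected = test["baseline"] * (1 + test["uplift"])
    current = current_readings[test["metric"]]
    # Share of the original uplift still visible in the metric today.
    retained = (current - test["baseline"]) / (expected - test["baseline"])
    status = "OK" if retained >= DRIFT_THRESHOLD else "INVESTIGATE"
    print(f"{test['name']}: retained {retained:.0%} of uplift -> {status}")
```

A check like this won't separate decay from seasonality or traffic-mix shifts on its own, which is why the quarterly review still needs a human looking at the context.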

On the testing side: one of the most underused moves in CRO is the reverse test. Instead of only testing new ideas forward, you go back and test whether removing a previous winner causes a measurable drop.

  • If removing the change causes a meaningful drop, the original win is still earning its keep.
  • If there's no significant difference, the element has run its course and you've freed up page real estate for something with more potential.
  • If the drop is smaller than the original uplift (say, 5% down versus the original 10% up), that's normal. It doesn't mean the result was wrong. It means the context has shifted and the marginal value has decreased.
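Here's a minimal sketch of one standard way to read out a reverse test: a two-sided, two-proportion z-test comparing the arm that keeps the shipped winner against the arm that removes it. The visitor and conversion counts are hypothetical, and the 0.05 cutoff is a convention, not a rule.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal p-value
    return p_a, p_b, z, p_value

# Hypothetical reverse test: arm A keeps the shipped winner,
# arm B removes it. Counts are illustrative, not real data.
rate_a, rate_b, z, p = two_proportion_ztest(conv_a=1040, n_a=20000,
                                            conv_b=980, n_b=20000)
print(f"With element: {rate_a:.2%}, without: {rate_b:.2%}, p = {p:.3f}")
if p < 0.05 and rate_b < rate_a:
    print("Removing the winner causes a real drop: it's still earning its keep.")
else:
    print("No significant drop: the element may have run its course.")
```

A non-significant result here doesn't prove the element does nothing; it tells you any remaining effect is too small to justify the real estate, which is exactly the signal the second bullet describes.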


Getting more from a winner before it fades

The best CRO programs don't just ship winners and move on. They treat a winning test as the starting point for a new round of iteration. Once you know a change works, the question becomes: can we push it further?

We've seen this play out clearly with homepage layout tests. In one case, data showed that users were consistently clicking on product categories in the second fold while ignoring the promotional banner above them. The logical next step was to move categories higher, but the stakeholder wasn't ready to drop the banner entirely. So the team iterated:

  • Iteration 1: A split layout with the banner alongside the categories. Data showed users engaging with the categories far more than the banner.
  • Iteration 2: Categories moved higher, banner pushed down. Engagement increased again.
  • Iteration 3: Categories led the page entirely, producing the highest uplift in the full testing sequence.

If the team had stopped after the first win, they would have captured maybe a third of the total improvement available.

When you do find a winner that's losing its edge, don't treat it as a failure. Treat it as a signal. The underlying user behavior has shifted, and you now have a fresh optimization opportunity in the same spot that already proved it can move the needle.


The compound effect

CRO programs that only chase new tests are running on a treadmill. You're adding wins, but the old ones are quietly eroding behind you.

Programs that actively manage their portfolio of implemented changes (iterating on winners, reverse testing assumptions, and extracting full value from every successful variation) are the ones that compound growth over time.

The goal isn't just to find the next winning test. It's to make sure the ones you've already found are still working for you.

Want to know which of your past CRO wins are still performing?

We run free CRO audits for ecommerce brands: we assess your current site experience, identify where previous optimizations may have lost their edge, and give you a prioritized roadmap for what to test next.

Book a call here to get a clear starting point.

