How to decide which CRO tests to run (without guessing)

Conversion optimization
Author: Omar Lovert
Published on: 21 April 2026 | Updated on: 21 April 2026

The brands that get the most from CRO aren't the ones with the longest list of ideas. They're the ones with a system: a repeatable way to decide what to test next, why, and in what order.

Most ecommerce teams don't have this. They see something on a competitor's site, read a tip on LinkedIn, get a suggestion internally, and just implement the change. No test. No data. No way to know if it actually helped. Even the ones that do test often pick what to run based on whoever sounds most convincing or shouts the loudest.

That's a problem, because the goal of testing is to grow the brand using data. If you have 400 ideas in a spreadsheet, you need to make sure the one with the highest possible impact is the one you run first. You can download lists with hundreds of test ideas, but more ideas won't improve your results. A process for choosing the right ones will.

Whether you're starting from scratch or trying to bring structure to a messy backlog, here is a step-by-step system you can follow every month to take the guesswork out of what to test next.

 

Step 1: Start where the money leaks

The first filter is not "what's easy to change." It's: where are users dropping off in volume?

Look for pages or steps with:

  • high traffic and high exits
  • high add-to-cart but low checkout start
  • high checkout start but low purchase
  • high mobile traffic but poor mobile conversion

This is the CRO version of fixing the bottleneck first. If checkout is the bottleneck, obsessing over the homepage won't move revenue.
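
If you want to make "where the money leaks" concrete, a quick pass over your funnel counts is enough. Here's a minimal sketch in Python; the step names and numbers are made up for illustration, so swap in whatever your analytics tool exports:

# Find the biggest drop-off between funnel steps.
# Step names and counts are illustrative, not real data.
funnel = [
    ("product page views", 50_000),
    ("add to cart", 9_000),
    ("checkout started", 4_500),
    ("purchase", 2_700),
]

for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    drop_rate = 1 - next_count / count
    lost = count - next_count
    print(f"{step} -> {next_step}: {drop_rate:.0%} drop-off ({lost:,} sessions lost)")

The step with the biggest absolute loss is usually your first testing ground, not the page that's easiest to redesign.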

 

Step 2: Talk to your support and helpdesk team

Your analytics will show you where people drop off. Your support team can tell you why people almost didn't buy, or why they had a bad experience after buying.

Check which issues customers complain about most. Look at common questions before purchase (sizing, shipping, returns) and common frustrations after purchase.

Research suggests that over 50% of consumers rarely or never complain about a negative experience. They just leave.

So anything your support team does surface is likely just the tip of the iceberg, and worth paying close attention to. Add these friction points to your testing list. They're some of the highest-signal inputs you'll find.

 

Step 3: Pair quantitative data with what you see

Data tells you where the problem is. It doesn't always tell you why.

Your second input is a fast heuristic analysis: a structured walkthrough of the page where you evaluate usability, clarity, and trust against best practices rather than gut feeling. Ask questions like:

  • Is the page cluttered?
  • Is the primary action obvious above the fold?
  • Are you asking for trust before you've earned it?
  • Is key information hidden, missing, or hard to scan?

Tools like Hotjar make this easier. Rage click detection is one of the quickest ways to spot friction points your analytics data won't show you.

If a homepage is broken (too many competing elements, unclear hierarchy), you don't need months of debate. You need a clean hypothesis and a test.

 

Step 4: Use a prioritization score (PIE is enough)

There are multiple prioritization models. You don't need something fancy.

A simple scoring method works because it forces trade-offs. PIE scores each idea on three factors:

  • Potential: how much improvement is realistically available?
  • Impact: if it works, how meaningful is the upside?
  • Ease: how easily can you launch it and learn from it?

Score each factor 1 to 10. Multiply or average. Rank the list. Run the ideas at the top first.
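
If it helps to see it end to end, here's a minimal PIE scoring sketch in Python. The ideas and scores below are invented for illustration:

# Score each idea on Potential, Impact, and Ease (1-10), then rank.
ideas = [
    {"idea": "Reassurance above add-to-cart", "potential": 7, "impact": 8, "ease": 9},
    {"idea": "Homepage hero redesign", "potential": 6, "impact": 4, "ease": 3},
    {"idea": "Trust badges in cart", "potential": 5, "impact": 7, "ease": 9},
]

for item in ideas:
    # Averaging the three scores; multiplying works too, as long as you
    # apply the same rule to every idea.
    item["pie"] = (item["potential"] + item["impact"] + item["ease"]) / 3

for item in sorted(ideas, key=lambda i: i["pie"], reverse=True):
    print(f"{item['pie']:.1f}  {item['idea']}")

The exact numbers matter less than the discipline: every idea gets scored the same way, and the ranking, not the loudest voice, decides what runs first.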

This is how you avoid the two most common testing failures:

  • spending 3 weeks on a low-impact change
  • chasing cool ideas that don't affect the bottleneck

 

Step 5: Plan tests by slots, not by a single queue

Most stores can test multiple areas at once.

  • PDP content and layout (slot 1)
  • Cart trust messaging (slot 2)
  • Checkout friction fix (slot 3)

Instead of one long backlog, maintain a small prioritized list per slot.

Why this works:

  • you don't block your entire program on one dev task
  • you learn faster across the funnel
  • you reduce the "we're waiting on engineering" slowdown
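
One lightweight way to run this is a per-slot backlog instead of one long queue. A sketch, with illustrative contents:

# Each slot keeps its own short, prioritized list, so one blocked test
# doesn't stall the whole program. Contents are illustrative.
backlog = {
    "PDP": ["Reassurance above add-to-cart", "Shorter spec table", "Better size guide"],
    "Cart": ["Trust messaging near checkout button", "Show delivery estimate"],
    "Checkout": ["Remove optional form fields", "Express payment options first"],
}

# The next test for each slot is simply the top of its own list.
for slot, tests in backlog.items():
    print(f"{slot}: next up -> {tests[0]}")

A spreadsheet tab per slot does the same job; the point is that each slot always has a "next test" ready.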

 

Step 6: Write hypotheses that explain behavior

Bad hypothesis: "Make the button green to increase conversion."

Good hypothesis: "If we move key reassurance (shipping and returns) above the add-to-cart button, more shoppers will feel safe to commit, increasing add-to-cart rate."

A good hypothesis follows a repeatable formula that forces you to connect every test to a real reason and a measurable outcome. There are various frameworks you could use:

  • Standard template: "Changing [Element] from [A] to [B] will result in [Positive Outcome] because [Data/Insight]."
  • If/Then formula: "If we [change this], then [this metric] will [increase/decrease] because [reasoning]."
  • Problem/Solution focus: "We believe that doing [Solution] for [Audience] will make [Outcome] happen, which we will know by [Data/Metric]."

Pick one format and use it consistently. You're not testing UI; you're testing behavior change.
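
If you want to enforce the format, even something as small as a shared template helps. A sketch using the if/then formula; the example values are illustrative:

# Keep every hypothesis in the same format so no part gets skipped.
TEMPLATE = "If we {change}, then {metric} will {direction} because {reasoning}."

hypothesis = TEMPLATE.format(
    change="move shipping and returns info above the add-to-cart button",
    metric="add-to-cart rate",
    direction="increase",
    reasoning="shoppers see key reassurance before they have to commit",
)
print(hypothesis)

If you can't fill in the "because", the idea isn't ready to test yet.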

 

Step 7: Decide success metrics before you launch

Every test should have:

  • A primary metric (e.g. add-to-cart rate)
  • Guardrails (e.g. AOV, refund rate, checkout completion, support tickets)
  • A segment focus (mobile vs desktop, new vs returning, geo)

This avoids false wins that "improve conversion" but hurt profit.
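
Writing this down is easier when the whole test spec lives in one place before launch. A minimal sketch; the field names and values are illustrative and not tied to any specific testing tool:

# Define success before the test starts, not after the results come in.
test_spec = {
    "name": "Reassurance above add-to-cart",
    "primary_metric": "add_to_cart_rate",
    "guardrails": ["aov", "refund_rate", "checkout_completion", "support_tickets"],
    "segment": {"device": "mobile", "visitor_type": "new"},
}

# A result only counts as a win if the primary metric improves
# and no guardrail metric degrades.
print(f"Primary: {test_spec['primary_metric']} | Guardrails: {', '.join(test_spec['guardrails'])}")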

 

Step 8: Set expectations about what wins look like

This is a big one.

A 10% uplift in a controlled test does not mean your whole store's conversion rate jumps by 10%.

It depends on:

  • which page was tested
  • which segment saw it (mobile only? new users only?)
  • what share of revenue flows through that step
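
To make that concrete, here's a rough back-of-the-envelope sketch. The numbers are invented; the point is that the uplift gets diluted by the share of revenue the tested step actually touches:

# Translate a test uplift into store-wide impact (very rough approximation).
test_uplift = 0.10      # 10% uplift measured in the test
revenue_share = 0.30    # share of revenue flowing through the tested page and segment

blended_impact = test_uplift * revenue_share
print(f"Expected store-wide impact: ~{blended_impact:.0%}")  # ~3%, not 10%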

A mature CRO program communicates this upfront so stakeholders don't panic at month three.

 

Your monthly CRO routine

CRO gets easier when it becomes routine. If you want a simple operating rhythm:

  1. Pull funnel drop-offs
  2. Identify top 2 friction points (data + heuristics)
  3. Generate 6 to 10 test ideas
  4. Score with PIE
  5. Choose 2 to 4 tests across slots
  6. Define metrics and guardrails
  7. Launch, learn, document
  8. Roll out winners, iterate

 

Want to know where your biggest testing opportunities are?

We run free CRO audits for ecommerce brands. We look at your funnel, flag the highest-impact friction points, and give you a prioritized list of what to test first.

Book a call here to get a clear starting point.

