Repeatable experimentation workflows in e-commerce

Year
2025
Industries
B2C

The challenge

A large retailer wanted a faster way to validate UX and feature changes on its e-commerce platforms. The team needed measurable proof before rolling out updates across two brands operating at scale. The goal was to shorten the decision cycle, cut guesswork, and quantify the commercial upside of each change.

The client's internal architecture already supported short feedback cycles, so A/B testing fit naturally into the setup: it allowed controlled experiments in production with real customers.

Approach

We supported the client with a structured A/B testing framework, covering analysis, test design, build, rollout, and evaluation.

Process

  1. Review the current experience
    We examined the existing version, competitor patterns, and pain points together with the business team.
  2. Form a measurable assumption
    Each test started with a clear hypothesis for the alternative variant.
  3. Develop variant B
    Our team built the feature or UI change.
  4. Configure the test in LaunchDarkly
    Feature flags controlled exposure. Users were randomly allocated to variant A or B.
  5. Run the test
    The website displayed the assigned version for each user.
  6. Measure user behaviour
    GA4 and Tag Manager tracked interactions, conversions, and guardrail metrics.
  7. Evaluate results
    The test ended once statistical confidence was reached.
  8. Roll out or revert
    The winning version went live for all users. A losing variant provided insight for future iterations.
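The allocation in steps 4–5 can be sketched as deterministic hash-based bucketing, which is how feature-flag tools such as LaunchDarkly typically keep a user in the same variant across sessions. This is a minimal illustration, not the client's actual configuration; the flag key, function name, and 50/50 split are assumptions.

```python
import hashlib

def assign_variant(user_id: str, flag_key: str, rollout_b: float = 0.5) -> str:
    """Deterministically bucket a user into variant "A" or "B".

    Hashing user_id together with the flag key gives each experiment an
    independent split while keeping a user's assignment stable across
    sessions, so the website can always display the same version (step 5).
    """
    digest = hashlib.sha256(f"{flag_key}:{user_id}".encode()).hexdigest()
    # Map the first 8 hex characters to a float in [0, 1).
    bucket = int(digest[:8], 16) / 0x100000000
    return "B" if bucket < rollout_b else "A"

# Same user, same flag -> same variant on every visit:
assert assign_variant("user-123", "sticky-filters") == assign_variant("user-123", "sticky-filters")
```

Because assignment depends only on the user and flag identifiers, no state needs to be stored, and changing the rollout percentage moves users between variants predictably.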

What we tested

The experimentation track focused on elements that shape product discovery and decision-making, including:

  • Product reviews
  • Sticky filters
  • Availability toggle
  • Top subcategories
  • Product card layout variations

Each area was selected because it plays a clear role in how customers move through the buying journey.

Results

The full performance data from these experiments is confidential. What can be shared is that several tests delivered a meaningful lift in key metrics, including conversion. The gains were strong enough to justify rollout decisions at scale.
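As an illustration of how the "statistical confidence" in step 7 is typically established, a two-proportion z-test compares conversion rates between the variants. The sketch below uses only the Python standard library; the traffic and conversion numbers are invented for the example, since the real figures are confidential.

```python
from statistics import NormalDist

def conversion_z_test(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-proportion z-test for a difference in conversion rate.

    Returns (z, p) where p is the two-sided p-value; p < 0.05 is a
    common threshold for declaring a result statistically significant.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical data: 4.0% vs 4.6% conversion over 20,000 users per arm.
z, p = conversion_z_test(800, 20000, 920, 20000)
# Here p falls below 0.05, so this lift would clear a 95% confidence bar.
```

In practice a programme like this would also fix the sample size in advance and monitor guardrail metrics, but the core decision rule is the same comparison of observed rates against sampling noise.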

Sticky filters

The sticky filter test produced one of the most convincing outcomes in the programme. While specific numbers cannot be disclosed, the improvement in usage and downstream interaction rates was large enough for the retailer to approve a full implementation across both brands.

Product card view

The redesigned card layout also showed a measurable step forward. Users interacted more often with the updated version and moved more effectively through the category flow. Precise figures remain confidential, but the uplift was significant in both engagement and conversion behaviour.


Key learnings

Long-term lessons for the organisation’s experimentation culture:

  • Robust setup is essential for valid outcomes.
  • Testing multiple variants helps uncover deeper behavioural patterns.
  • Intuitive ideas can still lose when tested with real shoppers.
  • Incremental testing limits risk and isolates impact.
  • Continuous experimentation creates compounding improvement.
  • Even failed variants offer useful insight.

Impact

The programme established a repeatable experimentation workflow inside the client’s digital teams. It created a decision model based on data, not assumptions. Individual experiments generated measurable revenue uplift, and the organisation gained a scalable approach for future optimisation.
