A/B Testing: Improve Online Experiences and Business Outcomes

August 8, 2023

One of the exciting things about being a data-driven business is that rather than relying on instinct or endless debate to make creative and content choices, we have some awesome tools for unlocking the value of data and analytics and making evidence-based decisions quickly. This speed to insight means we can plan, develop, test, evaluate and iterate online visitor experiences in real or near-real time.

The most common conversion rate optimization (CRO) technique that helps digital marketers make data-based decisions is A/B testing. It solves the crucial CRO question: How do I know what’s driving my conversion rates up or down? Answer: By understanding how the different elements of online content assets (website, mobile apps, landing pages, forms, ads, etc.) impact your audience’s behavior.

While the science may sound a bit intimidating, A/B testing is actually one of the simplest, most powerful and cost-effective forms of randomized controlled experiment, and it can work for any business seeking to increase conversions, leads and, ultimately, sales.

What is A/B testing?

A/B testing, also known as split testing, is the experimental process in which two versions of a content asset are compared to determine which one performs better based on audience behavior over a specific time period (one long enough to draw accurate conclusions from the results). As with most properly conducted experiments involving statistical inference, A/B tests require a hypothesis, a control, a variation, a test group and a statistically calculated result.

Let’s review each one of these requirements.

Hypothesis – what to test and why?

Begin with a strong value connection to either a customer-centered business problem or an overarching marketing strategy, with performance metrics aligned to a relevant business strategy or goal. This will help you prioritize what to test according to the quantifiable impact it will have on achieving those stated goals.

Discovery questions include: What’s the problem we’re trying to solve and its root cause? Can it be solved by assessing existing data and analytics – online and offline, qualitative and quantitative? Have we tried something similar during this or another stage in our conversion funnel?

As you explore these questions with your team, you’ll determine what you should be testing (at the element, page or visitor flow level) and develop the hypothesis – a three-part, informed prediction for what will happen when this element change is tested. For example:

“If [what’s being changed], then [what you predict the significant reaction to the change will be], because [the rationale for your prediction].”

Focus on the big picture but with small, incremental changes. It’s amazing how minor changes can drive major lifts in your conversion rates.

The control, variation and test group 

Version A (control) is tested in parallel against Version B (challenger), a single-change variation of Version A. The randomly selected test group (audience) is split evenly so that one half is shown Version A and the other half sees Version B.
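To make the mechanics concrete, here is a minimal sketch in Python (the function and experiment names are illustrative, not tied to any particular testing platform) of how an even, random-but-consistent split can be implemented by hashing each visitor ID into one of two buckets:

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str = "homepage-cta") -> str:
    """Deterministically assign a visitor to 'A' (control) or 'B' (challenger).

    Hashing the visitor ID together with the experiment name gives a stable,
    roughly 50/50 split without storing any per-visitor state.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Illustrative usage: the same visitor always gets the same version
for vid in ["visitor-1001", "visitor-1002", "visitor-1003"]:
    print(vid, assign_variant(vid))
```

Because the assignment is keyed to the visitor and the experiment, a returning visitor keeps seeing the same version for the duration of the test, which keeps the comparison clean.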

You can also test several single-change variants against the control (A/B/n testing) using this same process, as long as the number of participants or observations in your experiment is large enough to make your results statistically significant (reliable) at the industry-standard 95% confidence level with a 5% margin of error. The bigger the test audience, the smaller the margin of error. Equally important, let your tests play out until you have sufficient traffic to avoid false positives and negatives.
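As a rough illustration of how audience size relates to margin of error, here is a minimal sketch that assumes the 5% margin of error refers to the precision of the conversion-rate estimate and applies the standard single-proportion sample-size formula (the numbers and function name are purely illustrative):

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_rate: float = 0.05,
                            margin_of_error: float = 0.05,
                            confidence: float = 0.95) -> int:
    """Minimum visitors per variant to estimate a conversion rate to within
    +/- margin_of_error at the given confidence level.

    Uses the single-proportion formula n = z^2 * p * (1 - p) / MOE^2,
    so a bigger audience (larger n) means a smaller margin of error.
    """
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)  # ~1.96 for 95% confidence
    p = baseline_rate
    return math.ceil(z ** 2 * p * (1 - p) / margin_of_error ** 2)

# Illustrative: a tighter +/-1% margin of error needs far more traffic
print(sample_size_per_variant(margin_of_error=0.05))  # 73 visitors per variant
print(sample_size_per_variant(margin_of_error=0.01))  # 1,825 visitors per variant
```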

Evaluate results

After confirming that all components of your test worked as intended and your results are accurate, document which version performed best and why. Whether the results prove or disprove your hypothesis, capture and share them along with key learnings. Continue using the winning version, which becomes your new control, and create a new hypothesis to test and refine your efforts. You’re now testing against your own excellence!
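One simple way to sanity-check that the difference you observed is statistically meaningful before declaring a winner is a two-proportion z-test. The sketch below uses only Python's standard library; the conversion counts are purely illustrative and not drawn from any real campaign:

```python
from statistics import NormalDist

def challenger_beats_control(conv_a: int, n_a: int,
                             conv_b: int, n_b: int,
                             alpha: float = 0.05) -> bool:
    """Two-proportion z-test: did Version B convert significantly better
    than Version A at the given significance level (alpha = 0.05 ~ 95%)?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)             # pooled rate under H0
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))         # two-sided test
    return p_value < alpha and p_b > p_a

# Illustrative: 10,000 visitors per version, 500 vs. 585 conversions
print(challenger_beats_control(500, 10_000, 585, 10_000))  # True: B wins
```

If the test returns False, the honest conclusion is that the data does not yet support declaring a winner, which usually means running the test longer or revisiting the hypothesis.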

Stop guessing, start testing

A/B testing is a perpetual journey of testing, learning and iterating – which is why it’s nicknamed “Always Be Testing.” It requires patience and discipline to understand your visitors, their behaviors and the elements that influence changes in their behavior, but the rewards are worth the effort. So, remove the guesswork of determining what works and what doesn’t for your business by making A/B testing a vital component of your digital marketing strategy to optimize online engagement and conversions.

Susan Landers
Director, Demand Generation

Susan Landers is responsible for Konica Minolta’s Demand Generation for North America. She leads a team of digital marketers that develop go-to-market strategies, work in cross-functional teams and implement integrated campaigns to support optimization and revenue growth. Susan has extensive experience in all facets of the business and is passionate about identifying market opportunities through data and bringing together teams to drive growth. Since joining Konica Minolta twelve years ago, she has been influential in driving digital engagement, implementing marketing technology, and building, growing, and inspiring teams.