What is A/B testing?

A/B testing definition

A/B testing—also called split testing or bucket testing—compares the performance of two versions of content to see which one appeals more to visitors/viewers. It tests a control (A) version against a variant (B) version to measure which one is more successful based on your key metrics. As a digital marketing practitioner doing either B2B or B2C marketing, your options for conducting A/B tests include:

  • Website A/B testing (copy, images, colors, designs, calls to action), which splits traffic between two versions—A and B (see the sketch after this list). You monitor visitor actions to identify which version yields more conversions, that is, more visitors who perform the desired action.
  • Email marketing A/B testing (subject line, images, calls to action), which splits recipients into two segments to determine which version generates a higher open rate.
  • Content testing, which compares content selected by editors against content selected by an algorithm based on user behavior to see which one results in more engagement.
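To make the website traffic split concrete, here is a minimal sketch of deterministic bucket assignment in Python. The `visitor_id`, the hash-based 50/50 split, and the experiment name are illustrative assumptions, not a prescribed implementation.

```python
import hashlib

def assign_variant(visitor_id: str, experiment: str, split: float = 0.5) -> str:
    """Deterministically assign a visitor to 'A' (control) or 'B' (variant).

    Hashing the visitor ID together with the experiment name keeps each
    visitor in the same bucket on every visit, so their experience stays
    consistent for the duration of the test.
    """
    digest = hashlib.sha256(f"{experiment}:{visitor_id}".encode()).hexdigest()
    bucket = int(digest, 16) / float(1 << 256)  # map the hash to [0, 1)
    return "A" if bucket < split else "B"

# Example: the same visitor always lands in the same bucket.
print(assign_variant("visitor-12345", "homepage-cta-test"))
```

Because the assignment comes from a hash rather than a fresh random draw, a returning visitor sees the same version every time, which keeps the measurement clean.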

Regardless of the focus, A/B testing helps you determine how to provide the best customer experience (CX).

In addition to A/B tests, there are also A/B/N tests, where the "N" represents an unknown number of variations. An A/B/N test compares more than two versions.

When and why you should A/B test

A/B testing provides the most benefits when it operates continuously. A regular flow of tests can deliver a stream of recommendations on how to fine-tune performance. And continuous testing is possible because the available options for testing are nearly unlimited.

As noted above, A/B testing can be used to evaluate just about any digital marketing asset, including:

  • emails
  • newsletters
  • advertisements
  • text messages
  • website pages
  • components on web pages
  • mobile apps

A/B testing plays an important role in campaign management since it helps determine what is and isn’t working. It shows what your audience is interested in and responds to. A/B testing can help you see which element of your marketing strategy has the biggest impact, which one needs improvement, and which one needs to be dropped altogether.

So now that we’ve talked about why you should A/B test, let’s consider two criteria for when to test.

  • You have a digital marketing campaign or element that’s not performing at optimal levels and—therefore—is not meeting expectations. A/B testing can be used to isolate the performance problem and drive performance higher.
  • You’re about to launch something new (web page, email campaign), and you’re not sure which approach (such as messaging) will perform best. Proactive use of A/B testing will allow you to compare and contrast the performance of two different approaches to identify the better one.

Benefits of running A/B tests on your website

Website A/B testing provides a great way to quantitatively determine the tactics that work best with visitors to your website. You may simply be validating a hunch, or your hunch may be proven wrong; either way, there is an upside because you won't stick with something that isn't working. You'll attract more visitors who will spend more time on your site and click more links.

By testing widely used website components/sections, you can make determinations that improve not only the test page but other similar pages as well.

How do you perform an A/B test?

A/B testing isn’t difficult, but it requires marketers to follow a well-defined process. Here are the nine basic steps:

The fundamental steps to planning and executing an A/B test

  • 1. Measure and review the performance baseline
  • 2. Determine the testing goal using the performance baseline
  • 3. Develop a hypothesis on how your test will boost performance
  • 4. Identify test targets or locations
  • 5. Create the A and B versions to test
  • 6. Utilize a QA tool to validate the setup
  • 7. Execute the test
  • 8. Track and evaluate results using web and testing analytics
  • 9. Apply learnings to improve the customer experience

Following the above steps—with clear goals and a solid hypothesis—will help you avoid common A/B testing mistakes.
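As an illustration of how the early steps might be captured before launch, here is a minimal, hypothetical test-plan sketch in Python. The field names and example values are assumptions for illustration, not a required format or any specific tool's API.

```python
from dataclasses import dataclass, field

@dataclass
class ABTestPlan:
    """Captures the baseline, goal, and hypothesis (steps 1-5) in one place."""
    name: str
    baseline_conversion_rate: float   # step 1: measured current performance
    goal: str                         # step 2: what the test should improve
    hypothesis: str                   # step 3: why version B should win
    target_location: str              # step 4: page or component under test
    variants: dict = field(default_factory=lambda: {"A": "control", "B": "variant"})

plan = ABTestPlan(
    name="homepage-cta-test",
    baseline_conversion_rate=0.042,
    goal="Raise signup conversion rate from 4.2% to 5%",
    hypothesis="A benefit-led CTA will convert more visitors than a generic 'Submit'",
    target_location="homepage hero section",
    variants={"A": "Submit", "B": "Start your free trial"},
)
print(plan.goal)
```

Writing the baseline, goal, and hypothesis down in one place makes it easier to judge the results against what you set out to prove.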

Tests will provide data and empirical evidence to help you refine and enhance performance. Using what you’ve learned from A/B testing will help you make a bigger impact, design a more engaging customer experience, write more compelling copy, and create more captivating visuals. As you continuously optimize, your marketing strategies will become more effective, increasing ROI and driving more revenue.

A/B testing examples

Digital marketing elements that can be tested include:

  • Navigation links
  • Calls to action (CTAs)
  • Design/layout
  • Copy
  • Content offer
  • Headline
  • Email subject line
  • Friendly email “from” address
  • Images
  • Social media buttons (or other buttons)
  • Logos and taglines/slogans

Your business goals, performance objectives and baseline, and current marketing campaign mix will help you determine the best candidates to test.

The role of analytics in website A/B testing

Throughout the lifecycle of any A/B test, analytics is at the heart of planning, execution, and performance recommendations.

The development of a test hypothesis requires a strong foundation in analytics. You need to understand current performance and traffic levels. In web analytics, for example, there are key data points your analytics system will provide during the planning process, including:

  • Traffic (page views, unique visitors) to the page, component, or other element being reviewed for test scenarios
  • Engagement (time spent, pages per visit, bounce rate)
  • Conversions (clicks, registrations, fallout)
  • Performance trended over time

Without this grounding in analytics, any test scenario or performance assessment will likely be based on personal preferences or impressions. Testing will often prove those assumptions to be incorrect.
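As a concrete illustration of that grounding, the sketch below derives a few of the data points listed above (page views, unique visitors, time spent, and a conversion-rate baseline) from a raw event export. The column names and figures are invented for illustration; your analytics system will expose these numbers in its own way.

```python
import pandas as pd

# Hypothetical page-view export: one row per page view.
events = pd.DataFrame({
    "visitor_id": ["v1", "v1", "v2", "v3", "v3", "v3"],
    "page": ["/pricing"] * 6,
    "seconds_on_page": [30, 45, 5, 60, 20, 15],
    "converted": [False, True, False, False, False, True],
})

baseline = {
    "page_views": len(events),
    "unique_visitors": events["visitor_id"].nunique(),
    "avg_seconds_on_page": events["seconds_on_page"].mean(),
    # Conversion rate per unique visitor, a typical planning baseline.
    "conversion_rate": events.groupby("visitor_id")["converted"].any().mean(),
}
print(baseline)
```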

Once an A/B test launches, analytics also plays a central role. A dashboard is used to monitor performance metrics in real time, to validate that the test is operating as expected, and to respond to any anomalies or unexpected results. This can include stopping the test, making adjustments and restarting, and ensuring performance data reflects any changes as well as the timing of those changes. The performance dashboard helps determine how long to keep the test running and helps ensure that statistical significance is achieved.

After the test has run its course, analytics are the basis for determining next steps. For example, they can be used to decide if the test’s winner becomes the standard presentation on the website page that was tested and whether it becomes an ongoing standard. Marketers should develop a reusable analytics template to convey test results and adapt that template to reflect the specific elements of a given test.


How to interpret A/B test results

It’s important to establish goals while planning a test so you can evaluate the results, determine a winner, and update your marketing campaign and/or website to reflect the winning outcome. In many situations, an audience is pre-segmented with a holdout group that will receive the winning version of a message.

The test results will indicate the success of one element over another based on what you’ve decided to measure, such as:

  • number of visitors
  • open rates
  • click-through rates
  • signups (for newsletters, etc.)
  • subscriptions

During the test, the two versions are monitored until a statistically significant measurement is achieved.
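As a hedged illustration of what a statistically significant result can look like, here is a minimal two-proportion z-test on conversion counts. The traffic and conversion numbers are invented, and in practice most testing platforms compute significance for you; this sketch simply shows the underlying arithmetic.

```python
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return (uplift, p_value) comparing B's conversion rate to A's."""
    rate_a, rate_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (rate_b - rate_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided test
    uplift = (rate_b - rate_a) / rate_a
    return uplift, p_value

# Hypothetical results: 480/10,000 conversions for A vs. 560/10,000 for B.
uplift, p = two_proportion_z_test(480, 10_000, 560, 10_000)
print(f"uplift: {uplift:.1%}, p-value: {p:.3f}")  # significant if p < 0.05
```

A p-value below your chosen threshold (commonly 0.05) indicates the observed difference is unlikely to be due to chance alone.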

Conversion rates can also be measured in terms of revenue. You might consider sales numbers along with the impact of a change on actual sales revenue. Remember that conversion rates can be captured for any measurable action and are not restricted to ecommerce sites and sales. They can include:

  • sales
  • leads generated/registrations submitted
  • newsletter signups
  • clicks on banner ads
  • time spent on the site

What metrics should you pay attention to when it comes to A/B testing?

The answer to that question depends on your hypothesis and goals. However, you should focus on metrics that indicate how engaged your audience is with your marketing content.

If you are testing a web page, look at the number of unique visitors, return visitors, how much time they are spending on the page, as well as the bounce and exit rates. For email marketing, you will want to see who opens the email and clicks through to your CTAs.

What is multivariate testing? How is it different from A/B testing?

Multivariate testing is often discussed hand-in-hand with A/B testing, so it’s important to understand what multivariate testing is and how it differs from A/B testing. The two are related disciplines, but there are distinct differences.

Multivariate testing compares different content for multiple elements (vs. a single element in A/B testing) across one or more website pages or email marketing campaigns to identify the combination that yields the highest conversion rate.

Multivariate testing applies a statistical model to test combinations of changes that result in an overall winning experience and website optimization. Below are several key traits of multivariate testing:

1. Wide range of elements

Multivariate tests are performed for a range of website/email changes, including all parts of an offer—such as images, text, color, fonts, links, and CTA buttons—along with content and layout for landing pages or processes such as checkout. It is not uncommon for a multivariate test to involve 50 or more combinations.

2. From hypothesis to results

Multivariate testing starts with a hypothesis regarding content changes that could improve conversion rates. With multivariate testing, content changes can be broken up into multiple individual elements to determine the combinations that yield the highest conversion rates. Whether the changes to the user experience are slight or significant, either can impact overall results.

3. Conversion rates

Conversion rate is the rate at which visitors perform a desired action, such as clicking on an offer or adding products to their cart. Additional metrics are used to evaluate the test, such as revenue per order or click-through rate. Analytics tell you which combination of changes yielded the best results based upon the conversion rate or uplift in the metrics you defined.
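As a small illustration of that evaluation, the sketch below ranks hypothetical test combinations by conversion rate and reports the winner's uplift over the control. The combination names and numbers are invented.

```python
# Hypothetical per-combination results from a multivariate test.
results = {
    "control (image 1 + headline 1)": {"visitors": 5000, "conversions": 210},
    "image 1 + headline 2":           {"visitors": 5000, "conversions": 245},
    "image 2 + headline 1":           {"visitors": 5000, "conversions": 230},
    "image 2 + headline 2":           {"visitors": 5000, "conversions": 290},
}

# Conversion rate = conversions / visitors for each combination.
rates = {name: r["conversions"] / r["visitors"] for name, r in results.items()}
control_rate = rates["control (image 1 + headline 1)"]
winner = max(rates, key=rates.get)
uplift = (rates[winner] - control_rate) / control_rate
print(f"winner: {winner}, rate {rates[winner]:.1%}, uplift {uplift:.1%}")
```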

4. Continuous optimization

Because you can define a business goal and let the test determine which experience best achieves it, consider letting the software optimize experiences automatically during a test.

Can you run A/B and multivariate tests on iOS and Android apps?

In 2020, mobile apps accounted for $2.9 trillion in ecommerce spend, a number expected to grow by another $1 trillion by the end of 2021. And the growth extends beyond retail and ecommerce. The mobile share of total online traffic continues to grow much faster than desktop's, since in many countries mobile phones are more accessible than laptops. So in more and more cases, an iOS or Android app starts and ends the customer’s buying journey. But given the small screen, the cart abandonment rate is higher on mobile (87 percent) than on desktops/laptops (73 percent).

So ensuring that your mobile experience is optimized is more important than ever, but given the limitations around iOS and Android apps, you need the right tools.


Visitor segmentation and segment clustering in multivariate testing

One experience may not be right for all visitors/recipients. An important benefit of multivariate testing is the ability to identify visitor segments and how they perform/interact with different experiences. For example, you may determine that new visitors prefer a different experience than repeat visitors—and this can garner better overall results. More sophisticated systems will automatically suggest visitor segmentation to reduce the time needed to analyze the test results against hundreds of visitor attributes.

Targeting different experiences for different visitor segments can substantially increase your conversion rates. Target them based on a wealth of visitor attributes—from environmental attributes to behaviors—and include customer attributes from other systems, such as your CRM system.

When to A/B test or multivariate test? That is the question.

An A/B test is a great tool, but if there are more than two options that need to be tested to determine the “best experience,” you will likely want to run a multivariate test instead of an A/B test.

A/B tests with more than two options take longer to run, and they still will not reveal anything about how variables on a single page interact. However, A/B testing is very easy to understand, and it can be a good way to introduce the concepts of website and campaign optimization to skeptics or to show the measurable impact of a design change or tweak.

Multivariate testing is extremely useful for an asset (website page or email) where several elements need to be compared—for example, different combinations of images and catchy titles. However, with more options comes the need for higher traffic. So you don’t want to test everything on the page. When too many page elements change, it leads to an impossibly high number of combinations. For example, running a test on 10 different elements can lead to over three and a half million permutations. Most websites and email campaigns would struggle to find the traffic to support that.
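To show why the combination count explodes, here is a short arithmetic sketch. The variant counts per element are assumptions; the "over three and a half million" figure above is consistent with 10!, the number of ways to arrange 10 distinct elements.

```python
from math import factorial, prod

# The number of multivariate combinations is the product of the variant
# counts for each element under test (assumed counts, for illustration).
variants_per_element = [4] * 10                 # 10 elements, 4 variants each
print(f"{prod(variants_per_element):,} combinations")   # 1,048,576

# Arranging 10 distinct elements gives 10! permutations, over 3.5 million.
print(f"{factorial(10):,} permutations")                 # 3,628,800
```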