How to Use A/B Testing to Improve Conversions

As you move through incremental product cycles and gather more validated learning, the information you collect won’t always be precise. This is especially true if your information comes from qualitative sources like Net Promoter Score surveys (although it’s possible to turn NPS surveys into quantitative data; see the “Categorization” subheading on this page).

For instance, let’s say you’ve learned of a large group of customers who complain that your SaaS tool lacks a reporting feature… even though you have one. You suspect the complaints arise because the link to the reporting feature isn’t in an obvious place.

You could change your app so the menu links are more prominent, but how would you know if your solution satisfied your customers? Well, you could survey them again to see if they like the new design, but that would take time and more participation from your customers. (They won’t answer endless surveys, so you have to make your questions count.)

In this case, the solution is A/B testing (sometimes called split testing).

Download this free list of web/app elements you should split test first.

What is A/B testing?

A/B testing is the practice of comparing two versions of a web page, application screen, or element to determine which performs better. You compare them by showing one version to half of your user base and the other version to the other half. Whichever version performs better against your desired outcome is the one you should roll out to all of your users.

Using our example, you would show 50% of your users the current design with the hard-to-find menu, while the other 50% would see a new design you believe addresses the problem (in this case, a clearer menu). If more people click on the reporting feature with the new design, you can conclude that it’s superior to the old version and make it available to everyone.
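To make the 50/50 split concrete, here’s a minimal sketch of how users might be assigned to variants. Hashing the user ID together with an experiment name keeps the assignment stable across sessions; the function and experiment name here are illustrative, not from any particular testing library.

```python
import hashlib

def assign_variant(user_id: str, experiment: str = "menu-redesign") -> str:
    """Deterministically assign a user to variant A or B.

    Hashing the user ID together with an experiment name keeps the
    split stable: a returning user never flips between designs.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) / 16 ** len(digest)  # uniform in [0, 1)
    return "A" if bucket < 0.5 else "B"

# The same user always lands in the same bucket:
assert assign_variant("user-42") == assign_variant("user-42")
```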


The beauty of A/B testing is that you can test any part of your application if you have some customers using it. Furthermore, the data you get back is entirely actionable.

In many cases, developers and marketers A/B test elements of an application even when no customer insight is driving the test. No one complains about button colors, but marketers and developers will still test several versions of a button to see which one draws the most clicks.

An A/B test, however, takes time. You need to build the variation and give it time to run so data can be collected. Once you have a result, you may want to run another test on the same element or page to further refine it. “If you test your hypothesis only once, you’re doing it wrong,” says Peep Laja at Conversion XL.
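Before acting on a result, it’s worth checking that the difference between your two groups is statistically significant rather than noise. Here’s a rough sketch using a standard two-proportion z-test in plain Python; the click counts are made up for illustration.

```python
from math import erf, sqrt

def two_proportion_z_test(clicks_a, users_a, clicks_b, users_b):
    """Two-sided z-test: is the difference between two click rates real?"""
    p_a, p_b = clicks_a / users_a, clicks_b / users_b
    pooled = (clicks_a + clicks_b) / (users_a + users_b)
    se = sqrt(pooled * (1 - pooled) * (1 / users_a + 1 / users_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Made-up numbers: 4.0% vs. 5.2% of users clicked the reporting link.
z, p = two_proportion_z_test(200, 5000, 260, 5000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p < 0.05 -> likely a real lift
```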

So if you’re looking to make fast changes to your SaaS tool, you need multivariate testing.

What is multivariate testing?

Multivariate testing is A/B testing at scale. It allows you to test multiple variables at one time to speed up your testing cycles.

In a multivariate test, the variables you’re testing are arranged in every possible combination, and each combination is displayed to a portion of the website or app’s traffic.

Optimizely explains it well: “A multivariate test will change multiple elements, like changing a picture and headline at the same time. Three variations of the image and two variations of the headline are combined to create six versions of the content, which are tested concurrently to find the winning variation.”
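In code, generating those combinations is straightforward. A small sketch (with hypothetical image filenames and headlines) that mirrors the three-images-by-two-headlines example above:

```python
from itertools import product

# Hypothetical variations mirroring the 3-image x 2-headline example.
images = ["hero_chart.png", "hero_team.png", "hero_dashboard.png"]
headlines = ["Find reports fast", "Your data, one click away"]

# Every image/headline pairing becomes one concurrently tested version.
variations = list(product(images, headlines))
for i, (image, headline) in enumerate(variations, start=1):
    print(f"Version {i}: image={image}, headline={headline!r}")

print(len(variations), "versions tested concurrently")  # 6
```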

The benefit of multivariate testing is that you can optimize a bunch of elements at one time to quickly improve a web page or application screen. Think of it like running multiple iterative cycles at one time. Smashing Magazine has an excellent post on how to do it properly.

However, there’s a challenge here. In order to run a multivariate test, you need enough traffic so each variation receives enough visitors to collect reliable data.

A website that receives a million views a month will have no problem running six variations. Each group still receives enough visitors to support confident conclusions. But a website that only gets 1,000 views a month won’t provide a sample size for each variation that’s large enough to reach a statistically significant result.
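As a rough check before committing to a multivariate test, you can estimate how many visitors each variation needs. This sketch uses the standard normal-approximation sample-size formula with the conventional 5% significance level and 80% power; the baseline and lift numbers are illustrative.

```python
from math import ceil, sqrt

def sample_size_per_variation(baseline: float, lift: float) -> int:
    """Visitors needed per variation to detect `lift` over `baseline`
    with a two-sided test at alpha = 0.05 and 80% power
    (normal approximation; z values 1.96 and 0.84 hard-coded)."""
    z_alpha, z_power = 1.96, 0.84
    p1, p2 = baseline, baseline + lift
    p_bar = (p1 + p2) / 2
    n = ((z_alpha * sqrt(2 * p_bar * (1 - p_bar))
          + z_power * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2) / lift ** 2
    return ceil(n)

# Detecting a lift from a 4% to a 5% click rate takes roughly
# 6,700 visitors per variation -- times six variations, that's
# far beyond the reach of a site with 1,000 views a month.
print(sample_size_per_variation(0.04, 0.01))  # ~6738
```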

Split testing can’t lead product development

While split testing techniques like A/B testing and multivariate testing are excellent ways to optimize a product, they aren’t sufficient tools to lead your iterative development.

First of all, split testing can only return information if you have people using the product or feature. If you introduce a new feature, you’ll have no baseline to compare it to. So A/B or multivariate testing can’t directly help you generate new features or improvements (but the learning you obtain might spark some ideas).

Second, split testing won’t tell you whether users dislike a feature itself or just its implementation. If 0% of your customers use Feature X, does that mean they don’t like its placement, colors, or copy? If all of your tests return 0% usage, it might mean they simply don’t find value in the feature at all. There’s no way to know for sure unless you collect data another way or tweak the feature over and over until people start using it (which is not an efficient plan).

Josh Hannah, venture capitalist and entrepreneur, doesn’t think split testing is a substitute for product vision. “Human beings are probably the ultimate result of great product development through split testing – evolution is one giant iterative experiment,” he says. “Unfortunately for Internet startups it took millions of years for life as we know it to develop, and even if we reduce the test cycle time a few orders of magnitude, the ‘development cycle’ is beyond our comprehension.”

Learning from the results

Whether or not a split test achieves your desired result, you should always learn something. Sometimes that learning is as simple as “our users don’t like this,” but often you’ll be able to infer a reason that explains why performance fell short.

Going back to our earlier example of the hard-to-find reporting feature: if usage of the feature increased after the change, we can conclude that more people were able to find it. Keep in mind that our initial insight came from survey complaints, so we would continue to monitor surveys to make sure the problem was actually solved.

As you complete tests, use past learning to influence future testing. For example, if your split testing teaches you that your customers are primarily concerned with three particular metrics on your app’s dashboard, you should ask yourself why those metrics are so important. The conclusion you draw could lead to a better understanding of your customer that influences your other development decisions.

Free download: 54 Website/App Elements You Should A/B Test Immediately

Getting started

If you’re bootstrapping and don’t have an in-house developer who can create multiple versions of the same web pages or app screens, you’ll need a third-party tool to start A/B and multivariate testing. Some great tools are Optimizely, VWO, and Google Experiments.

Gathering information from customers about what they like and don’t like about your application is why we created Ask Inline. It automates the hard part of collecting feedback. Get started for free.
