4 Out Of 5 Conversion Experts Prefer A/B Testing

Earlier this month, I presented at the inaugural Conversion Conference organized by landing page expert Tim Ash. Never before have I seen so many conversion optimization thought leaders in one spot: Bryan Eisenberg, Jeffrey Eisenberg, Chris Goward, Lance Loveday, Sandra Niehaus, Jakob Nielsen, Brooks Bell, Jonathan Mendez, Khalid Saleh, Howard Kaplan, Eric Hansen, and many more.

I was inspired by all of them, but perhaps the most surprising discovery was how many of these conversion optimization pros favor simple A/B testing over more advanced multivariate testing (MVT). My informal survey: 4 out of 5 prefer it. More than that, they passionately champion it.

Most of these experts are tool agnostic—they don’t care about selling MVT or A/B testing technologies. All they care about is delivering sustained conversion rate improvements to their clients. So given complete freedom to use either method, why would so many of them choose A/B testing?

It’s not the size of your test that matters

Surely testing more variations must be better than testing fewer, right? As Lance Loveday of Closed Loop Marketing eloquently put it, “Not if all the variations you test suck.”

Proponents of MVT often boast that they can test thousands of variations of a page. But the important question is: how many of those variations are meaningful? Bryan Eisenberg, arguably the godfather of conversion optimization, rails against such blind slice-and-dice optimization, claiming that it burns people out on testing without delivering results.

The gist of his critique: throwing thousands of combinations against the wall—a mishmash of headlines, subheads, images, and calls to action—isn’t marketing; it’s playing the lottery.

The antidote, according to Lance Loveday, is TAGO: Test Among Good Options. “A/B testing harnesses the power of large changes, not just tweaking colors or headlines. We like to start all engagements with A/B testing because that’s where we see the biggest gains.”

What are large changes? Fellow Conversion Science columnist Sandra Niehaus recommends that you start by asking the right questions. What is the business goal? What audience is this for? What is the offer? What is the desired user action? Use these questions to develop hypotheses about how to persuade users, and then craft a handful of compelling alternatives to test those hypotheses.

Such hypothesis-driven A/B testing is more like poker than the lottery. You play your hand deliberately.

An additional benefit of testing fewer alternatives is that you reach statistical significance, and therefore a winner, more quickly. This is particularly valuable for sites with relatively low traffic, but even high-traffic sites appreciate faster results.
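
To see the scale of the difference, here is a minimal sketch (plain Python, using the standard normal-approximation formula for comparing two conversion rates; the baseline rate and lift figures are hypothetical) of how many visitors each variant needs before a test can declare a winner:

```python
from statistics import NormalDist

def sample_size_per_variant(p_base, p_variant, alpha=0.05, power=0.80):
    """Approximate visitors needed per variant to detect the difference
    between two conversion rates (two-sided z-test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    variance = p_base * (1 - p_base) + p_variant * (1 - p_variant)
    return (z_alpha + z_power) ** 2 * variance / (p_base - p_variant) ** 2

# Hypothetical 3% baseline conversion rate.
# A bold change worth a 50% relative lift resolves quickly...
print(round(sample_size_per_variant(0.03, 0.045)))   # ~2,500 visitors per variant
# ...while a small tweak worth a 5% relative lift takes orders of magnitude longer.
print(round(sample_size_per_variant(0.03, 0.0315)))  # ~208,000 visitors per variant
```

And the fewer variants you run, the more of your traffic each one receives, so a focused A/B test crosses that threshold far sooner than a sprawling multivariate grid.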

Think holistically, not combinatorially

We humans are simply not good at thinking combinatorially. If you give me half a dozen each of headlines, subheads, images, and calls to action, it’s impossible for me to picture the 1,296 (6 × 6 × 6 × 6) different combinations in my head. But if you show me six well-chosen alternative pages in their entirety, I can quickly evaluate them to make sure they’re good.

By “good,” I mean that at the very least the elements on the page make sense together and that there aren’t any nonsensical or brand-damaging combinations accidentally in the mix.
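
You can check that combinatorial arithmetic, and see exactly what an MVT tool enumerates behind the scenes, with a few lines of Python (the element lists are hypothetical placeholders):

```python
from itertools import product

# Six hypothetical options for each page element.
headlines = [f"headline_{i}" for i in range(6)]
subheads = [f"subhead_{i}" for i in range(6)]
images = [f"image_{i}" for i in range(6)]
ctas = [f"cta_{i}" for i in range(6)]

# Every possible page an MVT tool could assemble from those elements.
combinations = list(product(headlines, subheads, images, ctas))
print(len(combinations))  # 1296: far more than anyone can review holistically
```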

See, to conduct tests that have a significant impact on your lift, you want to experiment with bold ideas, which can be quite different from each other. But randomly mixing and matching components of bold ideas can inadvertently spawn a Frankenstein. (That’s why MVT is often more effective in a follow-up round, after A/B testing, when the variations of elements are more similar: less risk, but less potential gain too.)

By reviewing pages for an A/B test holistically, you can be bold and safe.

A/B testing lets you be bold not just with individual components, but with the entire page as a whole. You can experiment with different layouts and content structures. Perhaps version A tells its story in pictures, with a series of images, while version B presents its case with bullets and advertorial. That sort of comparison would be almost impossible to concoct with MVT.

A/B testing also makes it easy to manage “matched sets” of elements on a page. For instance, you might test offering respondents two, three, or five choices, and you want the headline and body copy to stay consistent with the number of options presented.

Finally, A/B testing lets you go beyond testing individual pages and experiment with multi-page paths. For example, version A might have your entire pitch and call to action on a single landing page, while version B might split the experience into a two-step “conversion path.” This can be one of the most powerful kinds of tests, as it lets you examine the effectiveness of behavioral segmentation—leveraging people’s clickstreams to tailor what they see.
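
To illustrate why whole pages, matched sets, and multi-page paths all fit naturally into an A/B framework, here is a minimal sketch: each variant is a self-consistent bundle, and visitors are bucketed deterministically so they always see the same experience. The variant definitions and hashing scheme are illustrative assumptions, not any particular testing tool’s API.

```python
import hashlib

# Each variant is a complete, internally consistent experience:
# layout, matched headline/body copy, even the number of steps in the path.
VARIANTS = {
    "A": {"layout": "single_page", "steps": ["landing_with_full_pitch"]},
    "B": {"layout": "two_step_path", "steps": ["teaser_landing", "tailored_pitch"]},
}

def assign_variant(visitor_id: str) -> str:
    """Deterministically bucket a visitor so repeat visits get the same variant."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % len(VARIANTS)
    return sorted(VARIANTS)[bucket]

variant = assign_variant("visitor-42")
print(variant, VARIANTS[variant]["steps"])
```

Because the test randomizes over whole bundles rather than individual elements, nothing stops a bundle from spanning multiple pages.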

Agility is the key to conversion optimization

However, conversion optimization experts know that there’s an upper bound to what you can do with a single page. Trying to optimize one page to serve the needs of your entire audience limits you to their common denominators, which might be pretty low. It’s the classic dilemma of pleasing nobody by trying to please everyone.

Trying thousands of combinations rarely breaks that ceiling.

The solution is to craft more specific landing pages for your different audience segments for the different contexts in which they arrive. Each one of these more targeted pages can authentically engage with its respondents, speaking directly to their particular needs, desires, and concerns. That’s how you really juice your conversion rate.

But to execute on this strategy (“let a thousand landing pages bloom!”), you can’t get hung up on any one particular page. You want to follow the Pareto principle and get 80% of the benefit of landing page optimization with 20% of the effort. Then you move on to the next page. And then the next one.

The smaller scale of A/B testing tends to suggest a rational boundary for the effort worth investing in any one page. You can generally tell when your iterations have shifted from big ideas to small tweaks; if you’re focused on the big picture of agile marketing, that’s your cue to move on and explore new territory.

Put another way, be obsessed with testing overall, not obsessed with any one page.

What about the 1-in-5 experts who prefer MVT?

Hey, MVT is an incredibly powerful technique in its own right. As I mentioned earlier, it’s often ideal for follow-up optimization on the winner from an A/B test, once you’ve narrowed the field.

Think of the scale in your doctor’s office, the one with the two sliding weights. Generally, A/B testing is like moving that big 100-pound weight one (or two) notches, while MVT is like moving the smaller 1-pound weight. Sliding that smaller weight around still makes a difference (in both conversion optimization and the prognosis from your doctor).

And there are cases where, if you structure your tests properly, you can harness the combinatorial power of MVT to test a large range of mutually compatible possibilities and uncover the interaction effects and relative influence of individual elements. A/B testing isn’t suited for that type of experimentation at all.
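
To make “interaction effects” concrete, here is a toy 2×2 factorial sketch; the cell conversion rates are hypothetical, purely to show the arithmetic an MVT analysis performs:

```python
# Toy 2x2 factorial design: two headlines crossed with two images.
# Conversion rates per cell are hypothetical illustrations.
rates = {
    ("h0", "img0"): 0.030,
    ("h0", "img1"): 0.034,
    ("h1", "img0"): 0.041,
    ("h1", "img1"): 0.052,
}

# Main effect of the headline: average lift from h0 -> h1 across both images.
headline_effect = ((rates[("h1", "img0")] - rates[("h0", "img0")]) +
                   (rates[("h1", "img1")] - rates[("h0", "img1")])) / 2

# Interaction: does the image change how much the headline helps?
interaction = ((rates[("h1", "img1")] - rates[("h0", "img1")]) -
               (rates[("h1", "img0")] - rates[("h0", "img0")]))

print(f"headline main effect: {headline_effect:+.3f}")      # +0.015 (rounded)
print(f"headline x image interaction: {interaction:+.3f}")  # +0.007
```

A pure A/B test compares whole bundles, so it can tell you which page won, but not how the headline and image influenced each other; that decomposition is where a properly structured MVT shines.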

My point is not that you shouldn’t use MVT, but rather that you shouldn’t treat it as inherently better than A/B testing. Because in many circumstances, it’s not—and it can actually hold you back from more important tests. That’s why the pros aren’t embarrassed to profess their love of A/B testing. They love what works.

Remember: it’s not the sophistication of your mathematics that matters so much as the sophistication of your marketing.


About the author

Scott Brinker
Contributor
Scott serves as the VP platform ecosystem at HubSpot. Previously, he was the co-founder and CTO of ion interactive.
