How To Construct Rational Landing Page Tests



All landing page tests are not created equal. What you test on your pages—and what you learn from those tests—can better inform tactics and strategies throughout your entire marketing program. Here are four kinds of landing page tests that can help you learn about your market.

Beware butterflies and magic bullets

How much can you learn from landing pages?

Some landing page optimization experts will warn you about reading too much into the results of a particular landing page test. There are often multiple factors at play in a given experiment, and it can be difficult to precisely separate the different effects.

It’s a valid point, but taken to an extreme it becomes an argument for the “butterfly effect”—that a butterfly flapping its wings in Thailand might trigger an elaborate chain of events that dramatically alters the outcome of your experiments.

While that’s a fun philosophical debate, it’s not a practical position. As marketers exploring new ideas, we always have to deal with uncertainty—the secret of success is making educated guesses based on empirical—albeit imperfect—information.

On the opposite extreme, other folks claim that there are universal recipes that improve conversion rates in all situations—so-called “magic bullets.” These aren’t general best practices, such as “employ good visual design,” but rather specific formulas such as “use a green background,” “have an image of a smiling person,” and “include three one-line bullets.”

Warning bells should go off in any marketer’s head when they hear such one-size-fits-all recommendations without regard for the particulars of audience, market, or brand.

Rational landing page optimization

The middle ground between those extremes is what I dub the “rational” school of landing page optimization. There are three premises behind this approach:

  • Different audiences, markets, brands, and campaigns have different characteristics.
  • All tests are not equal: different kinds of tests reveal different kinds of insights.
  • All confounding variables are not equal: some are more controllable, some have more influence.

The first premise dismisses the magic bullet approach. Selling a subscription to a pop music service is different from generating leads for a network storage solution. Even the network storage solution is—or should be—marketed differently to small and midsize businesses (SMBs) versus large enterprises. Those audiences have different desires, pain points, demographics/firmographics, etc.

As you dig deeper, you identify more and more segments in your market that respond to different marketing messages and presentations.

In rational landing page optimization, you embrace such segmentation in your marketing programs. After all, the big advantage of targeted search marketing with matched landing pages is that you can authentically engage different audience segments at the very top of your sales funnel.

This leads to three rules of thumb:

  • Treat each segment as its own experimental space—look for learning within a segment.
  • Don’t try to create “one page to rule them all”—pages are cheap; customers are valuable.
  • Iteratively narrow your segments as long as doing so produces ROI—the digital world often rewards deep segmentation.
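
To make the first rule of thumb concrete, here is a minimal sketch (plain Python, with made-up segment names and numbers) of reading one test’s results pooled versus per segment. A variant that looks like a modest overall win can hide a strong win in one segment and an outright loss in another; the segment-level reading is the learning worth carrying forward.

```python
# Hypothetical results for a single landing page test, split by audience
# segment. Segment names and counts are illustrative only.
results = {
    ("smb",        "control"): (1000, 50),   # (visitors, conversions)
    ("smb",        "variant"): (1000, 80),
    ("enterprise", "control"): (1000, 60),
    ("enterprise", "variant"): (1000, 40),
}

# Pooled view: sums across segments, which is how results are often (mis)read.
pooled = {}
for (segment, variant), (visitors, conversions) in results.items():
    v, c = pooled.get(variant, (0, 0))
    pooled[variant] = (v + visitors, c + conversions)

for variant, (v, c) in sorted(pooled.items()):
    print(f"pooled      {variant:8s} {c / v:.1%}")

# Segmented view: each segment treated as its own experimental space.
for (segment, variant), (v, c) in sorted(results.items()):
    print(f"{segment:11s} {variant:8s} {c / v:.1%}")
```

In this made-up data the pooled numbers show the variant nudging conversion from 5.5% to 6.0%, while the segmented view shows it lifting SMB from 5.0% to 8.0% and hurting enterprise. Those are two very different lessons from the same test.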

With such segmentation in mind, you can then consider four different kinds of tests—trivial, contextual, tactical, and strategic—categorized by how much reusable learning they can provide:

(Figure: Rational Landing Page Optimization)

Trivial tests. In this model, tests of different headlines or page colors are mostly trivial as far as reusable learning is concerned. That’s not to say such elements can’t have a significant impact in a specific test—I’ve seen headline changes generate a 50% lift—but rather that such factors are hard to adapt reliably from one set of circumstances to another.

Contextual tests. It’s at the next level up—with contextual elements—that you can start to form meaningful hypotheses. How much impact do seasonal themes have on your conversion rate? How much does the degree of specificity between the ad and the landing page affect the outcome?

To a certain extent, contextual tests are about discovering what would otherwise be confounding variables and systematically testing them. There’s still circumstantial sensitivity here, but useful and reusable patterns can emerge. For instance, is it worth tailoring your landing pages for seasonal factors?

Tactical tests. Higher yet are experiments to identify winning tactics within a segment. Tactical tests include things such as different offers, different levels of “depth” in the format of the landing experience (one page? a multi-step path? a microsite?), different data collection requirements in forms, etc.

These differences often have economic implications for both the respondent and the marketer—such as trading off the value of collecting additional information against the friction that a longer form imposes on the conversion process. In my experience, these types of tactics—once winning ones have been discovered—have a relatively high degree of portability from one landing page to another, at least within a segment for a particular company. You can learn what works here.
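
To illustrate that trade-off with a toy example (the conversion rates and lead values below are assumptions, not benchmarks), you can compare form variants on expected value per visitor rather than on conversion rate alone. A longer form that converts fewer visitors can still win if the leads it produces are worth enough more.

```python
# Hypothetical tactical test within one segment: two form lengths.
# Longer forms usually convert fewer visitors but yield better-qualified
# leads; all numbers here are illustrative assumptions.
forms = {
    "short form (3 fields)": {"conversion_rate": 0.080, "value_per_lead": 40.0},
    "long form (9 fields)":  {"conversion_rate": 0.045, "value_per_lead": 90.0},
}

for name, f in forms.items():
    value_per_visitor = f["conversion_rate"] * f["value_per_lead"]
    print(f"{name}: ${value_per_visitor:.2f} expected value per visitor")
```

With these assumed numbers the long form wins ($4.05 vs. $3.20 per visitor) despite converting far less often, which is exactly the kind of economic comparison a tactical test is meant to settle.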

Strategic tests. At the highest level are strategic tests to identify new audience segments, the overarching value proposition for each segment, and the granularity of sub-segments within them. In multi-step landing pages, these tests may be conducted with different segmentation choices.

Often, however, strategic testing is about determining how many completely separate pages are optimal within a marketing program, each matched to a different slice of the audience. Homing in on new segments is possibly the greatest payoff from structured landing page testing, as those insights are useful not just in future landing pages but in other marketing vehicles as well.

If you disagree with what I’ve put in each category, feel free to adjust them to your own hypotheses—perhaps in your case color is a tactical choice? My overarching point is to construct tests with these different learning objectives in mind.

Measuring success

With a nod to the butterfly effect folks, it’s true that the reusable learning from these tests is hard to quantify precisely. However, rigorously dissecting each factor’s contribution is not really your goal.

In rational landing page optimization, I would assert the following:

  • Most tests are hypothesis-driven—you’re testing an idea from the start, not trying to fit explanations to the data after the fact. This is an important distinction.
  • Especially with tactical and strategic tests, the number of simultaneous elements varied in any one test should be minimized, reducing interaction effects.
  • Ideas believed to be portable from one landing page to another will, because of that belief, be tested repeatedly in a variety of circumstances; if they continue to correlate with higher conversion rates over time, that belief is rationally reinforced.
  • Ultimately, the proof is in the pudding—with rational landing page optimization, you expect to sustain improved conversion rates across many pages in many programs over time. If you’re successful by that metric, does it matter if the weights of contributing factors are somewhat fuzzy?
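
As a sketch of the hypothesis-driven, one-variable-at-a-time point, here is a standard two-proportion z-test in plain Python (the significance test is my illustration, not a method the article prescribes, and the numbers are made up). It answers a narrow question: within one segment, with only one element changed, is the observed lift plausibly more than noise?

```python
import math

def two_proportion_z_test(control, variant):
    """Two-sided z-test for a difference in conversion rates.

    Each argument is a (visitors, conversions) tuple; returns (relative
    lift, approximate p-value) using the normal approximation.
    """
    n1, c1 = control
    n2, c2 = variant
    p1, p2 = c1 / n1, c2 / n2
    pooled = (c1 + c2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided
    return (p2 - p1) / p1, p_value

# Hypothetical tactical test in the SMB segment: a new offer vs. the current
# one, with everything else on the page held constant.
lift, p = two_proportion_z_test(control=(1000, 50), variant=(1000, 80))
print(f"relative lift: {lift:+.0%}, p-value: {p:.3f}")
```

Repeating that kind of narrow test across circumstances, as the third bullet describes, is what turns a single result into reusable learning.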

People in general, and marketers in particular, are very good at intuitive pattern recognition—in ways that are, frankly, hard to capture in oversimplified mathematical models. To be sure, this sometimes leads us astray, but more often than not it gives us a competitive edge.

Rational landing page optimization helps develop that intuition within relevant contexts and segments. What you learn won’t always be perfect, but it will give you momentum that can be measured in the net results.




About the author

Scott Brinker
Contributor
Scott serves as the VP platform ecosystem at HubSpot. Previously, he was the co-founder and CTO of ion interactive.
