Pitfalls Of A/B Ad Testing, Part 3


Over the past two months in this column, I’ve discussed some of the pitfalls of A/B ad testing, and in this third and final installment, I’ll discuss a new PPC ad optimization model I’ve been working on and have lovingly entitled the Van Wagner Ad Sets Optimization Model.

The model is completely new and thoroughly untested, but early feedback from readers and colleagues offers strong anecdotal support that it can become an important asset in any PPC campaign manager’s toolkit. I’ll be presenting my Ad Sets Optimization Model more formally at SMX Advanced in Seattle next month, but I’ll give you a general overview here today.

Before jumping into the Ad Sets Optimization Model, there’s some unfinished business from the previous two columns, Pitfalls of A/B Split Testing, Part 1 and Part 2. Last month, I offered the incentive of a chance at a lobster dinner to readers who provided feedback and critique on this ad testing discussion. My grateful thanks to all who chimed in, and congratulations to the Search Engine Land reader whose handle is MMantlo. Please email me and I will arrange to have that lobster dinner delivered to your door, courtesy of Lobster.com.

Ad Sets Optimization Model

The Van Wagner Ad Sets Optimization Model is based on the premise that an ad group containing a set of well-performing ads can outperform an ad group that contains only the single best ad in that ad group.

Some readers and colleagues have confirmed that after completing rounds of A/B testing and settling on one champion ad, they’ve seen unexplained declines in ad group CTR, clicks, and conversions, even after addressing the most common cause of this phenomenon: keyword match types bringing in unfocused search traffic.

How can a set of ads in an ad group outperform the best ad in that ad group?

Or, as one colleague posed the question, “If none of the ads in the set outperforms the champion individually, how can the entire set? This strikes us as analogous to claiming that ten fast men together can be faster than the fastest man in the world.”

Yes, the idea is counterintuitive, but only if you focus narrowly on the problem of finding the best-performing ad rather than on optimizing your ad group. Instead of the fastest-runner analogy, a better analogy may be the Tour de France bike race, where a peloton of ten good riders working together will consistently beat the fastest single rider.

So how can sub-optimal ads work together with the best ad to increase the yield of the ad group they belong to? The simple answer is that they enable the ad group to connect with a wider range of audience needs and desires than a single ad can.

To demonstrate how this works, let’s take an example of an ad group with a single exact match keyword, “blue widgets,” and two finalist ads from our A/B testing.

Here are the ad headlines and performance metrics. Note that while I am mentioning clicks and click-through rates here, the analysis works the same way using conversions and conversion rates.

Ad A: [Save 20% on Blue Widgets] 350 clicks / 5,000 impressions = 7.0% CTR
Ad B: [ECO Friendly Blue Widgets] 450 clicks / 4,000 impressions = 11.3% CTR

According to Verster, that sample is statistically significant, and it’s pretty much a slam dunk: you can be 99% confident that ad B will continue to beat ad A.
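For readers who want to check that math themselves, here is a rough sketch of the significance check using a standard two-proportion z-test. This is just one common way to run the numbers, not necessarily the calculation Verster performs, and the function name is my own:

    from math import sqrt, erf

    def ctr_significance(clicks_a, impr_a, clicks_b, impr_b):
        """Return both CTRs, the z-score, and a two-sided p-value for the CTR difference."""
        ctr_a = clicks_a / impr_a
        ctr_b = clicks_b / impr_b
        # Pooled click rate under the null hypothesis that both ads have the same true CTR
        pooled = (clicks_a + clicks_b) / (impr_a + impr_b)
        se = sqrt(pooled * (1 - pooled) * (1 / impr_a + 1 / impr_b))
        z = (ctr_b - ctr_a) / se
        # Two-sided p-value from the standard normal distribution
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
        return ctr_a, ctr_b, z, p_value

    ctr_a, ctr_b, z, p = ctr_significance(350, 5000, 450, 4000)
    print(f"Ad A CTR: {ctr_a:.1%}   Ad B CTR: {ctr_b:.1%}")
    print(f"z = {z:.2f}, two-sided p = {p:.2g}")

Plugging in the figures above gives a z-score of roughly 7, which is why the difference looks like such a slam dunk on paper.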

But wait, before you declare a winner, think about the audience population your ads are tailored to. In this case, the two ads touch on very different consumer desires. Ad A is designed to appeal to bargain hunters, and ad B is meant to appeal to people interested in green lifestyles. These may be almost mutually exclusive audiences. To cheapskates, eco-friendly generally doesn’t mean cheap, and vice-versa for eco-consumers.

If these ads appeal to non-overlapping audience segments, what happens when you take ad A offline? You lose the entire audience for ad A, which in this case represents more than half of your target audience. That would be a very bad decision!

Instead of making that decision and bisecting your audience, consider running the two ads as a set, with your campaigns set to even ad rotation. The basic tenet of the Ad Sets Optimization Model is that when more of your good ads are seen by more of your target audience, your ad group yield will improve, even if some ads are better than others.

The Ad Sets Optimization Model relies on two things. First, people search on a given search term more than once, anywhere from 2 to 20 times, on their way to a decision. Second, search engines will rotate ads in a way that earns them the most revenue.

With two ads in your ad set and your campaigns set to even rotation, you can use simple coin-toss probability to describe the likelihood that a searcher has seen any given one of your two ads at least once:

  • After 1 search, there’s a 50% chance they have seen it.
  • After 2 searches, the probability rises to 75%.
  • After 3 searches, it reaches 87.5%.
  • After 4 searches, it reaches 93.75%.

As this simple probability table suggests, your ads are very likely to be seen by your two target audiences even after just a few searches, giving you a very good shot at getting the click and the conversion.
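If you’d like to play with those numbers, here is a minimal sketch of the coin-toss arithmetic. It assumes each impression simply picks one of the ads in the set uniformly at random, which is a simplification of what even rotation actually does, and the function name is my own:

    def chance_seen(n_searches, ads_in_set=2):
        """Probability that one specific ad from the set has appeared at least once
        after n searches, assuming each impression picks an ad uniformly at random."""
        miss = (1 - 1 / ads_in_set) ** n_searches   # chance the ad was never shown
        return 1 - miss

    for n in range(1, 5):
        print(f"After {n} search(es): {chance_seen(n):.2%}")
    # After 1 search(es): 50.00%
    # After 2 search(es): 75.00%
    # After 3 search(es): 87.50%
    # After 4 search(es): 93.75%

The same arithmetic also shows how the odds stretch out with larger sets: with four ads in even rotation, for example, a given ad has only about a 68% chance of being seen within four searches.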

On the other hand, if you blindly follow A/B ad split testing to its logical conclusion and take ad A offline, you have a 0% chance of getting a click from your cost-conscious audience.

We don’t know exactly how the engines rotate ads, but I think it is reasonable to assume that a new searcher will probably see ad B first, because it has the higher CTR. The engine may even present the same ad on that user’s second query even if they did not click on it the first time. However, after showing a user ad B twice in a row without getting a click, I’d imagine the search engine would be much more inclined to show ad A to see if that will attract a click from this searcher. If that is indeed how it works, then the coin-flip probabilities shown above tilt even more towards both ads being seen during their search session.
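To make that speculation a little more concrete, here is a toy simulation of that kind of rotation heuristic. The 70/30 lean toward ad B and the switch-after-two-unclicked-impressions rule are pure assumptions on my part, since the engines don’t publish their logic, so treat this as an illustration of the reasoning rather than a model of any real ad server:

    import random

    def searches_until_both_seen(favor_b=0.7, switch_after=2, max_searches=20):
        """Simulate one searcher who never clicks. The engine leans toward ad B
        (the higher-CTR ad) but falls back to ad A after `switch_after` unclicked
        impressions of B. Returns how many searches it took to see both ads."""
        seen = set()
        unclicked_b = 0
        for n in range(1, max_searches + 1):
            if unclicked_b >= switch_after:
                ad = "A"                      # engine gives the other ad a try
            else:
                ad = "B" if random.random() < favor_b else "A"
            seen.add(ad)
            if ad == "B":
                unclicked_b += 1              # worst case: the searcher never clicks
            if len(seen) == 2:
                return n
        return max_searches

    trials = [searches_until_both_seen() for _ in range(100_000)]
    share = sum(t <= 3 for t in trials) / len(trials)
    print(f"Saw both ads within 3 searches: {share:.1%}")

Under those made-up rules, roughly 97% of simulated searchers see both ads within three searches, versus 87.5% under pure even rotation, which is the sense in which the odds tilt even further toward both ads being seen.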

The Van Wagner Ad Sets Optimization Model can be used in conjunction with existing A/B ad testing procedures to produce higher performing ad groups. To learn more about it, come to the Test That Ad! session at SMX Advanced in Seattle on Tuesday, June 8th.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.


About the author

Matt Van Wagner
Contributor
Matt Van Wagner is President and founder of Find Me Faster, a search engine marketing firm based in Nashua, NH. He is also a member of the programming team for SMX events. Matt is a seasoned sales and marketing professional specializing in paid and local search engine marketing strategies for small and medium-sized companies in the United States and Canada. An award-winning speaker whose presentations are usually as entertaining as they are informative, Matt has been a popular speaker at SMX and other search conferences. He is a member of SEMNE (Search Engine Marketing New England) and SEMPO, the Search Engine Marketing Professionals Organization, and is a contributing courseware developer for the SEMPO Institute. Matt occasionally writes on search engines and technology topics for iMedia, The NH Business Review and other publications. He also served as technical editor for Andrew Goodman's Winning Results with Google AdWords and Mona Elesseily's Yahoo! Search Marketing Handbook.
