PPC Mad Scientists Prove Google Right… And Wrong

At the SMX Advanced 2010 conference in Seattle, we invited four leading paid search experts to share some of their recent experiments in a new session entitled The Mad Scientists of Paid Search. The panelists were encouraged to add a bit of academic rigor to their presentations by including their hypotheses, experimental design, data, results and conclusions. The audience was invited to participate as peer reviewers, challenging the assumptions, data integrity and conclusions of the scientists.

For those of you who were not able to make this year’s SMX Advanced, which sold out weeks in advance, I’d like to highlight the experiments presented by three of our PPC mad scientists: Wister Walcott, Addie Conner and Lulu Gephardt. Our fourth panelist, Dr. Siddarth Shah of Efficient Frontier, did not present an experiment, but instead led us through an engaging, professorial dissertation on portfolio management.

Google proven right

One of the great ongoing debates within the PPC community is whether conversion rate varies by ad position. In most paid search auctions, the higher your position (further “north”), the more clicks you will get. But what about the conversion rate?

Marketing lore holds that visitors with purchase intent will read all the ads more closely, but that “rubberneckers” will be more likely to click on just the ads at the top. If true, this would result in a better conversion rate as ads move down the auction. The thoughtful marketer could leverage this information to their advantage when maximizing profit.

Wister Walcott, co-founder and VP of Products at Marin Software, working with his team of in-house PhDs, designed a study of the relationship between ad position and conversion rate. In an experiment that incorporated several hundred thousand data samples, Walcott isolated high-volume keywords that changed position from one day to the next, adjusted for day-of-week differences, and evaluated the change in conversion rates. As a control, they first confirmed that conversion rates were stable when position stayed the same (they were), and then looked for a change in conversion rate when ad position changed.
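
For readers who want to try a scaled-down version on their own accounts, the core of the analysis is simple enough to sketch. The Python below is my own illustration, not Marin’s actual model; it assumes a hypothetical daily keyword export with columns date, keyword, position, clicks and conversions, pairs each day with the same weekday one week earlier to control for day-of-week effects, and then checks whether conversion rate moved when position did.

```python
import pandas as pd

# Hypothetical daily keyword export (not Marin's data format):
# one row per keyword per day.
df = pd.read_csv("keyword_daily.csv", parse_dates=["date"])
df = df[df["clicks"] > 0].copy()
df["conv_rate"] = df["conversions"] / df["clicks"]

# Pair each observation with the same keyword one week earlier,
# so every comparison is same-weekday (day-of-week control).
prior = df.copy()
prior["date"] = prior["date"] + pd.Timedelta(weeks=1)
pairs = df.merge(prior, on=["date", "keyword"], suffixes=("", "_prev"))

pairs["pos_delta"] = pairs["position"] - pairs["position_prev"]
pairs["cr_delta"] = pairs["conv_rate"] - pairs["conv_rate_prev"]

# Control group: when position did not move, conversion rate
# should be stable; otherwise the data is too noisy to trust.
control = pairs[pairs["pos_delta"] == 0]
moved = pairs[pairs["pos_delta"] != 0]

print("Mean CR change, position unchanged:", control["cr_delta"].mean())
print("Correlation of position change vs CR change:",
      moved["pos_delta"].corr(moved["cr_delta"]))
```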

Their findings? There was no significant correlation between ad position and conversion rate in eleven of twelve data sets tested. In just one data set, a lead generation site, the PPC scientists at Marin observed conversion rate declining when position went south. Perhaps more cautious clickers were less likely to fill out a lead form?

The Marin results supported the findings published by Google Chief Economist Hal Varian and his team last year (see his Inside AdWords post for more details). Wister cautioned that their study covered high-volume terms only, and since they found one exception, there are most likely others. If you’d like to conduct a similar test for yourself, Wister will send you the model they used in their study—ping him at info@marinsoftware.com.

Google proven wrong

Addie Conner, of Avenue 100 Media Solutions, developed some experiments to help understand the quality score implications of consolidating and reorganizing accounts. The experiment flowed out of her need to reorganize over 100 different AdWords accounts for a single URL into a new account or a new set of accounts.

Addie wanted to know whether it is always true, as Google suggests, that AdWords maintains the history of domains, keywords and ads. If this were true, Addie hypothesized, then there should be little or no impact on Quality Score, clicks, impressions and CTR when reorganizing within the same account, or from multiple accounts into a different existing account—or even into an entirely new account.

Addie’s experiments were designed to answer these questions related to multiple account reorganization:

  • What happens when you move keywords and ads within the same account?
  • What happens when you move keywords and ads to a different account (has history)?
  • What happens when you move keywords and ads to a new account (no history)?

In the first experiment, her research showed virtually no difference in Quality Score when moving keywords and ads within the same account. Her other two experiments, however, demonstrated that Quality Score suffered significantly when ads and keywords were moved to a new or existing account.

Addie’s conclusion was that while it may be true that Google will maintain Quality Score history when moving from one single account to another, it is definitely not true when consolidating and reorganizing multiple accounts into new or existing ones.
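
If you’re weighing a similar consolidation, you can estimate the risk on your own account the same way: move a small test batch, then run a paired before-and-after comparison of keyword-level Quality Scores. The sketch below is illustrative only, not Addie’s actual methodology; the file names and columns are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical exports of the same keyword set: Quality Scores
# captured before the move, and again after the destination
# account has run long enough to accrue some history.
before = pd.read_csv("qs_before_move.csv")  # columns: keyword, quality_score
after = pd.read_csv("qs_after_move.csv")    # columns: keyword, quality_score

merged = before.merge(after, on="keyword", suffixes=("_before", "_after"))
merged["qs_delta"] = (merged["quality_score_after"]
                      - merged["quality_score_before"])

# Paired t-test: is the average Quality Score change
# distinguishable from zero across the test batch?
t_stat, p_value = stats.ttest_rel(merged["quality_score_after"],
                                  merged["quality_score_before"])
print(f"Mean QS change: {merged['qs_delta'].mean():+.2f} (p = {p_value:.3f})")
```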

Google display network experiment

Lulu Gephardt, paid search manager for REI, designed a very practical experiment to see whether REI could wring more return on ad spend out of the Google display network (formerly known as the content network) without cannibalizing their well-established affiliate network. Her test was simple and worth repeating for other retailers.

Instead of avoiding affiliate sites with their ads, REI directly targeted their affiliate partner sites that were running AdSense, using CPC-based text and banner ads. The results were pleasantly surprising. Instead of cannibalizing their own network, REI increased the effectiveness of both their own display advertising and that of their affiliates. Lulu’s presentation, The One-Two Punch, was as entertaining as it was informative, and worth reviewing.

Calling all PPC mad scientists

I suppose that to some degree, all of us in paid search are mad scientists. While many of our experiments are practical, designed primarily to improve campaign performance directly, sometimes we create tests because we are insatiably curious to find out all that we can about how paid search really works.

If you have some interesting experiments you’ve tried that you’d like to share with the world, let me know, and I’d be glad to share the limelight with you and your mad science in one of my upcoming columns. Maybe you’d even like to make a speaking pitch to present your research at one of the upcoming conferences, such as SMX East in October. If so, I would love to hear from you—drop me an email or leave a comment below.

Opinions expressed in the article are those of the guest author and not necessarily Search Engine Land.



About The Author: Matt Van Wagner is President and founder of Find Me Faster, a search engine marketing firm based in Nashua, NH. He is a member of SEMNE (Search Engine Marketing New England) and of SEMPO, the Search Engine Marketing Professionals Organization, for which he is a contributing courseware developer for the SEMPO Institute. Matt writes occasionally on internet, search engine and technology topics for iMedia, The NH Business Review and other publications.

Comments

  • http://www.cpcsearch.com Terry Whalen

    In Wister’s experiment, does data integrity hold up, given that PPC managers or bid engines typically bid up keywords that are converting well, and bid down those that are not?

    In other words, since bids (and therefore ad position) are actively changed based on conversion rates (that is, bids are often modified based on metrics like CPA, ROAS, etc., and these in turn are tied quite directly to conversion rate), wouldn’t this mean that the data should be taken with a large grain of salt?

    I recently looked at conversion rates and ad position data for a client advertiser and found that conversion rates tended to go up as ad position increased – then I remembered that my own actions probably went a long way toward explaining the data, as I bid things up that are converting well, and bid things down that are converting poorly.

My dive into this data was a reaction to a shiny deck that the Google agency folks had sent along for this particular client, showing a positive correlation between higher ad position and higher conversion rates (as you might imagine, they included branded keywords in their data, which definitely helped in painting the picture they wanted me to see, heh).

  • http://www.findmefaster.com Matt Van Wagner

Thank you for your question/comment, Terry.

As I understand it, Wister and his team focused in on position, and examined data where positions changed day over day. Given that, whatever caused the position change (bids, competitor actions, changes in QS calculations, etc.) should not be a factor in their study.

    The only question Marin attempted to answer was what happened to conversion rate when positions changed for any reason, and in aggregate, they found no correlation between ad position and conversion rate.

I am hoping Wister or someone from his team can weigh in here, too.

    It does strike me as a little surprising that your Google Agency team would use a deck that seemingly contradicts the very recent conclusions of their own chief economist – unless they have very well-conditioned data on a very specific niche case.

  • http://www.adwordsanswers.com davidrothwell

    Excellent article, thanks. I’m pleased to see a 3rd party experiment on Hal Varian’s counter-intuitive statement.

    On QS being static when moving keywords within an existing campaign, this is not my experience at all.

    I have a large client account (over 200,000 keywords) and am constantly finding Google suggests other ad groups where a keyword applies (Opportunities tab).

When I include it using AdWords Editor, I frequently find that creating a duplicate keyword in a different ad group, with different keyword “neighbours” and a different keyword/ad text combination, can raise or lower both the QS and the minimum bid on the new keyword. It’s entirely arbitrary and unpredictable. And sometimes a higher QS gives a *higher* bid (you would expect the opposite) and vice-versa!

    So now I just include new keywords wherever Google suggests them, have loads of duplicates, and find that Google will enter ad auctions based on my target CPA (using conversion optimizer) wherever my keyword lives – so I don’t really care…

    I’ve learned to be very pragmatic about keywords, ad groups and ad texts and just to enter as many ad auctions as possible with CPA managing bid prices and ad delivery for me.

And even with QS=1, and Google reporting “this ad rarely shown due to low QS”, I still see hundreds of keywords and ads converting at 100% in my inventories (216,000 keywords and 92,000 ads, respectively).

    Further info available on request…

  • http://www.rimmkaufman.com George Michie

    Matt, sounds like a great panel!

    Terry, we at RKG have studied and reported on this fact for years. We first presented our findings in 2006. In fact, I’m told Hal was prompted to publish his findings by an SEL post I wrote in 2008: http://searchengineland.com/why-position-bidding-wastes-money-14841

    Matt, there is a sharp divide in the Google community between the product team, who are interested exclusively in building a terrific product, and the sales team who are exclusively interested in getting advertisers to spend more money. Some folks on the sales team are more shameless than others, and many are straight shooters, but it is important to bear in mind that the folks who represent themselves as “account reps” to us are referred to as sales staff at Google.

  • http://www.findmefaster.com Matt Van Wagner

    Thank you for your observations, David.

    In summarizing Addie’s experiments, I didn’t emphasize that she isolated relevancy scoring by moving complete sets of {keyword|ad|URL}.

    If I understand your situation, you accept Google’s suggestions on keywords to drop into ad groups, and so these keywords get paired with different ads that have their own CTR and QS profiles. That being the case, one should certainly expect QS/min bid differences, and assuming the keywords are not as relevance-tuned to the ads, then it follows that the QS would likely be lower for the duplicated keyword.

I would challenge any assumption of arbitrariness in the QS scoring, however. It may be a mysterious black box, but it is a rules-based black box. I’d guess that match types and keyword-ad pairings are likely behind the inconsistencies you’ve mentioned.

    Your pragmatic approach and success with Google suggestions sounds really interesting, and sure to be of interest to other paid search managers. I would love to collaborate with you on an article for this column, if you’re interested. I’ll contact you offline…

  • http://www.findmefaster.com Matt Van Wagner

George,

    Thanks for weighing in, too. The panel went very well, and if you hadn’t had prior commitments, we would have loved to have you on it, too. Hope you can make the next one!

I remember that 2006 post, and have always enjoyed your columns and your well-supported arguments.

  • http://www.marinsoftware.com wister

    Terry’s intuition is correct – if you look at northern keywords vs southern keywords, the northern keywords will have a higher conversion rate – a *cause* of their “northernness” rather than a result. Also known as a cross-sectional analysis.

    Our study is longitudinal — meaning we looked at how a given keyword performed when it was in different positions. Then we rolled up those effects (change in conv rate vs. change in position) across lots of keywords.

  • http://www.findmefaster.com Matt Van Wagner

    Great. Thank you for clarifying your findings, Wister.

  • jczhang

    This is interesting, especially since I haven’t read or seen a lot of these larger volume experiments… which is no surprise because the more data you collect the more expensive these experiments become.

The issue is that this article seems to generalize the conclusions of these experiments to the entire population of datasets. “Google” being the entire population here, it can’t really be proved right or wrong using data that is not properly randomly sampled to represent that population. No information is given about how the data was sampled for the Marin experiment, and the other two studies are based on data from only their own clients.

    Until agencies start sharing their metric information publicly, it’s going to be difficult to generalize any of these in-house experiments. Having said that, agencies that have the resources to do this kind of experimentation have quite an advantage.

  • http://www.cpcsearch.com Terry Whalen

Wister, thanks for your reply. It makes total sense. I’d like to go back to my data and do it kw-by-kw, longitudinal style. At the end of the day, though, I’m still not sure that the problem will go away, since even for particular keywords, conversion rates may be going up and down based in large measure on things like availability, competition, etc., and we’re still making bid changes based on conversion rate. I think other factors probably have greater causal effects than ad position has. I just think it’s a blurry area. Even doing it longitudinally – which seems like the best way to analyze this – I’m just not convinced that you don’t still have the same problem. Although if you tried to use a relatively short time period (and you still had enough statistically significant data), maybe these other factors become much less likely to be the cause of conversion rate changes, making the (neutral) correlation between ad position and conversion rate more valid(?)

    Having said that, I’ll try to pull some data kw-by-kw and then roll it up (will try to use exact-match keywords so we know that the search queries are matched up to the keyword).

    George, thank you, too, for weighing in here – now that I think about it, I know I’ve read one or two notes about this analysis from you guys through the years. For the record, I believe pretty strongly that for most scenarios conversion rates don’t change with ad position (that is, ad position does not have much or any effect on conversion rates).

p.s. I generally really like the Google agency folks, but yes, their focus at the end of the day is to bring in higher spends and get folks using new products, inventory, etc.

  • http://www.cpcsearch.com Terry Whalen

    Quick note on quality scores – from what I’ve seen, there can be some arbitrariness especially with keywords that have very little impression / click volumes. Because of that, we tend to have more duplicate keywords in our accounts these days – and then we look at them side-by-side and act based on QS, conversions and cost/conv. Sometimes we’re happy to pause the higher QS keyword in favor of the lower QS keyword that for whatever reason has more conversions at a lower cost/conv.

  • Andrew Goodman

    On one hand, the broad studies of ad position conversion rates — from Google/Varian, RKG, and Marin, for example — are useful in that they aggregate a lot of data and they smooth out bumps in the road so that we can assert general concepts. I like that there is an overall consensus that there are few differences in conversion rates across the board. This consensus replaces various flawed past studies.

    Basically, there are no mysterious, magical ad positions, and no glaring patterns we need to be aware of. Then again, anecdotally I can say that I’ll notice conversion rates improving in very high positions, for some accounts. That might depend on seasonality and other factors, of course.

    That being said, it’s important that people are aware of how to study this data easily for their own account, especially if they have some high volume terms to work with over a long period of time. Google Analytics offers a “keyword positions” report, so you don’t have to guess at goal conversion rate by position… you can run the data for your own account. It helps if you systematically vary ad positions somewhat (through bid changes) — otherwise you might not have enough distribution across various ad positions to get useful data.

 
