• http://www.cpcsearch.com Terry Whalen

    In Wister’s experiment, does data integrity hold up, given that PPC managers or bid engines typically bid up keywords that are converting well, and bid down those that are not?

    In other words, since bids (and therefore ad position) are actively changed based on conversion rates – bids are often modified to hit metrics like CPA, ROAS, etc., which in turn are tied quite directly to conversion rate – wouldn’t this mean the data should be taken with a large grain of salt?

    I recently looked at conversion rate and ad position data for a client advertiser and found that conversion rates tended to be higher in higher ad positions – then I remembered that my own actions probably went a long way toward explaining the data, since I bid up keywords that were converting well and bid down those that were converting poorly.

    My dive into this data was a reaction to a shiny deck that the Google agency folks had sent along for this particular client, showing a positive correlation between higher ad position and higher conversion rates (as you might imagine, they included branded keywords in their data, which definitely helped in painting the picture they wanted me to see, heh).
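
    To make the feedback loop described above concrete, here is a toy simulation (purely illustrative – the bidding rule and all numbers are invented, not drawn from any account or from the studies discussed): position never affects conversions, yet because bids chase conversion rate, a cross-sectional snapshot still shows better positions converting better.

    ```python
    # Toy simulation: bids respond to conversion rate, so position ends up
    # correlated with conversion rate cross-sectionally even though position
    # never affects conversions. Purely illustrative; all values invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n_keywords = 1000

    # Each keyword has an intrinsic conversion rate; position does NOT feed
    # back into it anywhere below.
    conv_rate = rng.uniform(0.01, 0.10, n_keywords)

    # A bid manager bids up good converters and down poor ones; higher bids
    # earn a better (lower-numbered) position, bucketed into deciles 1..10.
    bid = 1.0 + 10.0 * (conv_rate - conv_rate.mean())
    rank = np.argsort(np.argsort(-bid))        # 0 = highest bid
    position = rank // (n_keywords // 10) + 1  # 1 = best position

    # Strongly negative: better positions show higher conversion rates,
    # despite zero causal effect of position in the simulation.
    print(np.corrcoef(position, conv_rate)[0, 1])
    ```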

  • http://www.findmefaster.com Matt Van Wagner

    Thank you for your question/comment, Terry.

    As I understand it, Wister and his team focused on position and examined data where positions changed day over day. Given that, whatever caused the position change (bids, competitor actions, changes in QS calculations, etc.) should not be a factor in their study.

    The only question Marin attempted to answer was what happened to conversion rate when positions changed for any reason, and in aggregate, they found no correlation between ad position and conversion rate.

    I am hoping Wister or someone from his team can weigh in here, too.

    It does strike me as a little surprising that your Google agency team would use a deck that seemingly contradicts the very recent conclusions of their own chief economist – unless they have very well-conditioned data on a very specific niche case.

  • http://www.adwordsanswers.com davidrothwell

    Excellent article, thanks. I’m pleased to see a third-party experiment testing Hal Varian’s counter-intuitive claim.

    On QS being static when moving keywords within an existing campaign, this is not my experience at all.

    I have a large client account (over 200,000 keywords) and am constantly finding Google suggests other ad groups where a keyword applies (Opportunities tab).

    When I include it using AdWords Editor, I frequently find that creating a duplicate keyword in a different ad group, with different keyword “neighbours” and a different keyword/ad text combination, can raise or lower both the QS and the minimum bid on the new keyword. It’s entirely arbitrary and unpredictable. And sometimes a higher QS comes with a *higher* minimum bid (you would expect the opposite), and vice-versa!

    So now I just include new keywords wherever Google suggests them, have loads of duplicates, and find that Google will enter ad auctions based on my target CPA (using conversion optimizer) wherever my keyword lives – so I don’t really care…

    I’ve learned to be very pragmatic about keywords, ad groups and ad texts and just to enter as many ad auctions as possible with CPA managing bid prices and ad delivery for me.

    And even with QS=1, and Google reporting “this ad rarely shown due to low QS”, I still see hundreds of keywords and ads converting at 100% across my inventories (216,000 keywords and 92,000 ads, respectively).

    Further info available on request…

  • http://www.rimmkaufman.com George Michie

    Matt, sounds like a great panel!

    Terry, we at RKG have studied and reported on this fact for years. We first presented our findings in 2006. In fact, I’m told Hal was prompted to publish his findings by an SEL post I wrote in 2008: http://searchengineland.com/why-position-bidding-wastes-money-14841

    Matt, there is a sharp divide in the Google community between the product team, who are interested exclusively in building a terrific product, and the sales team, who are exclusively interested in getting advertisers to spend more money. Some folks on the sales team are more shameless than others, and many are straight shooters, but it is important to bear in mind that the folks who represent themselves as “account reps” to us are referred to as sales staff at Google.

  • http://www.findmefaster.com Matt Van Wagner

    Thank you for your observations, David.

    In summarizing Addie’s experiments, I didn’t emphasize that she isolated relevancy scoring by moving complete sets of {keyword|ad|URL}.

    If I understand your situation, you accept Google’s suggestions on keywords to drop into ad groups, so these keywords get paired with different ads that have their own CTR and QS profiles. That being the case, one should certainly expect QS/min bid differences, and if the duplicated keywords are not as relevance-tuned to their new ads, it follows that their QS would likely be lower.

    I would challenge any assumption of arbitrariness in the QS scoring, however. It may be a mysterious black box, but it is a rules-based black box. I’d guess that match types and keyword-ad pairings are likely behind the inconsistencies you’ve mentioned.

    Your pragmatic approach and success with Google suggestions sounds really interesting, and sure to be of interest to other paid search managers. I would love to collaborate with you on an article for this column, if you’re interested. I’ll contact you offline…

  • http://www.findmefaster.com Matt Van Wagner

    George

    Thanks for weighing in, too. The panel went very well, and if you hadn’t had prior commitments, we would have loved to have you on it, too. Hope you can make the next one!

    I remember that 2006 post, and have always enjoyed your columns and your well-supported arguments.

  • http://www.marinsoftware.com wister

    Terry’s intuition is correct – if you look at northern keywords (those in high positions) vs. southern keywords, the northern keywords will have a higher conversion rate, but that higher conversion rate is a *cause* of their “northernness” rather than a result. That kind of comparison is a cross-sectional analysis.

    Our study is longitudinal — meaning we looked at how a given keyword performed when it was in different positions. Then we rolled up those effects (change in conv rate vs. change in position) across lots of keywords.
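
    For anyone who wants to try that roll-up on their own account data, here is a minimal sketch of the approach as described (the file and column names are hypothetical – this is not Marin’s actual pipeline):

    ```python
    # Longitudinal roll-up: correlate day-over-day changes in position with
    # day-over-day changes in conversion rate, within each keyword.
    # Column names (keyword, date, position, conv_rate) are hypothetical.
    import pandas as pd

    df = pd.read_csv("keyword_daily.csv", parse_dates=["date"])
    df = df.sort_values(["keyword", "date"])

    # Within-keyword deltas; stable cross-keyword differences drop out.
    df["d_position"] = df.groupby("keyword")["position"].diff()
    df["d_conv_rate"] = df.groupby("keyword")["conv_rate"].diff()

    # Keep only the days where position actually moved.
    moves = df.dropna(subset=["d_position", "d_conv_rate"])
    moves = moves[moves["d_position"] != 0]

    # A correlation near zero across all keywords would match the finding
    # described above.
    print(moves["d_conv_rate"].corr(moves["d_position"]))
    ```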

  • http://www.findmefaster.com Matt Van Wagner

    Great. Thank you for clarifying your findings, Wister.

  • jczhang

    This is interesting, especially since I haven’t read or seen many of these larger-volume experiments… which is no surprise, because the more data you collect, the more expensive these experiments become.

    The issue is that this article seems to generalize the conclusions of these experiments to the entire population of datasets. A claim about “Google” as a whole can’t really be proved right or wrong using data that wasn’t properly randomly sampled to represent that population. No information is given about how the data was sampled for Marin’s experiment, and the other two studies are based on data from only their own clients.

    Until agencies start sharing their metric information publicly, it’s going to be difficult to generalize any of these in-house experiments. Having said that, agencies that have the resources to do this kind of experimentation have quite an advantage.

  • http://www.cpcsearch.com Terry Whalen

    Wister, thanks for your reply. It makes total sense. I’d like to go back to my data and redo the analysis kw-by-kw, longitudinal style. At the end of the day, though, I’m still not sure the problem goes away entirely: even for a particular keyword, conversion rates may be going up and down based in large measure on things like availability, competition, etc., and we’re still making bid changes based on conversion rate. I think those other factors probably have greater causal effects than ad position does, so it remains a blurry area even when analyzed longitudinally – which does seem like the best way to analyze this. Although if you used a relatively short time period (and still had enough data for statistical significance), maybe those other factors become much less likely to be the cause of conversion rate changes, making the (neutral) correlation between ad position and conversion rate more valid(?)

    Having said that, I’ll try to pull some data kw-by-kw and then roll it up (will try to use exact-match keywords so we know that the search queries are matched up to the keyword).

    George, thank you, too, for weighing in here – now that I think about it, I know I’ve read one or two notes about this analysis from you guys over the years. For the record, I believe pretty strongly that for most scenarios conversion rates don’t change with ad position (that is, ad position does not have much, if any, effect on conversion rates).

    P.S. I generally really like the Google agency folks, but yes, their focus at the end of the day is to bring in higher spend and get folks using new products, inventory, etc.

  • http://www.cpcsearch.com Terry Whalen

    Quick note on Quality Scores – from what I’ve seen, there can be some arbitrariness, especially with keywords that have very little impression/click volume. Because of that, we tend to have more duplicate keywords in our accounts these days – and then we look at them side by side and act based on QS, conversions, and cost/conv. Sometimes we’re happy to pause the higher-QS keyword in favor of the lower-QS keyword that, for whatever reason, has more conversions at a lower cost/conv.

  • Andrew Goodman

    On one hand, the broad studies of conversion rates by ad position — from Google/Varian, RKG, and Marin, for example — are useful in that they aggregate a lot of data and smooth out bumps in the road so that we can assert general concepts. I like that there is an overall consensus that there are few differences in conversion rates across positions. This consensus replaces various flawed past studies.

    Basically, there are no mysterious, magical ad positions and no glaring patterns we need to be aware of. Then again, anecdotally I can say that I’ll notice conversion rates improving in very high positions for some accounts. That might depend on seasonality and other factors, of course.

    That being said, it’s important that people know how to study this data for their own accounts, especially if they have some high-volume terms to work with over a long period of time. Google Analytics offers a “keyword positions” report, so you don’t have to guess at goal conversion rate by position… you can run the data for your own account. It helps if you systematically vary ad positions somewhat (through bid changes) — otherwise you might not have enough distribution across ad positions to get useful data.
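
    For those who prefer working from an export rather than eyeballing the report, here is a minimal sketch that tabulates goal conversion rate by ad position (the file and column names are hypothetical – adjust them to match whatever your export actually contains):

    ```python
    # Tabulate goal conversion rate by ad position from an exported report.
    # Column names (position, visits, goal_completions) are hypothetical.
    import pandas as pd

    df = pd.read_csv("keyword_positions_export.csv")

    by_pos = df.groupby("position").agg(
        visits=("visits", "sum"),
        goals=("goal_completions", "sum"),
    )
    by_pos["conv_rate"] = by_pos["goals"] / by_pos["visits"]

    # Thin positions are noisy; flag them before reading much into the rates.
    by_pos["low_volume"] = by_pos["visits"] < 500
    print(by_pos.sort_index())
    ```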