• http://www.rimmkaufman.com George Michie

    Ben, I love ya, man, but the “holy grail” you’re chasing is a Dixie cup. “Finding the right position” is searching for a unicorn: something that never has existed and never will. Indeed, position targeting is guaranteed both to waste money and to miss opportunities.

    Perhaps for branding initiatives, where “involvement metrics” are the goal rather than demonstrable ROI, this might make sense, but if you’re looking for ROI, bid based on the anticipated value of the traffic for each ad, not on its position on the page. The vagaries of the bidding landscape make position bidders hostage to what their competitors are bidding.
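    George’s “bid to the value of the traffic” approach can be sketched in a few lines. This is a minimal, hypothetical illustration, not Rimm-Kaufman’s actual system; the function name, conversion rate, order value, and ROI target are all invented:

```python
def value_based_bid(conv_rate, revenue_per_order, target_roi):
    """Max CPC that still hits a target return on ad spend.

    Expected revenue per click = conv_rate * revenue_per_order.
    We need revenue / cost >= target_roi, so cost <= revenue / target_roi.
    """
    expected_value_per_click = conv_rate * revenue_per_order
    return expected_value_per_click / target_roi

# A keyword converting at 2% with a $150 average order, at a 4:1 ROI target:
bid = value_based_bid(conv_rate=0.02, revenue_per_order=150.0, target_roi=4.0)
print(f"max CPC: ${bid:.2f}")  # 0.02 * 150 / 4 = $0.75
```

    Note that position never appears as an input: the bid follows the traffic’s value, and position falls out of the auction.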

  • http://www.periscopix.com allydent

    I’m not sure if that assertion would stand up to scrutiny. It’s counter-intuitive to say that position has no effect on conversion rates, and common sense argues against it.

    So to say that it’s true you’d need some pretty good data to back it up. Unfortunately, I don’t think the research you linked to has that.

    The article you linked to stated about the data set used:

    “We obtained a data set comprised of paid search impressions, clicks, and orders for an online specialty retailer selling automotive parts and accessories.”

    “A data set”, and “an online retailer”. One. And a speciality one at that, targeting a specific demographic. Finding that conversion rates don’t vary by position for one advertiser hasn’t taught us anything, except that the bidding strategy for that advertiser isn’t dependent on position beyond maximising traffic.

    But you can’t extend that to everybody. Google did similar research recently, where they used a large range of advertisers across a large range of industry sectors. So far so good. Then they aggregated that data to find trends! Aggregated data from across the entire PPC marketplace? Of course they found no trends, they averaged them all out!

    The reality is more complicated than that though. Different types of web users behave differently, and the pressures of that behaviour on PPC do vary across markets.

    PPC advertisers target specific market demographics. It’s how we ensure that we only look for the users who are easiest to convert, and make the most use of our budget. But that means we can expect a large portion of people searching on our keywords to behave in similar ways. So we do see trends that vary from market to market.

    Some PPC campaigns will see conversion rates the same across ad positions as the advertiser linked to in your article. Others will see conversion rates improve in the lower positions (markets where the more serious customers are fussy and the advertiser is very competitively priced), and some will see conversion rates improve in the top position (where the reputation or feel of a site is more important to a customer’s conversion, and they feel that a company higher up Google is more trustworthy).

    Frankly there are many reasons why users behave in different ways, but to say that there is not more pressure one way or another in specific markets is ignoring the entire principle of targeting a specific demographic, and ignoring the reality of the data; either by choosing datasets so large they even out all differences or by choosing datasets so small they can prove whatever you want.

    In all honesty, I would be careful about saying that research has “proved that the conversion rates […] do not vary by position on the page.” Proved is a strong word, and I don’t think this study meets the requirements needed to use it.

    If the proposition (conversion rates don’t vary by position) is true for an advertiser, then you’re absolutely right that maximising the traffic by balancing click-through rates and impressions against cost per click given your budget limits is dead on (assuming your keyword selection is good). But it’s still valid to claim that certain positions do better for certain advertisers, and doing research like the post above can only help the advertiser to know that.
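    If conversion rates really are position-invariant for a given advertiser, the balancing act above reduces to buying as many clicks as the budget allows. A toy sketch, with invented per-position CTR, CPC, and impression figures:

```python
def best_position(positions, daily_budget):
    """Pick the position buying the most clicks; positions is a list of
    (label, ctr, cpc, daily_impressions) tuples."""
    best = None
    for label, ctr, cpc, impressions in positions:
        clicks = ctr * impressions
        if clicks * cpc > daily_budget:   # budget caps the clicks we can buy
            clicks = daily_budget / cpc
        if best is None or clicks > best[1]:
            best = (label, clicks)
    return best

positions = [
    ("pos 1", 0.060, 1.50, 10_000),  # high CTR, but expensive
    ("pos 4", 0.025, 0.60, 10_000),  # fewer clicks, far cheaper
    ("pos 8", 0.010, 0.30, 10_000),
]
print(best_position(positions, daily_budget=200.0))  # pos 4 wins here
```

    With these made-up numbers, position 1 would burn the budget before lunch, so the cheaper position 4 delivers the most clicks per day.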

  • http://www.periscopix.co.uk Ben Gott

    George, thanks for the comment. Always nice to see someone is paying attention!

    I’m afraid I’m going to have to respectfully disagree with your point of view however. You see, I’ve been to a land where pay per click campaigns run free to frolic with unicorns and leprechauns and fairies. There wasn’t a plastic cup in sight!

    Each and every one of our campaigns has its own aims and traits and of course targets a unique demographic. I’ve seen time and time again that ad position can play a part in this, both in terms of conversion volume and relative cost of acquiring those conversions. By that measure, targeting a particular ad position is absolutely valid.

    The post wasn’t called ‘Aim for Position 3 to Maximise ROI’ precisely because there isn’t a universal truth to be had in this land. Not in any area. Everything we do is tailored to the respective market and the business. And ‘test’ is always the word of the day. Wherever possible we should all try to stay away from broad assertions about user behaviour and the assumption that what works on one campaign/website will work on another.

  • blam

    I think if you were to switch the order of the dimensions, the report would be more effective (especially when drilling down): going first into Campaign, then Ad group, then Keyword, and looking at the ad positions of a specific keyword. As you currently have it, you are looking at a position’s conversion rate over all your keywords/campaigns, so I’d agree with George.
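    The reordering blam suggests amounts to a keyword-first pivot of the same data. A minimal sketch in plain Python; the keywords and counts are invented for illustration:

```python
from collections import defaultdict

rows = [  # (keyword, position, clicks, conversions) -- made-up sample data
    ("blue widgets", 1, 400, 8),
    ("blue widgets", 4, 250, 9),
    ("red widgets",  1, 300, 12),
    ("red widgets",  4, 100, 4),
]

# Pivot keyword-first, then position, rather than position over everything.
pivot = defaultdict(dict)
for keyword, position, clicks, conversions in rows:
    pivot[keyword][position] = conversions / clicks  # conversion rate

for keyword, by_position in sorted(pivot.items()):
    print(keyword, {pos: f"{rate:.1%}" for pos, rate in sorted(by_position.items())})
```

    In this made-up sample ‘blue widgets’ converts better lower down while ‘red widgets’ is position-invariant; that is exactly the kind of per-keyword difference an account-wide view averages away.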

    Your last paragraph is one that should be taken note of – the concept of thoroughness … this report serves as a good starting point for analysis.

  • http://www.rimmkaufman.com George Michie


    That article links to one study we did long ago. We’ve studied this over and over with hundreds of different sites, and the answer keeps coming up the same. We haven’t published the more comprehensive studies.

    It’s an incredibly difficult analysis, requiring mountains of data to disentangle the impact of your own bidding behavior. Obviously the ones at the top of the page convert better: you’re bidding them up because they work. Obviously the ones at the bottom of the page convert less well…because you bid them down.

    To do the study correctly you need to account for that bias. It’s hard to do, and we’ve taken many different statistical approaches to getting at it. We keep getting the same answer, suggesting that post-click behavior is FAR more determined by the user’s search than by where the ad was on the page when they clicked.
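    The bias George describes can be shown with a toy simulation: every keyword below has the same true conversion rate, but keywords whose early sample happened to convert well get bid to the top. Observed conversion rate then correlates with position even though position does nothing. All numbers are invented:

```python
import random

random.seed(42)
TRUE_RATE = 0.03  # identical for every keyword, by construction

keywords = []
for _ in range(200):
    # A small early sample (100 clicks); pure luck decides the "winners".
    early_convs = sum(random.random() < TRUE_RATE for _ in range(100))
    position = 1 if early_convs >= 4 else 8  # bid up what looked good
    keywords.append((position, early_convs / 100))

top = [rate for pos, rate in keywords if pos == 1]
bottom = [rate for pos, rate in keywords if pos == 8]
print(f"observed conversion rate, position 1: {sum(top) / len(top):.1%}")
print(f"observed conversion rate, position 8: {sum(bottom) / len(bottom):.1%}")
```

    Position 1 will always look better here, purely because it was selected on lucky samples; that regression-to-the-mean effect is what a correct study has to strip out.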

  • Stupidscript

    We’ve been studying this, as well, and I’d like to toss out a couple of our observations to see if anything is recognizable …

    1) Brand recognition plays the biggest role in determining the conversion rate of PPC clicks, regardless of ad position

    If the brand is well-recognized, there is a LOT more flexibility in which position drives clicks. A well-known brand could be well down the list and still be effective, whereas a lesser-known brand needs to be positioned higher in the results to do as well. Many searchers will visit organic listings for one company several times (“research”) before they finally click a PPC ad and convert. When we see that behavior, it almost doesn’t matter in which position the PPC ad is … the organics and the “research” visits have already done the heavy lifting. Which leads us to …

    2) A well-branded PPC ad in combination with good organic positioning can do a lot more work than a well-positioned PPC ad, alone

    Even for lesser-known brands, having both an organic and a paid presence in the results dramatically increases click through and conversions, and positioning is less of a factor when organic listings are also visible. We attribute this to a “trust” variable, which is enhanced by multiple instances of the brand on the page. In that case (both organic and paid appearing), the “trust” factor increases to the point where the PPC ad can be almost anywhere and still get the click.

    3) Like banner ads before them, the top positions are often “invisible”, or viewed as too aggressive, and so less successful than positions 4 through 10

    We believe that when a search is performed, and the results listed, the top PPC positions are almost always dismissed, as the searcher moves down to the “meat” of the results. This seems to hold right up until the decision to purchase has been made, at which point the searcher will not be looking for the search results so much as they will be looking to connect with the “winner” of their research. At that point, the brand is the important factor, and it simply needs to be available, in any position.

    The relationship between the life of the search process and the ad position is almost non-existent. The search process always takes the form “research, decide, purchase”, and the position of the “purchase” link seems to be irrelevant, as long as the “research” and “decide” portions of the process have been satisfied.

  • http://www.periscopix.com allydent


    The bit I have a problem with is “Obviously the ones at the top of the page work better, you’re bidding them up because they work.”

    Unfortunately, this isn’t always true. It may be true in the main, but it simply isn’t always true. There are plenty of occasions when actually the same ad can perform better in lower positions.

    I agree that there will always be a bias as a result of your bidding strategies, but only when you’re looking at existing data and trying to account for the relationships after the fact. Sometimes you can actually experiment.

    So let’s say I have one ad on one keyword and I know how they are performing in position 4 on the right hand side, and I bid to move into the banner at the top. Same ad, same keyword, often completely different behaviour. This is not subject to the same biases you’re describing when you perform this kind of test.

    Now obviously that example is one ad in one market, but the point I’m getting at is that the result will be different every time you run that kind of test. You can do as much data mining as you like, and work as hard as possible to account for bias, but nothing will give you results as good as a controlled test. And no, we can’t control for everything, but we can perform enough tests to let the large numbers average that out.
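    For what it’s worth, a standard way to score the kind of controlled test described above is a two-proportion z-test on the conversion counts at each position. A stdlib-only sketch; the click and conversion numbers are hypothetical:

```python
import math

def two_proportion_z(conv_a, clicks_a, conv_b, clicks_b):
    """z statistic for H0: both positions share one conversion rate."""
    p_a, p_b = conv_a / clicks_a, conv_b / clicks_b
    pooled = (conv_a + conv_b) / (clicks_a + clicks_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / clicks_a + 1 / clicks_b))
    return (p_a - p_b) / se

# Right-hand-side position 4 vs the top banner, same ad and keyword:
z = two_proportion_z(conv_a=90, clicks_a=3000, conv_b=60, clicks_b=3000)
print(f"z = {z:.2f}")  # |z| > 1.96 is significant at the 5% level (two-sided)
```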

    We have to always stick to the principle of hypothesis, test, repeat with this kind of data – because we can. This isn’t macroeconomics where we have to use existing data, we can control it.

    So saying “position has no effect on conversion rate” is just as bad as saying “position *always* has the same effect on conversion rate”. Neither is true. Your campaigns may see movement one way, they may see movement the other way, or they may see none at all. But until you look at the specific data for a specific campaign, you won’t know. Trusting research that was compiled without testing, and that claims to account for your bidding strategies, misses the point: it assumes beforehand that you always bid high-converting ad/keyword combinations into higher positions. Any campaign manager worth their salt knows that’s not always the best strategy, because sometimes moving an ad up the page makes it perform less well.

  • http://www.rimmkaufman.com George Michie

    Ally, I will say that our data is almost entirely retail, and I could believe that retail shoppers behave differently than in other verticals. I could particularly believe that arbitrage sites might find positional dependence in conversion rates.

    When I say “higher” and “lower” I don’t mean that’s our strategy, just how things tend to work out. We bid to the value of the traffic. Depending on the competitive landscape, that could mean higher, lower, or page 6.

    When you say performance, I worry you’re adding sunk costs into the calculation. The metric in retail is sales/click or margin/click. We find it to be position invariant. More traffic at the top, but higher CPC and lower ROI.

    If our method were wrong, I don’t think we’d be able to hit efficiency targets as well as we do, even when those targets change; given high volumes of data, we can hit a changed target the next day. If position played a statistically meaningful role, we wouldn’t see that.

    And I have a hard time believing your one ad test methodology shows statistically meaningful results.
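    George’s doubt can be made concrete with a rough sample-size estimate (normal approximation, 5% significance, 80% power; the baseline conversion rates are hypothetical):

```python
import math

def clicks_needed(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate clicks needed PER POSITION to detect p1 vs p2."""
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha + z_beta) ** 2 * 2 * p_bar * (1 - p_bar)
    return math.ceil(numerator / (p1 - p2) ** 2)

# Telling a 2.0% conversion rate apart from 2.5% takes serious traffic:
print(clicks_needed(0.020, 0.025), "clicks per position")
```

    On that arithmetic a single ad needs over ten thousand clicks at each position before a half-point difference is trustworthy, which is George’s objection in numbers.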

  • http://www.periscopix.co.uk Ben Gott

    Blam, good observation – that would be a useful change to the report. As it stands, by pivoting the data it is possible to do what you describe (I think) without changing the report. In the example I chose ‘campaign’ as the dimension to pivot on; you could also choose ad group or keyword.

    Stupidscript, it’s interesting that you have seen an improvement in performance when both paid and natural listings are visible. I’ve heard this claimed before, but often from parties with a vested interest; perhaps we should research this in a bit more depth.

    I’m not sure about your finding that the top positions have a low impact on the decision making process. It certainly isn’t backed up by what we’ve seen in all cases.

    Anecdotally, we often talk to potential clients who claim not to click on ads because they are untrustworthy, being paid for (and therefore manipulated). The same people then get all red-faced when you point out that the banner positions on the Google SERP are sponsored links! They’ve been clicking on them the whole time, thinking they were the ‘untainted’ organic listings.

  • blam

    Hi Ben! I just tried it out, and it works great when you pivot with keyword as the dimension (with the set-up you had in your post). As I was going through the report the first time, I guess I was in the mindset of the pre-pivot table days haha. Thanks again for a great report!

  • http://www.periscopix.com allydent


    My one-ad test was an example, intended (albeit inarticulately) to show the dramatically greater value of a controlled test compared with data analysis alone.

    I agree that margin per click is crucial for retail clients (in fact, any client although retail is the easiest to measure), but I can’t agree that it’s always position invariant.

    For instance: client A sells cut-to-size plastic sheeting. They see conversion rates improve when the same ad on the same keyword is shown lower down page one. Client B wants sign-ups to a trading platform. They see conversion rates improve when the same ad on the same keyword is shown up at the top, in the banner.

    We can’t know for sure exactly why. That’s impossible. We have several theories about user behaviour and the differences between the two markets (e.g. market A has a lot of “curiosity” searchers who are not potential customers but are likely to click the top links; market B has a high reliance on credibility, and links in the big yellow bar above the SERPs carry greater credibility in searchers’ minds. Note these are just example theories).

    But we see this same behaviour time and time again. It’s rarely true across an entire campaign – we might see an ad group that behaves one way and an ad group that behaves another. But only by testing that ad can we actually know.

    I don’t think that your method being incorrect would mean you can’t hit efficiency targets. On average I think you’re right. But that’s the entire flaw with your approach. It’s the average, not the truth for every campaign in every market. Regression analysis (with various methods of removing statistical bias) won’t show you that unless you understand the reasons behind every residual. Especially since that bias you’re trying to remove isn’t universally true.

    The result is that if you had an approach that followed your method, but was customised to the needs of that campaign you’d probably see other results in some cases. Which we do. And it works.

  • http://www.rimmkaufman.com George Michie


    I’m sure your clients are well-served. I’m also sure that ours are. This is a complex game and anyone who claims to have solved it is either a liar or a fool. Our system supports position targeting because clients often demand it. Every time they force us down that path we find their program suffers, and that when we follow our methodology they generate more sales at the same efficiency. The clients who’ve come on board die-hard believers in position targeting end up convinced of its folly in the end.

    Perhaps you’ve found some ninja magic that has eluded us. If so, congrats. Perhaps you’re chasing statistical noise up and down the page. As we continue to learn maybe we’ll find the truth lies somewhere in between.

    Best of luck

  • http://www.periscopix.com allydent


    I agree. There is no one methodology that works best for everyone. I think both ways have their place.

    But I don’t think the solution lies in between. I think the solution lies in using both. And other methods that we haven’t talked about here. And probably methods that nobody is using yet.

    As it all filters through we’ll see that the best, most successful managers are those who know when to use each, and how much weighting to give them.

    Exciting times are ahead.