It’s remarkable that in 2008 there are still many bidding systems in use by SEMs and in-house PPC managers dedicated to “finding the right position” for each keyword. These position crawling systems guarantee inefficiency and lost opportunity; to put it concisely: they’re playing the wrong game. Here’s why:
- The value of traffic doesn’t vary by position. Careful study on our part, confirmed by university statistics researchers, has shown that conversion rates (orders per click), average order sizes, and margin percentages do not vary by position on the page. In other words, the people who click on an ad at the top of the page behave the same way on their visit as those who click on the same ad in the middle or at the bottom of the page. The quantity of traffic is much greater at the top, but the quality is almost exactly the same. In fact, the quality in position 1 tends to be slightly lower than in position 2, and quality improves slightly as the ads move down the page; these effects are small enough to ignore for practical purposes.
- Value of traffic times the percentage of that value the advertiser can afford to spend on marketing = the bid. Maximizing the top line within some efficiency constraint (what we’re typically asked to do) is “simply” a matter of measuring the value of traffic on each ad and bidding according to the formula above. That places the advertiser’s ad as high on the page as they can afford to be, capturing the most traffic for each ad within their efficiency needs. If that bid lands the ad in position 1, great. Position 6? Okay. Position 15? Oh well. The position is what it is, determined by what your competitors choose to do at any given moment.
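The bullet above reduces to a one-line calculation. Here’s a minimal sketch; the function name and figures are illustrative, not from any real bidding platform:

```python
def value_based_bid(revenue_per_click, efficiency_target):
    """Bid = value of traffic x the share of revenue the advertiser
    can afford to spend on marketing. Position falls out wherever
    competitors' bids happen to put the ad."""
    return revenue_per_click * efficiency_target

# $3.00 of sales per click, willing to spend 33% of revenue on marketing:
bid = value_based_bid(3.00, 0.33)  # roughly $0.99, i.e. about a $1 bid
```

Note what the formula does *not* contain: any reference to a target position.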
How do position crawlers work?
Largely, trial and error through the following steps:
- Test ads in different positions on the page.
- Measure the efficiency (cost to sales ratio or whatever) at each position.
- Set crawler to maintain the highest position where the efficiency worked.
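The loop those steps describe might be sketched like this; this is a hypothetical illustration, and the efficiency figures and function name are made up:

```python
def crawl_to_position(positions_tested, max_cost_to_sales):
    """Trial-and-error: given the measured efficiency (cost-to-sales
    ratio) at each position tested, lock onto the highest position
    (lowest number) whose efficiency met the target -- treating that
    position, not the value of the traffic, as the thing to maintain."""
    acceptable = [pos for pos, ratio in positions_tested.items()
                  if ratio <= max_cost_to_sales]
    return min(acceptable)  # the highest position where efficiency "worked"

# Hypothetical test data: position -> observed cost-to-sales ratio
observed = {2: 0.55, 4: 0.40, 6: 0.32, 8: 0.25}
home = crawl_to_position(observed, 0.33)  # settles on position 6 and defends it
```

The bug isn’t in the code; it’s in the premise. The ratios in `observed` are snapshots of a landscape that has already changed by the time the crawler starts defending its “home.”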
At first glance, this may look like the same process. It isn’t. The critical mistake is the flawed assumption that the position produces the efficiency, when in fact the position is a coincidence: the value of the traffic and the cost of a particular position happened to line up at a given time. The position crawler decides that a position, say position 6, is “magic” for this term, when in fact the ever-changing bid landscape means that position 6 either costs too much or sits too far down the page much of the time.
Let’s look at some graphs.
Take an ad, say the exact-match version of “Foo Bar” on AdWords, where we know the sales dollars per click are basically $3. If the advertiser can afford to spend 33% of revenue on marketing, economic rationale says bid $1 on this ad. On average, that happens to put us in position 6.
The position crawler will get to this same place eventually, trying different positions until it learns that position 6 is its happy place. But that’s the problem: it learned the wrong thing. Position 6 is irrelevant.
At any given moment, the bidding landscape will not look like this average. Instead, it might look like this:
Perhaps several competitors got directives from their corner offices to “Be more aggressive”. We’d say, well, we can still only afford to spend $1 for traffic, so we’re going to get less traffic, but we’re not going to overspend.
The position crawler will say: “Gadzooks, I’ve fallen out of position!” and will start merrily climbing its way back to position 6, even though that means wasting money in the current environment.
On the other hand, maybe the actual landscape looks like this:
In this case, the wise system would say: traffic is worth $3, I can afford $1, and lookie there! We’re at the top of the page, reaping the HUGE benefits of the higher CTR and impression counts, and it’s cost-effective. Yippee!
The position crawler will instead say: “Egads, I’m in position 1! I need to crawl back to my happy place of position 6!” And so, even though the advertiser could capture the tremendous extra traffic of the top spot cost-effectively, the crawler will squander the opportunity by crawling back ‘home’.
The position crawlers are built with all kinds of cool features to “jam” competitors and “take advantage of holes in the landscape,” but all these complexities don’t change the fact that they’re playing the wrong game.
Bidding based on the value of traffic is conceptually simple but complex in practice. The value of traffic is difficult to measure for low-traffic “tail” terms, requiring smart statistics and tiered clustering mechanisms. Moreover, while the value of traffic doesn’t change with the position of the ad, it does change with the time of day, the day of week, the season, the match type, the syndication network, special promotions, and so on, so the calculations must factor in all of those effects to do this well. This is a difficult game, but it’s the right game, and the results speak for themselves.
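One common way to factor in effects like daypart and match type is to layer multiplicative adjustments onto the base bid. A sketch under assumed, invented multiplier values (none of these numbers come from the article):

```python
def adjusted_bid(base_value_per_click, efficiency_target, multipliers):
    """Start from the value-based bid, then scale it by contextual
    factors: daypart, day of week, season, match type, syndication
    network, promotions. The multipliers are placeholders here,
    not measured values."""
    bid = base_value_per_click * efficiency_target
    for factor in multipliers.values():
        bid *= factor
    return round(bid, 2)

# e.g. assume evenings convert 10% better and broad match 20% worse:
evening_broad = adjusted_bid(3.00, 0.33, {"daypart": 1.10, "match_type": 0.80})
```

The key point survives the extra machinery: every input is a property of the traffic, never of the position.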
The next time someone tries to engage you in a discussion about “finding the right position” for a particular term, remind them that the value of the traffic is measurable, but the cost of a position is unknown and unknowable, shifting with the whims of your competition. Bottom line: don’t let your competitors run your search program.
George Michie is Principal, Search Marketing for the Rimm-Kaufman Group, a direct marketing services and consulting firm founded in 2003. He regularly writes for the Paid Search column here on Search Engine Land.
Opinions expressed in the article are those of the guest author and not necessarily Search Engine Land.