All Clicks Aren’t Created Equal: Q&A With Danny Sullivan
Earlier this month, in a blog post reporting our clients’ aggregate ad spend across the major search engines, we noted that Google’s slice of the pay-per-click pie increased across 2007. I wrote:
“We never sat down and had meetings about moving budgets. Rather, our systems noticed that ever-so-slightly better clicks could be had on Google and so shifted spend there. ‘Better’ in this context means ‘more likely to generate sales dollars or margin dollars for our clients.’”
This shift from Yahoo to Google was imperceptible on the day-to-day scale. The search wars are fought one click at a time. One bid at a time. A penny here and a penny there. But looking back over ’07, the trend becomes clear: across our client base, Google won 5 points of share at Yahoo’s expense.
Danny Sullivan, Search Engine Land’s illustrious Editor-In-Chief, emailed me some follow-up questions about how we buy clicks for clients (answer: carefully!), and how we allocate budget across the engines (answer: we don’t!).
At Danny’s suggestion, I’ll respond to his questions about that post more fully here. Danny’s words are presented in bold. I’ll also show an Excel analysis you can run on your own PPC data to explore your efficiency vs. volume trade-off curve.
Danny Sullivan: OK, so I wanted to understand more. Did you have a set budget?
Most of our clients tell us to buy as many clicks as we can for them that meet their economic goals. Those goals could be an ROI target, an A/S goal, a CPO goal, a margin goal, whatever.
So, in general, no, we don’t have fixed dollar budgets. Instead, we have efficiency budgets. If the clicks are converting, we increase the spend.
That post was about our base of 100+ clients in aggregate. A small number of our clients do set hard dollar budgets, but they’re in the minority.
Digression: I recall one client, a well-known national store retailer, who in early December of 2005 reached their annual search advertising budget. They instructed us to turn off their ads and go dark, right before Christmas, right in the thick of their hottest selling season of the year, when their ads were printing money, because they had hit the budget number they had set months earlier. Ouch! Happily, this retailer changed their instructions to us for ’06 and ’07, and now budgets by efficiency, rather than by hard dollar amount.
Even when we manage to absolute dollar budgets, we get better results managing spend by managing bids and matchtypes. We often avoid the engines’ budgeting mechanisms. In their well-intentioned desire to serve advertisers, the engines’ ad-serving algorithms sometimes do dumb things to campaigns when advertisers set daily caps.
DS: Usually, people complain they can’t get enough traffic.
There’s ample traffic. What is scarce is good traffic. For each retailer, there’s a finite supply of clicks which meet their performance metrics.
When we optimize campaigns for new clients, we often see big improvements by adding terms, refining match types, and bidding rationally.
But at some point there’s a plateau.
That plateau is a function of how many qualified humans are searching for your goods and services, and it imposes a limit on how large your campaigns can profitably go. Some retailers don’t like to hear that, but it is true.
Busting through that plateau requires lowering your profitability goals, improving the conversion of your site, or changing your business fundamentals (merchandising breadth, pricing strategy, shipping rates, etc.). Bidding and terms and match types and copy can take you far, but only so far.
DS: You hinted at this [not enough traffic] with Microsoft.
Yes, we reported that in 2007 Microsoft clicks performed better for our clients, on average, than Google clicks.
However, for each Microsoft click we purchased for clients, we bought 13.6 clicks on Google(!). That’s a huge difference in scale, and it makes the Google vs. Microsoft performance numbers incommensurate.
Suppose there’s a private high school with a handful of students and a low student-to-teacher ratio, and they send 95% of their graduates to 4-year colleges. And let’s suppose down the block there’s a gigantic and diverse inner city high school with thousands of kids, and the big school sends 70% of their graduates to 4-year colleges. Even with a lower college matriculation rate, the public school is demonstrating far greater teaching excellence, because they’re achieving their results across a heterogeneous population and at a much larger scale.
That’s what’s impressive about Google: they’re delivering quality and quantity.
DS: So if Google is converting so much better, why are you spending at all with the other players?
Just because Google has on average better click quality than Yahoo doesn’t mean that there aren’t great clicks on Yahoo. There are. Most advertisers will find some great-performing phrases on Yahoo. Most advertisers will find some great-performing phrases on Microsoft too. And perhaps some on Ask. And maybe even some on TinyEngineNobodyHasHeardOf.com.
The issue is how many great clicks can you get.
There’s a bottom cut-off: if the inventory of good clicks on a particular engine isn’t above some minimum threshold, there’s a point at which that engine isn’t worth management attention or robot attention or tech integration costs or accounting hassle. And so advertisers just ignore that engine altogether. That is the barrier faced by TinyEngineNobodyHasHeardOf.com and NewStartupSearchEngine.com. And that’s one of the reasons Microsoft wants to buy Yahoo — as the #2 engine, Yahoo always gets considered. Sometimes, even at #3, Microsoft doesn’t.
DS: My assumption is that you can’t spend all you want on Google or you feel you need some visibility in these other places.
Yes, our clients would like to spend boatloads more on each engine, if they could do so profitably. Sadly we/they can’t, due to decreasing marginal returns on additional advertising.
For our clients buying search to drive profits — and such advertisers comprise the bulk of our client base — we don’t have any a priori need to spend any amount of money on any of the engines — we just buy what works.
To make this more concrete, here’s an Excel analysis anyone can do to explore the PPC quality versus quantity trade-off. I’ll do this analysis using data from one of our clients, a well-known specialty retailer with a website, nationwide stores, and catalogs.
To date, we’ve tested 176,903 phrases for this client on Google. (More on Yahoo later.) Some of these phrases didn’t pan out, either because they had poor performance or they lacked impressions. Of the 176,903 phrases tested, currently 11% (20,152) are active and regularly generating good clicks on the engines.
If you’re scratching your head over an 89% drop-off between ‘tested phrases’ and ‘phrases regularly generating good quality clicks’, stay tuned for the second installment of this post next Monday, where I’ll talk about the long tail and the comprehensive term list approach using data from this same retailer. If you want to catch alotta fish, you gotta toss many hooks in the water.
OK, here’s that spreadsheet analysis for you to try on your data.
For each engine, make a spreadsheet with these columns:
- Phrase
- Brand Phrase Flag
- Clicks
- Ad Cost
- Resulting Sales
- Resulting Orders
If it’s too many rows for a spreadsheet, move up to a database.
If you assign different tracking codes to distinguish different ad copy and destination URLs — that is, if your tracking is more granular than phrase, which is a very smart idea — then for this spreadsheet, roll up your performance data by phrase.
As for time period, take performance over the last month. The time period over which you evaluate ads, both in days and in clicks, matters a great deal for effective PPC bidding, and differs between head and tail terms. Let’s gloss over those details here and just take last month to keep it simple.
Now, add these derived columns to your spreadsheet:
- CPC (cost over clicks)
- SPC (sales over clicks), and
- A/S (adspend over sales).
For this example, we’ll use A/S as a proxy for profit. This is a decent first-order approximation, and pretty accurate if your various product categories have similar margins.
Sort this sheet by A/S ascending (primary) and sales descending (secondary). This orders your phrases from best to worst.
Now add in columns for cumulative sales and cumulative ad cost. By “cumulative sales and cost,” I mean the total sales and cost down to and including the current row of the spreadsheet. Here’s an example spreadsheet showing the formulas for computing cumulative sales and costs.
Compute cumulative A/S by dividing cumulative ad spend by cumulative sales.
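If you’d rather script the steps above than build them in Excel, here’s a small Python sketch using pandas. The phrases and numbers are made up for illustration; the column names are assumptions, so rename them to match your own export:

```python
import pandas as pd

# Hypothetical per-phrase data; swap in your own export.
df = pd.DataFrame({
    "phrase": ["brand sku 123", "red widgets", "widgets", "cheap widgets"],
    "clicks": [2, 500, 2000, 800],
    "cost":   [0.42, 150.00, 900.00, 600.00],
    "sales":  [2899.00, 5000.00, 18000.00, 4000.00],
    "orders": [1, 40, 150, 30],
})

# Derived columns: CPC (cost over clicks), SPC (sales over clicks),
# and A/S (ad spend over sales).
df["cpc"] = df["cost"] / df["clicks"]
df["spc"] = df["sales"] / df["clicks"]
df["a_s"] = df["cost"] / df["sales"]

# Sort best to worst: A/S ascending (primary), sales descending (secondary).
df = df.sort_values(["a_s", "sales"], ascending=[True, False]).reset_index(drop=True)

# Cumulative sales and cost down to and including each row,
# then cumulative A/S = cumulative ad spend / cumulative sales.
df["cum_sales"] = df["sales"].cumsum()
df["cum_cost"] = df["cost"].cumsum()
df["cum_a_s"] = df["cum_cost"] / df["cum_sales"]

print(df[["phrase", "a_s", "cum_cost", "cum_sales", "cum_a_s"]])
```

Because the rows are sorted best-first, the cumulative A/S column can only get worse (rise) as you move down the sheet, which is exactly the trade-off the next graphs show.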
Still with me? You should have a sheet that looks something like this:
I scrolled down a thousand rows in that screen image to reach below the handful-of-clicks-one-order ads, as these have essentially zero A/S. For example, the single best A/S phrase for this retailer during this period was a two word phrase, a manufacturer’s brand name with a SKU model number, which consumed 42 cents of ad cost, two clicks at 21 cents each, and generated one $2899 order. That’s a mind-blowing A/S of 0.014%. Sweet!
Now, plot cumulative ad spend against the corresponding cumulative sales.
You’ll get a trade-off curve, or trade-off horizon, something like this.
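If you’re scripting rather than charting in Excel, a minimal matplotlib sketch of this plot looks like the following. The cumulative spend and sales figures here are invented stand-ins; use the cumulative columns from your own sheet:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt
import pandas as pd

# Assumed: rows already sorted best-to-worst, with cumulative columns.
df = pd.DataFrame({
    "cum_cost":  [0.42, 150.42, 1050.42, 1650.42],
    "cum_sales": [2899.00, 7899.00, 25899.00, 29899.00],
})

fig, ax = plt.subplots()
ax.plot(df["cum_cost"], df["cum_sales"], marker="o", color="maroon")
ax.set_xlabel("Cumulative ad spend ($)")
ax.set_ylabel("Cumulative sales ($)")
ax.set_title("Efficiency vs. volume trade-off curve")
fig.savefig("tradeoff_curve.png")
```

With tens of thousands of phrases instead of four, the markers crowd together into the smooth frontier described below.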
Every point on this graph corresponds to an active search phrase running on Google for this client. There are 20,152 maroon points on this graph, so close together they blur into a line.
I marked three regions on this graph: “I”, under the curve; “E”, on the curve itself; and “U”, above the curve.
You’d never want to be in region “I”. “I” stands for “inefficient.”
Suppose this retailer had a portfolio of search phrases generating $800K in sales via $80K in ad spend. Not so good.
Done right, the graph shows you could generate $1.4m in sales from the same $80K in advertising. $1.4m is much better than $800k.
The curve “E” denotes the efficient trade-off curve. If you’re optimizing your portfolio reasonably well, the curve represents the best sales you can get for each level of ad spend. You can’t do better than being at some point along the “E” curve.
The region above the curve, labeled “U” for “unreachable,” is beyond the horizon if the curve is optimal.
So, which point along the “E” curve is best?
A trick question!
There’s no right answer for all retailers. It depends on how the advertiser values the top vs. the bottom line, sales vs. profits.
You can compute the point on the curve that maximizes profits (see How Much To Advertise and the accompanying Excel model). But not all retailers seek to maximize their bottom line, and that’s not wrong. Many place a higher premium on top line, or new customer acquisition, or whatever.
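As a sketch of that profit-maximizing calculation: if you apply a single blended margin rate (a simplification, as noted earlier), profit at each point on the curve is margin times cumulative sales minus cumulative spend, and you just pick the point where that peaks. The 30% margin and the curve points below are illustrative:

```python
# Profit at a point on the trade-off curve:
#   profit = margin_rate * cumulative_sales - cumulative_ad_spend
# The margin rate and curve points are hypothetical.

MARGIN_RATE = 0.30

# (cumulative ad spend, cumulative sales) pairs, best phrases first.
curve = [
    (10_000, 200_000),
    (40_000, 900_000),
    (80_000, 1_400_000),
    (120_000, 1_600_000),
    (160_000, 1_700_000),
]

def profit(spend, sales, margin=MARGIN_RATE):
    """Margin dollars generated, net of advertising cost."""
    return margin * sales - spend

best = max(curve, key=lambda point: profit(*point))
for spend, sales in curve:
    print(f"spend ${spend:>9,} -> sales ${sales:>11,} -> profit ${profit(spend, sales):>11,.0f}")
print(f"Profit-maximizing point: spend ${best[0]:,}, sales ${best[1]:,}")
```

Note that in this toy data the profit peak is not at the highest spend: past a certain point, the marginal clicks cost more than the margin they bring in, which is the diminishing-returns plateau discussed earlier.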
Here’s that curve again, with absolute ad spend replaced by A/S.
This graph highlights the trade-off between the retailer’s choice of advertising efficiency and resulting sales.
It always comes down to the same choice eventually: you can have a super-efficient small program, or a less efficient larger program.
OK, back to Danny’s question. Here’s the same graph again, showing phrases on both Google and Yahoo.
The 20,152 maroon points again represent Google. They haven’t changed from the prior graph. There are an additional 2,634 little blue circles, most clustered and overlapping on the left side of the figure. The little blue circles represent Yahoo ads, placed along the same trade-off continuum.
See how for this advertiser the blue circles thin out pretty rapidly as the curve heads right.
It isn’t that we didn’t try the same 176,903 phrases on Yahoo as on Google. We did.
It is that, compared to Google, many fewer of these 176,903 phrases had, on Yahoo, both (a) search inventory and (b) sufficiently high performance. And sadly, many of the leftmost points are retailer brand phrases, which often aren’t incremental.
Back to Danny.
DS: But also, is traffic a factor? I mean if Google has more traffic, you’re getting more searches there, more clicks there, and that produces a spend change too, doesn’t it? Even if you don’t do anything?
Again, it isn’t just traffic; it is traffic that converts.
DS: Of course, it’s also your systems that are making these moves.
Yep. You need solid technology to manage campaigns at this scale. Systems and algorithms do matter a great deal.
For example, it isn’t smart to “explore the trade-off spectrum” by bidding each ad up and down the page to determine its response frontier. That’s an inefficient and costly heuristic to explore the trade-off curve.
DS: But is part of it due perhaps to Yahoo getting cheaper? I mean, they have an entire new system. If it’s costing less to be on Yahoo, that stinks for Yahoo, but it’s not necessarily a sign that Yahoo isn’t working well, is it?
We don’t see Yahoo getting “cheaper.”
“Cheap” or “costly” clicks are more than just low or high CPCs. Some phrases are overpriced at a penny; others are a fantastic bargain at ten bucks. You can’t just compare CPCs, you need to compare click quality (A/S) too.
DS: I’d love to see you discuss these types of things. Cheers, Danny
Thanks for the opportunity to respond today!
Some opinions expressed in this article may be those of a guest author and not necessarily Search Engine Land. Staff authors are listed here.