A Rational Plea For Paid Search Syndication Controls

Traffic value varies widely not just by the words users type into the search box, but also by whose search box they use. Syndication network partners vary in quality, and giving advertisers control over how much they pay for traffic from each would be a win for both the advertisers and the engines.


A few years ago, I wrote a blog post referring to the syndication networks as “sin-dication”. I’m trying to be nicer these days as a tribute to Alan Rimm-Kaufman.

Fact: Traffic coming from a search engine itself is of significantly higher quality than traffic coming from its syndication networks.

This has two important consequences:

  1. Bidding the same for traffic on the Google syndication networks as on Google.com leaves opportunity on the table.
  2. By not allowing advertisers to differentiate bids based on the quality of traffic driven by each network partner, Yahoo and Google lose money as do the advertisers.

Theory

Let’s take some basic paid search principles and show how they apply to the syndicate partners. I’m going to make some simplifying assumptions to prevent us from getting mired in detail and missing the bigger picture.

The value of traffic in paid search varies terrifically by the user’s search and the ad it fires. Since we can’t really bid based on user search, let’s see what that looks like as a function of keyword:

[Figure: traffic value per click, by keyword]

Cool! Now let’s say the advertiser is willing to put 80% of the true value of the paid search income back into marketing to drive the top line while still making some immediate profit.

Ideally, the bids would look like this:

[Figure: ideal bids, set at 80% of each keyword’s traffic value]

By matching the bids to the value of the traffic on each keyword we generate as much traffic as we can afford from each keyword, thereby maximizing revenue within our efficiency requirements.
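To make that concrete, here is a minimal sketch of value-based bidding. The per-keyword values are hypothetical; only the 80% target comes from the example above.

```python
# Hypothetical traffic values (expected revenue per click); not real client data.
traffic_value_per_click = {
    "red widgets": 2.00,
    "widgets": 1.25,
    "cheap widgets": 0.50,
}

EFFICIENCY_TARGET = 0.80  # put 80% of traffic value back into marketing

# Ideal bidding: each keyword's bid tracks its own traffic value.
ideal_bids = {kw: round(value * EFFICIENCY_TARGET, 2)
              for kw, value in traffic_value_per_click.items()}
print(ideal_bids)  # {'red widgets': 1.6, 'widgets': 1.0, 'cheap widgets': 0.4}
```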

There is, however, another way to achieve the efficiency objective. Namely, by bidding to the average value of the collection of keywords.

[Figure: flat bids, set at 80% of the average traffic value across keywords]

In this simplest of models, assuming that the underbidding and overbidding don’t move the average value of the traffic—a big assumption and probably never the case—you still end up spending 80% of the paid search income on marketing, which was the target all along.

So if either way you hit the desired efficiency target, what’s the difference?

Volume. By wasting money on some terms, and missing opportunities for more sales on others, the poor bidding methodology hits its efficiency objectives but does so at the expense of sales volume.
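A toy example with hypothetical numbers shows both effects at once: under the fixed-mix assumption above, the flat bid still nets out to the 80% target, but only by pairing waste on weak keywords with lost volume on strong ones.

```python
# Two hypothetical keywords: value per click, and clicks earned at the ideal bid.
keywords = {
    "red widgets":   {"value": 2.00, "clicks": 1000},
    "cheap widgets": {"value": 0.50, "clicks": 1000},
}
TARGET = 0.80

# Flat bid at 80% of the average value: (2.00 + 0.50) / 2 * 0.8 = 1.00
flat_bid = TARGET * sum(k["value"] for k in keywords.values()) / len(keywords)

for kw, k in keywords.items():
    print(f"{kw}: ideal bid {k['value'] * TARGET:.2f}, flat bid {flat_bid:.2f}")
# red widgets:   ideal 1.60, flat 1.00 -> underbid, losing affordable volume
# cheap widgets: ideal 0.40, flat 1.00 -> overbid, paying up to 2x traffic value

# If (and only if) the traffic mix is unchanged, the blend still hits target:
spend = sum(flat_bid * k["clicks"] for k in keywords.values())
value = sum(k["value"] * k["clicks"] for k in keywords.values())
print(spend / value)  # 0.8 -- efficiency target met, but at the expense of volume
```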

Important corollary: Lower sales volume at the same efficiency by definition means lower advertising revenues for the engines.

Applying theory to the syndicate networks

So, if what we’ve learned is that applying the same bids to traffic of varying quality leads to fewer sales/leads for the advertiser and less revenue for the engines, how does that affect our view of the syndicate partners?

Let’s look at some data. I grabbed data from a handful of our clients and took median values by referring domain for click volume and order volume as a percentage of each client’s total traffic. I also calculated median values for how each domain’s conversion rate compared to the average conversion rate for that ad network. The comparative conversion rate data is most interesting.
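For those who want to reproduce the comparison, here’s a sketch of the conversion rate index calculation; the column names and figures are made up, but the method matches the description above.

```python
import pandas as pd

# Hypothetical click/order log: one row per client per referring domain.
log = pd.DataFrame({
    "client": ["a", "a", "a", "b", "b", "b"],
    "domain": ["google.com", "aol.com", "ask.com"] * 2,
    "clicks": [10000, 1500, 800, 20000, 2400, 1100],
    "orders": [350, 55, 12, 640, 85, 20],
})

# Each domain's conversion rate, indexed to that client's network-wide rate.
network = log.groupby("client")[["orders", "clicks"]].sum()
network_cr = network["orders"] / network["clicks"]
log["cr_index"] = (log["orders"] / log["clicks"]) / log["client"].map(network_cr)

# Median index across clients: above 1.0 means better-than-average traffic quality.
print(log.groupby("domain")["cr_index"].median().sort_values(ascending=False))
```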

[Figure: conversion rate index by referring domain, Google network]
Note: sites are ranked from left to right in descending order of traffic volume for the median client.

Notice that Google.com, AOL, Amazon, and Comcast tend to send much higher than average quality traffic: 16% to 20% better than average. At the same time, eBay, Ask, and the comparison shopping engines tend to send significantly lower than average quality traffic.

For AdWords, advertisers can and should bid differently on the Google.com domain and the rest of the network. Looking at these medians as if they were one advertiser’s data, we’d end up bidding the Google.com-only version of the account up by 16% and the Google.com + syndication network version of the account down by 33% (Google.com traffic is slightly more than 2/3 of the total, hence the disparity).
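The arithmetic behind those two adjustments is worth spelling out. Taking a 68% Google.com traffic share as an assumption consistent with “slightly more than 2/3”:

```python
google_share = 0.68   # assumed share of total traffic ("slightly more than 2/3")
google_index = 1.16   # Google.com converts 16% better than the blended average

# The blend averages to 1.0 by construction, so the syndication remainder
# must satisfy: share * 1.16 + (1 - share) * x = 1.0
syndication_index = (1.0 - google_share * google_index) / (1.0 - google_share)
print(round(syndication_index, 2))  # ~0.66 -> bid that traffic down by about a third
```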

My sources tell me we’re among the very few folks in the space doing this, which is surprising as we’ve been doing it and advocating it to others for more than two and a half years.

This two-tier approach helps materially, but it is in no way ideal bidding. Under it, we end up underbidding on AOL and Amazon traffic and still overbidding on eBay, Ask, and others. Better, but not great.

If we take a look at the same type of data from the Yahoo network, we see even greater disparities.

[Figure: conversion rate index by referring domain, Yahoo network]

Yahoo.com traffic is 35% better than the average Yahoo ad network traffic! The rest of the network averages about 42% below the aggregate average! But even among those partners, there are winners and losers.

Yahoo does not give us the opportunity to bid differently on the network, but it does let us exclude traffic from particular domains. Given the lower traffic volume on Yahoo, it can be difficult to separate signal from noise to identify those domains that send particularly low quality traffic. Sometimes looking across multiple accounts helps us spot trends that we can’t really see in a single account’s data.
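Here’s a sketch of that cross-account pooling, with illustrative thresholds rather than our actual rules:

```python
# Pool clicks and conversion rate indices per partner domain across accounts,
# and only flag a domain for exclusion once the data is thick enough to trust.
MIN_CLICKS = 5000      # don't judge a domain on thin data
EXCLUDE_BELOW = 0.50   # candidate for blocking if its index falls below this

def exclusion_candidates(pooled_stats):
    """pooled_stats: {domain: (clicks, conversion_rate_index)} across accounts."""
    return [domain for domain, (clicks, index) in pooled_stats.items()
            if clicks >= MIN_CLICKS and index < EXCLUDE_BELOW]

pooled = {"partner-a.com": (12000, 0.41),   # enough data, clearly weak: exclude
          "partner-b.com": (900, 0.30),     # looks weak, but too thin to judge
          "partner-c.com": (25000, 0.95)}   # fine, leave it alone
print(exclusion_candidates(pooled))  # ['partner-a.com']
```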

But excluding sites isn’t really what we’re after either. In most cases the traffic isn’t worth nothing; it’s just worth less. Paying the right amount for it would allow us to generate sales cost-effectively from each domain in the syndicate network and, most importantly, would allow us to push the gas harder on the high quality traffic provided by some.

Both Google and Yahoo claim to discount the CPCs from network partners, but our experience suggests the discounts aren’t adequate. Moreover, since any good bidding system bases bids on expected revenues, rather than sunk costs, the discounted CPCs wouldn’t solve the problem even if they were right. The bids might be right for the bad performers because of the discounts, but we’d still underbid on the higher quality traffic.
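A small sketch of why the discount misses the point: in a revenue-based bidding system, the bid is a function of expected value per click, so a cheaper CPC changes what we pay, not what we should bid. Numbers here are hypothetical.

```python
TARGET = 0.80

def bid(conversion_rate, avg_order_value):
    # The bid comes from expected revenue per click; realized CPC never enters.
    return conversion_rate * avg_order_value * TARGET

# A partner whose traffic converts half as well warrants half the bid,
# whether or not the engine discounts the CPC it charges us afterward.
print(bid(0.02, 100.0))  # 1.60 for average-quality traffic
print(bid(0.01, 100.0))  # 0.80 for a partner converting half as well
```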

The obvious best solution for the advertisers and the engines is to allow us to bid differently by domain. Creating separate campaigns for each domain might be too much to manage, but account-level percentage adjustments based on each advertiser’s data would be easy.
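In practice, the control we’re asking for could be as simple as a per-domain bid multiplier. The interface below is imagined (neither engine offers it), with multipliers echoing the indices above:

```python
# Hypothetical account-level adjustments: one multiplier per partner domain.
domain_multipliers = {
    "google.com": 1.16,  # better-than-average traffic: pay up for it
    "aol.com":    1.18,
    "ask.com":    0.70,  # weaker traffic: still buy it, just pay less
    "ebay.com":   0.60,
}

def effective_bid(base_bid, domain):
    # Domains without data fall back to the unadjusted keyword bid.
    return round(base_bid * domain_multipliers.get(domain, 1.0), 2)

print(effective_bid(1.00, "aol.com"))   # 1.18
print(effective_bid(1.00, "ebay.com"))  # 0.6
```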

Perhaps the hangup lies with publishers. If eBay’s revenue as a syndicate partner dropped by 40%, they might use that space for display ads rather than search, and perhaps they’d rent that space through a non-Google exchange. Legacy revenue sharing agreements between the engines and the network partners may also be a barrier.

Whatever the case, we would like to have more control over what we pay for the traffic we get from the syndicate partners. We hope you folks will join us in calling on the engines to make this change!




About the author

George Michie
Contributor
George Michie is Chief Marketing Scientist of Merkle|RKG, a technology and service leader in paid search, SEO, performance display, social media, and the science of online marketing. He also writes for the RKG Blog. Follow him on Twitter at @georgemichie1.
