The Mysteries of Ad Quality Revealed (Yet Again)


Perhaps it was lucky fate that Anders Hjorth (of Relevant Traffic, a European SEM agency) and I didn’t have too much time to compare notes going into the panel “Tuning Ads for Quality Score” at SMX Stockholm last week. Fortunately, the presentations didn’t overlap too much. The interesting thing is the overlapping conclusions we drew from what appear to be similar client experiences.

Here I’d like to give you yet another update on quality-based bidding, drawing on the common conclusions of detailed analyses such as mine and Anders’ (and where they overlap); look at popular perceptions of Google’s initiative, as gleaned from talking with typical attendees; and finally, look at how the overall task of paid search stacks up against the strategies typically required of those vying to rank higher on the organic side.

Overlapping conclusions

Whether you call it reverse-engineering or just plain observation, Anders and I agree that there is an extremely strong correlation between CTR and overall quality score, as expressed by minimum bids in the AdWords interface. This corroborates, perhaps even more strongly than we expected, the early comments we received from various AdWords product developers that the formula “predominantly” incorporates CTR.
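For the curious, here’s the kind of back-of-envelope check that underlies an observation like this: export keyword-level stats from the AdWords interface and correlate CTR with the minimum bid Google assigns. The sketch below is purely illustrative; the file and column names are assumptions, not anything Google provides under those names.

```python
# Sketch: test the CTR / minimum-bid relationship on your own account data.
# Assumes a CSV exported from your keyword report with hypothetical columns
# "ctr" (e.g. 0.034 for 3.4%) and "min_bid" (the minimum bid shown in the
# AdWords interface). An observation aid, not Google's formula.
import csv
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

ctrs, bids = [], []
with open("keyword_report.csv", newline="") as f:
    for row in csv.DictReader(f):
        ctrs.append(float(row["ctr"]))
        bids.append(float(row["min_bid"]))

# If the correlation story holds, expect a strongly negative number:
# higher CTR, lower minimum bid.
print("correlation(CTR, min bid) = %.2f" % pearson(ctrs, bids))
```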

This is a stark reminder that it’s the anomalies that most often get discussed in the wild.

First, new accounts have no historical CTR to go on, so Google looks at keyword relevance signals in a predictive way. That could mean a long “dig out” period of five to eight weeks if you start an account off on the wrong foot. Among other things, it means we need to counsel excessive diligence when setting up new accounts, including futzing with the dreaded negative keywords to boost CTRs.
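Some of that diligence can be semi-automated. Here’s a rough sketch of mining a search query report for negative keyword candidates, the idea being to stop irrelevant queries from dragging down your CTR early on. The data format and thresholds are assumptions you’d tune to your own account.

```python
# Sketch: flag negative-keyword candidates from a search query report.
# Rows are (query, impressions, clicks); thresholds are illustrative only.
def negative_candidates(rows, min_impressions=200, max_ctr=0.001):
    """Return queries that burn impressions without earning clicks:
    exactly the CTR drag you want out of a brand-new account."""
    return [
        query
        for query, impressions, clicks in rows
        if impressions >= min_impressions and clicks / impressions <= max_ctr
    ]

rows = [
    ("free widget downloads", 1200, 0),  # irrelevant intent, kills CTR
    ("buy acme widgets", 300, 21),       # keep: this one converts
]
print(negative_candidates(rows))  # -> ['free widget downloads']
```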

Second, other problems you might typically see are related to website or landing page quality. Those are really separate issues that require separate discussion. Google, for some reason, may not like your site or business model, but as long as you pass that hurdle, the remaining ranking factors are not so mysterious.

And finally, some keywords are just different. Either they are more likely to give you low quality scores, period (it’s hard for a commercial advertiser to be relevant on them, Google feels they are controversial, or Google has other hidden and proprietary reasons to downgrade the quality of certain classes of words), or they have different industry averages, so what counts as a “high” CTR in one industry counts as a low one in another, and vice versa. In these cases, unless you put good CTRs on the board over time, you’ll be stuck in “poor quality score limbo” forever.
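To make the “industry averages” point concrete: one rough sanity check is to index each keyword’s CTR against whatever vertical benchmark you trust, such as your own historical averages. The benchmark figures in this sketch are invented for illustration.

```python
# Sketch: judge CTR relative to a vertical benchmark rather than in the
# absolute. Benchmark figures below are invented, not real industry data.
BENCHMARK_CTR = {"insurance": 0.02, "gadgets": 0.05}

def ctr_index(ctr, vertical):
    """1.0 means at par with the (assumed) vertical average."""
    return ctr / BENCHMARK_CTR[vertical]

print(round(ctr_index(0.03, "insurance"), 2))  # 1.5: "high" in this vertical
print(round(ctr_index(0.03, "gadgets"), 2))    # 0.6: "low" in that one
```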

As I mention all these complications, it’s important to realize that quality scores generally come back to CTR, if all goes well. And the majority of the time, all does go well. Google likes high CTRs, and that hasn’t changed since 2002.

Perhaps because my consulting firm has been a bit more open to trying out unusual projects, I did disagree with some conclusions drawn by attendees of this panel. Many people are completely unaware of the “anomalous business models” that Google just plain hates, and assume there is always a strategy to work around poor quality scores. Not always so. We took on an account that focused heavily on data collection, and Google simply won’t let this stuff get off the ground. On some accounts, it’s more trouble than it’s worth to fight City Hall. It’s not just a ’bot that isn’t into your landing page; it’s likely manual intervention by editors who are unlikely to change their opinion of your site.

(Now that might lead me to joke that there are two ways you can bribe Google, by slipping an editor a bag of cash, or by just paying the inflated minimum bids, but such a joke would be inaccurate. Since the editors are unseen and it’s never acknowledged that anyone in particular saw or adjusted your quality score, paying the inflated minimum bid is in fact the only way to activate your ads in such situations, unless you can somehow get word through the channels to re-evaluate your situation. But this latter wouldn’t give you a good guide as to who to send the big bag of cash to.)

Popular perceptions of quality-based bidding

You’ll have noticed that this last parenthetical reference begins to lead down a rather cynical path. Trust me, after speaking frankly with audience members at the SMX conference, I discovered my cynicism levels are low compared with many! A reading of the temperature in advertiser-land suggests that many in the SEM community do not believe in the letter of what Google says about their quality assessment process.

Google has explicitly told us that they do not incorporate variables like length of advertiser tenure or total ad spend in the quality score formula, because doing so would create perverse incentives that have nothing to do with relevance or quality from the standpoint of the user. Audience members at the SMX conference doubted this claim. By manually adjusting the quality on important (read: big) accounts, some argued, Google staff can essentially green-light them unless they do major evil.

I’m not, and audience members were not, even suggesting that’s a bad thing. Large brand advertisers and those who represent them do not want to be “hassled” with quality score issues as determined by some ’bot or formula. Say a site is not very navigable and offers what is generally seen to be a poor user experience (written 100% in Flash, inaccessible to AdsBot, and so forth); this might hit an “unknown” advertiser with some slight increases in minimum bid. The exact same thing might not affect a brand advertiser with a big budget. Adhering to the letter of policy, Google staff can always say something along the lines that high quality accounts get an “account overlay” variable that rewards a whole account for high historical CTRs, even if some parts aren’t up to snuff. You could take that logic to another level if the big advertiser has multiple accounts in an MCC. What’s to say the “account-wide overlay” doesn’t apply to MCCs, such that a set of accounts with high quality across the board can exempt a couple of low-quality ones in the same stable?
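To make the speculation concrete, here’s a toy model of how an “account overlay” might behave, blending keyword-level CTR with account-wide history. To be clear: this is pure conjecture on my part, not Google’s formula, and the blending weight is invented.

```python
# Toy model of the speculated "account overlay": pure conjecture, not
# Google's actual formula. A strong account-wide history pulls a weak
# keyword's effective quality upward.
def effective_quality(keyword_ctr, account_ctr, overlay_weight=0.3):
    """Blend keyword-level CTR with account-level historical CTR.
    overlay_weight is an invented parameter for illustration."""
    return (1 - overlay_weight) * keyword_ctr + overlay_weight * account_ctr

# The same weak keyword (0.5% CTR) in a weak vs. a strong account:
print(round(effective_quality(0.005, 0.005), 4))  # 0.005: no rescue
print(round(effective_quality(0.005, 0.060), 4))  # 0.0215: the "overlay" rescue
```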

All else being equal, there is nothing to say that Google won’t “greenlight” some advertiser that is a world class brand, to ensure that world class brands remain happy advertisers and to ensure that users see world class brands in the advertising spaces on Google.com. There is also nothing that says that Google can’t go ahead and remove that favorable treatment at some later point, if they feel like squeezing more revenue out of these obviously deep-pocketed advertisers. Where is it written that whole accounts can’t be simply “greenlit” and “yellow-lit” at will? Perhaps this isn’t the way it always works, but until the system becomes simpler or more transparent, there is that lingering doubt.

Calling this “cynical” would in fact be unfair if it turned out to be 100% true. So I’ll take the overlapping consensus as fairly savvy on this one, too. I have my suspicions that there is a fair amount of manual tweaking going on; so do most advertisers. Can we all be wrong?

How elaborate is the strategy?

Smart folks on the SEO side of the business are a bit ahead in terms of gauging the “overlapping consensus” about ranking factors in Google’s algorithm. This only makes sense; paid search advertisers have only had to worry about a multitude of ranking factors since August 2005, and landing page quality only since December 2005.

[An enterprise-strength SEO strategist might quibble with some questions of emphasis among the 93 factors mentioned on SEOmoz, organize them into actionable bundles, and add factors not included on the list. Fair enough: there are a lot of factors, and the consensus as to how important they are can’t be all wrong.]

In terms of strategy, it might be worth paying more than lip service to the observations made by Google spokespeople and product architects to the effect that their thinking about relevance and quality is similar (not the same, but similar) on the paid and organic sides.

Let’s think in broad terms about four of the challenges, or macro ranking factors, facing SEOs: on-page factors, external reputation, site architecture and user experience, and behavioral elements (not well covered on the SEOmoz list). If they’re important to ranking on the organic side, how tough is your job on these issues on the paid side?

  • On-page factors. In terms of on-page factors like headings, keywords in body copy, and the like, paid search advertisers still get off easily. AdsBot may scan your landing pages to see that they work and that there is at least some text on the page, but it’s unclear just how relevant you have to be here. The fact is, there may be anywhere from a couple to a few hundred potential advertisers vying for position on a certain keyword or overlapping keywords. That’s a lot fewer than the hundreds of thousands or millions of web pages competing for rank on many queries. Minor differences in keyword placement and emphasis are simply non-issues for the AdsBot. You’re not being held to extreme account here; in fact, a page with a product title and a picture of the product – with no description – might not hurt you, at least in ad ranking terms.
  • External reputation factors are getting more and more complex; search engines are increasingly looking for clues beyond just the structure of inbound links. Some of these techniques are being revealed in patents (should you have some time on your hands). It must be stressed again that the task of weeding out organic index spam is much, much tougher than weeding out rogue advertisers. The sheer volume of index spam is high, so the engines have to take a defensive posture and come up with rough-justice ways of keeping the offenders out (that’s why the age of a domain, and the age of a link, seem to matter so much; they’re imperfect but effective ways of keeping spammers at bay). Unless I’m completely off my rocker and Google is failing to disclose vital information, such factors as PageRank and external reputation are, at this time, at most an infinitesimal part of the ad ranking formula. Here, then, advertisers can sleep tight and safely ignore the impact of such things on their rankings. This might change, however. Anchor text on external links seems like it would be a good way to assess a site’s general meaning and quality or credibility.
  • On the macro bucket of site architecture and user experience, the fact that extremely negative user experiences count heavily against paid search advertisers should not be lost on SEO’s. Many of the pages and sites that are given poor quality scores on the paid search side would have a snowball’s chance in hell of ranking on the organic side, so there is that commonality. But take it a step further. Many SEO’s put so much effort into the details of keywords and overt architectural nuances, like keywords in filenames, that they forget the pain a user feels when a site is slow to load, or when the weight of advertising relative to content is skewed to a ridiculous extent. Think search engines don’t take any account of such matters on the organic side? Think again.
  • Behavioral/clickstream data. Finally, and only finally because I’ve already used up 1,750 words, paid search advertisers are held to relatively extreme account based on how often their listings are clicked. Secondarily, they might get into some trouble if post-click patterns (such as very short visits followed by the back button, etc.) don’t indicate a positive user experience on the website; a rough sketch of spotting that pattern follows this list. By contrast, that isn’t generally seen to be an issue in the organic world, but maybe it should be upgraded a couple of notches in people’s thinking. These clickstream and other behavioral factors are, many believe, a fairly important part of the organic ranking picture as well. Yet many organic SEOs blithely ignore user experience issues and display ignorance when it comes to the clickstream and behavioral data Google has at its disposal.
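As promised, here’s a rough sketch of spotting the “pogo-stick” pattern: a click followed by a near-immediate return to the results page. The log format and dwell threshold are hypothetical; the engines obviously work with far richer data.

```python
# Sketch: flag "pogo-stick" sessions from clickstream logs. A session here
# is a hypothetical (query, url, dwell_seconds, returned_to_serp) tuple.
def pogo_rate(sessions, max_dwell=10):
    """Share of clicks that bounced straight back to the results page."""
    bounces = sum(
        1 for _, _, dwell, returned in sessions
        if returned and dwell <= max_dwell
    )
    return bounces / len(sessions)

sessions = [
    ("acme widgets", "example.com/widgets", 4, True),    # pogo-stick
    ("acme widgets", "example.com/widgets", 95, False),  # engaged visit
]
print(pogo_rate(sessions))  # 0.5
```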

The fact that Google acknowledges similarities in their thinking on paid and organic search is just the beginning. The real fun starts when you dig into the details of exactly what they’re looking at. Take a look at the logic on both sides and recognize that you, too, should un-silo your strategy where appropriate. But don’t get too stressed out about variables that seem to have minor impact in comparison with the biggest ranking factors (such as CTR and bid, on the paid search side).

Andrew Goodman is the founder and principal of Page Zero Media and author of Winning Results with Google AdWords. The Paid Search column appears Tuesdays at Search Engine Land.

