It’s been well established that a core—predominant, even—component of the Google AdWords quality-based bidding formula is historical CTR (clickthrough rate). New keywords get treated a bit differently, though; the algorithm must predict CTR based on the similarities of your ad group’s characteristics to historical data in Google’s vault.
If you read through Google’s official descriptions, you’ll also see references to “other relevancy factors.” What are these mysterious factors? Presumably they amount to various facets of your campaign matching the user’s search intent. Basic relevancy stuff (at least, basic if you’ve been around the search game awhile).
They aren’t fully defined or disclosed anywhere, but one thing’s fairly clear: reading the official FAQs can only take you so far. In-the-trenches experts tend to develop a feel for what works, and why. Some folks even take it the extra mile and try to reverse-engineer the whole contraption. To me, a lot of them take it about a mile too far. I think focusing too heavily on the “what” as opposed to the “why” will lead you down blind alleys. The “why” is how we make sense of things.
Beware of expertise that only covers non-exceptions
Some practitioners in the business can boast direct experience with millions of dollars’ worth of ad spend, but the snag is that if they’re always working with the same kinds of accounts—let’s say conventional, respectable, large-scale catalog retail—they’re missing out on the fun and frolic that occurs in accounts that don’t so easily attain “Great” quality scores.
Other shops, like mine, have had the enormous pleasure of working with such “unproblematic” accounts, to be sure, but we’ve seen our share of offbeat exceptions, too. That kind of diversity is why even my competitors call me “the world’s foremost AdWords Quality Score expert!” Now I realize they’re saying that sarcastically to knock the wind out of my sails a bit, but if I can fight through the competitive ribbing and good-natured sarcasm, let’s see if there isn’t some insight left.
Initial quality scores—can we learn from them?
I’ve talked about the exceptions (the things that get whacked with poor quality scores) so much in conferences and seminars that I’m getting a bit discouraged by it. I swear, some of Google’s biggest enemies want to hire me to work on their campaigns (unsuccessfully) for a while just so we can cry in our beer together. Professional Empathizer for Hire: fee $10,000 a week. That’s a new shingle I’d like to hang out.
I can even empathize with myself. Recently, I tweaked some ads in my own longstanding B2B campaign—because we relaunched our website. These are keywords I was perfectly willing to pay $2-3 for. Whack! Suddenly everything’s at ten bucks minimum bid! My quality is “poor” and due to the geotargeting in the campaign, the keyword status tool won’t let me investigate whether there is something wrong with the landing page or whether it’s the keywords. I know enough to know that the former would be a big problem (because someone at Google would probably be actively hating our site), and the latter, well, that would just be a mistake, as the keywords were highly relevant.
Rather than bother my long-standing agency rep contacts, I thought I might try clicking on the automated support request for that account. A hilarious exchange ensued with a diffident rookie rep spewing boilerplate homilies about relevancy. Aha! So that’s how you get treated if you’re an “ordinary” AdWords advertiser. There’s a problem. Most advertisers wouldn’t understand how to escalate the issue. It’s easy for me—I can write this column and get it off my chest.
But that’s depressing stuff. Let’s see if there’s something more instructive in the “Great” quality campaigns I’ve set up more recently.
The key point here is that you can see your minimum bids and keyword quality status instantly upon setting up ad groups. Chalk another one up for the paid search laboratory. When the scores come back “Great,” especially for an unusual, newer, non-retail type site, I figure there must be something positive to learn.
Keyword relevancy and overall navigational consistency
I finally got budget clearance to resume building AdWords traffic for a favorite client, a content site in the home improvement space. Because I own a piece of the company, I have some incentive to get in there and build it myself. Having seen so many initially poor quality scores across a variety of accounts in the past few months, I decided to be as careful as possible and follow the type of advice I so blithely give to others but all too rarely have the chance to execute for something I “own.”
Step one, of course, was to have a superior landing page strategy. The site lends itself to very targeted pages in a coherent information architecture. There is meaty content on these pages and they are well-labeled. The key would be to send visitors to highly granular landing pages only.
Step two was to hand-build the ads, including granular topical keywords in title and body copy, as well as some geo-specific cues that matched up with the custom metropolitan-area geotargeting I’d set up with the campaign.
Step three was key: start with highly targeted, commercially relevant keywords. If there’s one thing I know, it’s that setting up really broad words, or tossing in every keyword a keyword tool suggests, is a great way to develop low quality in a hurry, even if you don’t get slapped with it at first. Why not tighten down and cherry-pick the visitors who are going to be the best targeted ones for these landing pages? Among other things, this would raise conversion rates to desired actions and annoy fewer people. What’s interesting here is that these are the visitors who might click and use your site in such a way as to build up strong quality scores for you over time; yet somehow Google is getting better at predicting just this even when there is no data.
These may seem like obvious points, though. Putting account history aside (this one was so-so from past efforts), why did I see “Great” for so many keywords and for a brand new campaign, when so many similar campaigns start out in the high end of OK, trending towards Poor? There must be a few things about the website that the AdsBot likes.
And if you did all three steps above and start with a Poor quality score anyway, chances are very high that Google considers you evil in some way, at some broader level. Are you? Be honest. No? You’re good? Then you perhaps got caught in some kind of false positive—like setting up your groups when the site was down—and are also caught in an exchange with an unsympathetic AdWords rep who doesn’t seem to want to understand the problem. I empathize.
What does that little AdsBot critter look at, anyway?
Experienced advertisers recall that as you set up ad groups, that jaunty set of multicolored balls dances across your screen as you’re informed, “We want to be sure your website is functional when a user clicks your ad. We’re also making sure your ad text complies with our Editorial Guidelines. This can take several seconds. You’ll be taken to the next page when we’re done.”
I set up a new ad group just now, to make sure I had the verbatim quote. It took twelve seconds, in this case. (Extremely astute experts may well know a whole lot more about AdsBot. Warning: I’m not the “world’s foremost AdsBot expert.” If it ever comes to that, it’s time to take up a new profession outside of marketing.)
Making sure the site is up? Checking the ad text for violations? Twelve seconds?
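For perspective on how little time the “site is up” part needs, a basic liveness probe is conceptually just an HTTP fetch with a timeout. Here’s a minimal sketch in Python, purely illustrative; Google’s actual AdsBot checks are not public, and the function name and timeout are my own assumptions:

```python
import urllib.request


def landing_page_responds(url, timeout=10):
    """Illustrative 'site is functional' probe: fetch the URL and
    confirm it answers with a successful HTTP status within the
    timeout. (Hypothetical; not Google's actual AdsBot logic.)"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            # Treat any 2xx response as "the page is up."
            return 200 <= resp.status < 300
    except Exception:
        # DNS failure, timeout, connection refused, HTTP error, etc.
        return False
```

A check like this takes well under a second for a healthy site, which is part of why a twelve-second pause suggests AdsBot is doing rather more than pinging your server.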
What else is AdsBot doing, do you suppose? In terms of landing page and website quality guidelines, the bot could be doing anything from checking to see if there are specific signals of evil on the landing page, to evidence of broader evil being done by your company or website(s). AdsBot doesn’t say.
Certainly, it would have been impolite to say: “We want to be sure you aren’t evil. This can take several seconds or a lifetime, depending on your degree of evilness and other relevancy factors. By the way, your shoelace is untied.” Sometimes, the less said, the better.
But I certainly liked what AdsBot had to say in this recent case. Your keyword quality is Grrreat!
Here are a few theories as to why. Google may have data about the website as a whole that indicates real user satisfaction, or some kind of vibrant community. (Pssst—they run a search engine, too. I don’t think they put PageRank in there, but that’s likely because PageRank isn’t particularly cutting-edge.)
The AdsBot, or Google in general, might also find the semantic meaning of your landing page understandable in the context of a good site architecture: more than just body copy, the site drills down nicely to the landing page in question, with good quality headings, title tags, well-formed keyword-rich URLs, and breadcrumb navigation.
No red flags were found to derail this happy picture. For example, there aren’t tons of text link ads on the site, so the goal isn’t pure arbitrage. We haven’t registered a bunch of domains, hoping to map out some kind of ill-conceived “cookie-cutter campaign” strategy, and our company information is verifiable in our domain record. We aren’t part of any kind of “link farm.” (That’s just the initial “cut” at quality. The data that builds up from there [such as low CTR], or editorial interventions, could sink your quality score like a stone.)
I could go on.
But I think this gives you an idea of the types of things the AdsBot could be looking at initially, in Google’s quest to weed out bad guys, and to give a little boost to advertisers who do those little extra things to improve relevancy and the user experience.
Great quality scores are possible.
That’s tough to say without sounding smug, because a perfectly respectable site or keyword can land in the quality doghouse, too. I empathize, brothers/sisters. I’ll even go for a beer with you. You’re buying.
Opinions expressed in the article are those of the guest author and not necessarily Search Engine Land.