Rich Snippets & Learning To Love Not Being #1
#1 rankings have a hallowed place in the minds of most SEOs. Many of us use #1 rankings as bragging rights (I’m guilty) or as one of the key metrics for campaign success (guilty again).
But long-term trends in the way search engines rank pages and display results are changing that. Now, it’s entirely possible to run a successful campaign where none of the target keywords are expected to hit the #1 position.
As search engines evolve from a tool for ranking pages to a tool for driving decisions, this is only going to get more common.
Fortunately, it’s possible to build a campaign around capturing implicit user intents, not just keyword rankings. Reframing campaigns this way will pump up conversion rates and fix a few broken metrics.
The Ongoing SERP Facelift
The most cited, most thorough data we have on search result click-throughs still comes from the notorious AOL data dump in 2006.
Using that information, it was possible to calculate that the #1 result got about 42% of the clicks, while the #10 result got about 3%.
For a while, that estimate looked too low: search engines in general got better at deciding what a given user wanted to see when they searched for a specific term, so the #1 result kept getting better.
To a degree, that’s still happening. But search engines now realize that a naive rank-ordering of every page relevant to a given term isn’t the best user experience.
(One search engine heuristic is minimizing the number of repeat searches someone has to make. Every repeat search means “No, what I meant was X,” and it’s often possible to statistically verify which X they usually mean. Google Suggest is one way to capture this information, but most SERPs are now made up of good results for the term searched, plus good results for the popular variants on the term that show up in suggest.)
Bing and Google (yes, Bing is taking the lead, at least on the PR front) are both reframing search in terms of the user intent each search reveals. That’s a powerful change.
Take a query involving a restaurant: a naive ranking might notice that the restaurant has reviews in five different highly-trusted publications, and rank each review ahead of the official site (especially if the official site is built in Flash).
But an intent-based view would identify several reasons someone might search for a restaurant:
- They may be looking for the official site.
- They might want to see a menu (so Menupages gets a boost).
- They might want to make a reservation (OpenTable gets this one).
- They might want to see a review (so it’s a toss-up: Yelp, local news outlets, or some combination thereof).
- They might be looking for directions—search engines tend to just use their own map product for this.
Suddenly, there’s no such thing as being #1 for a single keyword. Now, you can be #1 for any one of several user intents: for some queries, there’s a “#1” result below the fold.
Picking The Keywords & Targets That Work
How can you tell if one search is satisfying multiple intents? There’s an insanely simple heuristic, though it takes a little legwork. It’s a quick two-step process for every keyword you’re targeting. Type the term into Google, and:
- Check the suggestions: does one of them match the intent you’re going for? If so, then:
- Check the search results: do you see at least one result fitting that intent?
After that, it’s standard keyword research: if the site matching your intent is a nationally-recognized brand, it’s going to be tough to outrank them for that intent.
If the site that ranks is someone you’ve never heard of, with weak content and poor SEO, you have a decent shot. But you don’t just have to use keyword targeting. Within the search results page, metadata gives you tools that can help ramp up clickthroughs. In fact, that’s exactly what search engines want you to do.
What To Do: Using Intent & Metadata
This might sound like giving up on promising keywords. But another way to look at it is that it’s an opportunity: if you’re targeting a specific need that a keyword might represent, you can essentially optimize for the relevant variant on that term.
That gives you the freedom to craft your page’s search engine presence so it specifically turns off some users, if it turns other users on.
There are two obvious courses of action here: refine your metrics, and start using your page’s appearance on the SERP to match user intents, not just keywords.
On the metrics side, this means setting expectations: if you’re not targeting the #1 user intent, you won’t rank #1. But if you did rank #1 while targeting the second most popular intent, most of the extra traffic you’d get would be worthless: you’d be getting lots of bounces, and some confused users, but not a lot of business.
On the campaigning side, there’s a lot to do—and more every day. Schema.org is basically an effort to get the whole SEO community on board with the idea of ranking for intents, not keywords. So a single review looks different from a review site, which looks different from an official site, which looks different from a reservation-taking site.
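As a sketch of what that looks like in practice: review markup might be embedded with schema.org microdata like the fragment below. The type and property names (Restaurant, Review, Rating, and their properties) come from the schema.org vocabulary; the business, reviewer, and rating values are invented for illustration.

```html
<!-- Hypothetical example: schema.org Review microdata on a restaurant review page.
     The restaurant, reviewer, and scores are made up for illustration. -->
<div itemscope itemtype="http://schema.org/Restaurant">
  <span itemprop="name">Example Bistro</span>
  <div itemprop="review" itemscope itemtype="http://schema.org/Review">
    By <span itemprop="author">A. Reviewer</span> —
    <div itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating">
      <span itemprop="ratingValue">4</span> out of
      <span itemprop="bestRating">5</span> stars
    </div>
    <span itemprop="reviewBody">Great pasta; service can be slow on weekends.</span>
  </div>
</div>
```

Marked up this way, a review page can qualify for a review-style rich snippet (star rating in the listing), which signals the “review” intent to searchers before they ever click.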
Beyond Schema.org, you can modify your site’s copy to fit this paradigm, too. Bing calls it “the web of verbs.” And you can run with that in a literal sense: if your meta description starts with the literal verb that your site applies to the keyword noun (e.g. [Read reviews of] + [Restaurant], or [compare rates on] + [financial product]), you’ll capture clicks from the right audience.
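A minimal sketch of that verb-first pattern, with the site name and copy invented for illustration:

```html
<!-- Hypothetical title and meta description leading with the verb the page serves -->
<title>Example Bistro Reviews | ExampleReviewSite</title>
<meta name="description"
      content="Read reviews of Example Bistro from diners who've eaten there,
               with ratings for food, service, and value.">
```

A searcher scanning the results page for reviews sees their intent named in the first three words of the snippet; someone hunting for a reservation or a menu can self-select out, which is exactly what you want.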
As more search engines use click data to refine rankings, your initial positioning will have some staying power.
Photo of nesting dolls from Joel75. Used under Creative Commons license.