Google, Yahoo & Microsoft Unite On “Canonical Tag” To Reduce Duplicate Content Clutter

The web is full of duplicate content. Search engines try to index and display the original or “canonical” version. Searchers only want to see one version in results. And site owners worry that if search engines find multiple versions of a page, their link credit will be diluted and they’ll lose ranking.

Today, Google, Yahoo and Microsoft (links are to their separate announcements) have united to offer a way to reduce duplicate content clutter and make things easier for everyone. Webmasters rejoice! Worried about duplicate content on your site? Want to know what “canonical” means? Read on for more details.

Multiple URLs, one page

Duplicate content comes in different forms, but a major scenario is multiple URLs that point to the same page. This can happen for lots of reasons: an ecommerce site might allow various sort orders for a page (by lowest price, highest rated, and so on), or the marketing department might want tracking codes added to URLs for analytics. You could end up with 100 pages but 10 URLs for each page, and suddenly search engines have to sort through 1,000 URLs.
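
For example, all of the following hypothetical URLs (borrowing the product URL from the tag example later in this article) might display exactly the same page:

  • https://www.example.com/product.php?item=swedish-fish
  • https://www.example.com/product.php?item=swedish-fish&sort=price
  • https://www.example.com/product.php?item=swedish-fish&source=spring-newsletter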

This can be a problem for a couple of reasons.

  • Less of the site may get crawled. Search engine crawlers use a limited amount of bandwidth on each site (based on numerous factors). If the crawler is only able to crawl 100 pages of your site in a single visit, you want those to be 100 unique pages, not 10 pages 10 times each.
  • Each page may not get full link credit. If a page has 10 URLs that point to it, then other sites can link to it 10 different ways. One link to each URL dilutes the value the page could have if all 10 links pointed to a single URL.

Using the new canonical tag

Specify the canonical version using a tag in the head section of the page as follows:

<link rel="canonical" href="https://www.example.com/product.php?item=swedish-fish"/>

That’s it!
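
To make the placement concrete, here's a minimal sketch of where the tag sits on one of the duplicate versions of the page (the title and surrounding markup are hypothetical):

<html>
<head>
<title>Swedish Fish</title>
<!-- Points search engines at the preferred URL for this page -->
<link rel="canonical" href="https://www.example.com/product.php?item=swedish-fish"/>
</head>
<body>
<!-- page content -->
</body>
</html>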

  • You can only use the tag on pages within a single site (subdomains and subfolders are fine).
  • You can use relative or absolute links, but the search engines recommend absolute links (see the example below).
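
For instance, both of these forms should be understood, but the second (absolute) form is the one the search engines recommend (the path here is hypothetical):

<!-- Relative: allowed -->
<link rel="canonical" href="/product.php?item=swedish-fish"/>
<!-- Absolute: recommended -->
<link rel="canonical" href="https://www.example.com/product.php?item=swedish-fish"/>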

This tag will operate in a similar way to a 301 redirect for all URLs that display the page with this tag.

  • Links to all URLs will be consolidated to the one specified as canonical.
  • Search engines will consider this URL a “strong hint” as to the one to crawl and index.

Canonical URL best practices

The search engines use this as a hint, not as a directive (Google calls it a “suggestion that we honor strongly”), but are more likely to use it if the URLs follow best practices, such as:

  • The content rendered for each URL is very similar or identical
  • The canonical URL is the shortest version
  • The URL uses easy-to-understand parameter patterns (such as using ? and &), as illustrated below
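
For example (both URLs hypothetical), a crawler can decompose the first pattern below into clean key/value parameters far more easily than the second, which mixes in an opaque session ID:

  • https://www.example.com/product.php?item=swedish-fish&sort=price
  • https://www.example.com/product.php?sid=a81b2c93d04e&i=7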

Can this be abused by spammers? They might try, but Matt Cutts of Google told me that the same safeguards that prevent abuse by other methods (such as redirects) are in place here as well, and that Google reserves the right to take action on sites that use the tag to manipulate search engines and violate search engine guidelines.

For instance, this tag will only work with very similar or identical content, so you can’t use it to send all of the link value from the less important pages of your site to the more important ones.

If tags conflict (such as pages pointing to each other as canonical, the URL specified as canonical redirecting to a non-canonical version, or the page specified as canonical not existing), search engines will sort things out just as they do now and determine which URL they think is the best canonical version.
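
As a hypothetical illustration of the first kind of conflict, imagine two pages whose tags point at each other; the engines would simply fall back to their existing canonicalization logic:

<!-- In the head of https://www.example.com/page-a -->
<link rel="canonical" href="https://www.example.com/page-b"/>
<!-- In the head of https://www.example.com/page-b -->
<link rel="canonical" href="https://www.example.com/page-a"/>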

The tag in action

This tag will most often be useful in the case of multiple URLs pointing at the same page, but it might also be used when multiple versions of a page exist. For instance, wikia.com is using the tag for previous revisions of a page. Both https://watchmen.wikia.com/index.php?title=Comedian%27s_badge&diff=4901&oldid=4819 and https://watchmen.wikia.com/index.php?title=Comedian%27s_badge&diff=5401&oldid=4901 reference the latest version of the article (https://watchmen.wikia.com/wiki/Comedian%27s_badge) as the canonical.
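
In other words, the head of each of those revision URLs would carry a tag along these lines:

<link rel="canonical" href="https://watchmen.wikia.com/wiki/Comedian%27s_badge"/>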

The search engines stress that it's still important to build good URL structure, and they also note that if you aren't able to implement this tag, they'll keep the processes they have now to determine the canonical. For instance, at SMX West on Tuesday, Maile Ohye of Google explained how Google can detect patterns in URLs if they use standard parameters. Consider these URLs:

  • https://www.example.com/buffy?cat=spike
  • https://www.example.com/buffy?cat=spike&sort=evil
  • https://www.example.com/buffy?cat=spike&sort=good

Maile explained that Google can detect (particularly when looking at patterns across the site) that the sort parameter may order the page differently, but that the URLs with the sort parameter display the same content as the shorter URL (https://www.example.com/buffy?cat=spike).
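
With the new tag, a site could make that relationship explicit rather than leaving it to pattern detection; each of the sort variants would simply include something like:

<link rel="canonical" href="https://www.example.com/buffy?cat=spike"/>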

While it’s rare for the search engines to join forces, this isn’t the first time they’ve come together on a standard. In November 2006, they came together to support sitemaps.org. And in June 2008 they announced a standard set of robots.txt directives. Matt Cutts of Google and Nathan Buggia of Microsoft told me that they want to help reduce the clutter on the web, and make things easier for searchers as well as site owners.

This new tag won’t completely solve duplicate content issues on the web, but it should make things quite a bit easier, particularly for ecommerce sites, which likely need all the help they can get in the current economic conditions. Site owners have been asking for help with these issues for a long time, so this should be a greatly welcomed addition.

Postscript by Barry Schwartz:

The search engines will be talking about this news at the Ask the Search Engines panel at SMX West. We will be blogging this panel live at the Search Engine Roundtable.




About the author

Vanessa Fox
Contributor
Vanessa Fox is a Contributing Editor at Search Engine Land. She built Google Webmaster Central and went on to found software and consulting company Nine By Blue and create Blueprint Search Analytics, which she later sold. Her book, Marketing in the Age of Google (updated edition, May 2012), provides a foundation for incorporating search strategy into organizations of all levels. Follow her on Twitter at @vanessafox.
