Google Search Quality Raters Instructions Gain New “Page Quality” Guidelines

In the wake of Google’s Panda algorithm update, its cadre of human search quality raters has a new task: giving Google specific quality ratings for individual landing pages.

The new “Page Quality Rating Guidelines” section adds a whopping 32 pages to the handbook that Google provides to those human raters (via contractors like Lionbridge and Leapforce). An existing section on “URL Rating Tasks with User Locations” has been expanded from 12 to 16 pages, so that what was a 125-page document in early 2011 is now 161 pages long.

Version 3.27 of Google’s guidelines for search quality raters — dated June 22, 2012 — was recently leaked online. It’s actually one part of a larger leak that seems to have happened via private forums, and includes not only the rater guidelines, but also additional information and screenshots from some of the ratings tasks that the group performs. More on that later. Below, a look at the new section of the rater’s handbook, which was written about yesterday by Razvan Gavrilas on CognitiveSEO.com.

Google’s Page Quality Rating Guidelines

Here’s how Google explains the concept of Page Quality to its raters:

You have probably noticed that webpages vary in quality. There are high quality pages: pages that are well written, trustworthy, organized, entertaining, enjoyable, beautiful, compelling, etc. You have probably also found pages that seem poorly written, unreliable, poorly organized, unhelpful, shallow, or even deceptive or malicious. We would like to capture these observations in Page Quality rating.

Raters are asked to give each landing page (or web document, such as a PDF) an overall grade on a scale of Highest, High, Medium, Low or Lowest, but individual aspects of the page/document, like “main content” and “layout,” are also graded, as shown in this screenshot from the guidelines:

[Screenshot: Page Quality rating form from the guidelines]

Raters are told to ignore the location of the page when rating its quality and to avoid thinking about how helpful the page/document might be for a search query. “Page Quality rating is query-independent, meaning that the rating you assign does not depend on a query,” Google says. That’s because, the guidelines later say, queries help determine the overall utility of a page/document, but the purpose of a page helps determine its quality.

Google: Low or Lowest Quality Main Content (Pandas!)

There’s a humorous example in the new material that you have to think Google used purposely. In teaching raters how to identify “Low or Lowest Quality Main Content” on a page, one of Google’s bullet items is:

Using a lot of words to communicate only basic ideas or facts (“Pandas eat bamboo. Pandas eat a lot of bamboo. It’s the best food for a Panda bear.”)

If that isn’t one of the hallmarks of a page that you’d find on a content farm, what is, right? Content farms were a primary target of the Panda algorithm change — an update that aimed to remove “thin” or low quality pages from Google’s index. Now, this new 32-page section of the Google search quality raters guidelines shows how Google grades page quality. It might be the closest thing available to an official, post-Panda quality content guide.

Google: High or Highest Quality Main Content

There are no panda (the animal, not the algo update) references in Google’s section describing “High or Highest Quality Main Content.” The four bullet items in this section offer examples related to medical content (“written by people or organizations with medical accreditation”), hobbies (“highest quality content is produced by those with a lot of knowledge and experience who then spend time and effort creating content to share with others who have similar interests”), videos and even social networking pages.

On the last point, Google says a social networking profile can be high/highest quality if it’s “frequently updated with lots of posts, social connections, comments by friends, links to cool stuff, etc.” That’s an important thing to consider in light of the mess earlier this year over Google’s Search Plus Your World feature — specifically, the accusations that Google was favoring even unused Google+ profiles over active Facebook, Twitter and other social profiles in its search results. (Google stopped doing that to some degree when it put Knowledge Graph results where Google+ profiles used to appear.)

Also keep in mind that Google announced, in its March search quality update, that it was doing a better job of indexing social profile pages. Quality, as described in this raters’ document, would clearly play a role in that.

Website Reputation as a Page Quality Signal

In addition to the Panda reference above, there’s another well-known event that appears in the new section on page quality. On the screenshot above, there’s a question near the bottom where human raters are asked to indicate the reputation of the website that’s associated with the page/document that they’re reviewing.

Google goes into great detail to teach raters how to discover a website’s reputation, and one of the suggestions is to do searches like [homepage reviews] and [homepage.com complaints].

The example site that Google uses? DecorMyEyes.com.

That’s the site that made headlines in late 2010 when its owner, Vitaly Borker, bragged to the New York Times that he went out of his way to harass customers because he believed that their bad reviews posted online were improving his Google search results. (Borker was just sentenced yesterday to four years in prison and ordered to pay almost $100,000 in fines and restitution.)

What Else Has Leaked?

As I said above, the 161-page raters’ guidelines isn’t the only thing that has leaked recently. Additional documents and screenshots related to the human rater program reportedly began spreading last month via private forums. Gordon Campbell, a UK-based online marketer, recently contacted us with a link to his blog post showing screenshots of some of the side-by-side tasks that raters perform — in this case, a “basic” and an “advanced” side-by-side task. The basic task screenshot is first, followed by the advanced task.

[Screenshot: basic side-by-side task]

[Screenshot: advanced side-by-side task]

Side-by-side tasks ask raters to compare either individual (basic) or groups (advanced) of search results based on a given query. They’re also used to compare Google’s current search results with “new” results that may be used after an algorithm change. I recently contacted the search quality rater that I interviewed earlier this year with a link to Gordon Campbell’s post and screenshots, and the rater confirmed that those are legitimate screenshots of a side-by-side task.

Depending on how widely the other material has leaked, we may be seeing additional blog posts around the web related to Google’s human search quality raters, the new guidelines document and the rating tasks they do.

About The Author: Matt McGee is Editor-In-Chief of Search Engine Land. His news career includes time spent in TV, radio, and print journalism. His web career continues to include a small number of SEO and social media consulting clients, as well as regular speaking engagements at marketing events around the U.S. He recently launched a site dedicated to Google Glass called Glass Almanac and also blogs at Small Business Search Marketing. Matt can be found on Twitter at @MattMcGee and/or on Google Plus. You can read Matt's disclosures on his personal blog.

  • http://twitter.com/SEODIRECT4U The SEO Geeks

Very insightful, thanks for sharing it with us all.

  • http://www.seo-theory.com/ Michael Martinez

    It’s an interesting leak but people who think they will be able to tie these guidelines to specific algorithm tweaks and updates are in for a rude awakening. It’s not that simple.

  • Riaan Aggenbag

Good stuff guys! Who doesn’t appreciate a little inside information on Google? Can’t wait to see if more is found :)

  • http://www.facebook.com/people/Codex-Meridian/100002285341528 Codex Meridian

What if the quality raters are not experts themselves? Are they in the proper position to judge expert content or websites? I’ve found lots of sites on the Internet that are not ranking in Google but are ranking well in Bing. The sites don’t look very good (dirty design, horrible navigation, etc.), but the content is superb.

    What is surprising is that the author does not actively participate in social media. Not many Facebook fans, no Google+ page or Facebook like box, no Twitter connections, etc. The author is an introvert. But I really love the content, the message and the techniques being employed. I learned a lot simply by reading and got hooked.

    The author is simply an expert. I know that because I practise that profession directly, which lets me judge the quality of the content properly regardless of other factors such as how the website looks, its links and even its reputation.

    This is simply the worst flaw of the Panda algorithm: no matter how brilliant Google’s engineers are, they cannot rank the better websites if they base their guidelines on the screenshots shown above by Matt McGee.

    Probably that’s the simple reason why I don’t love searching in Google these days. I simply do not like the quality of the search results, and so far I only know part of the reason why. Probably these quality raters do not have the expertise themselves to rate expert websites. Don’t expect quality results if the measurement system employed is wrong, faulty or inaccurate.

    It takes more than an eye to judge the overall quality of a website. From the book The Little Prince, I learned this quote:

    “It is only with the heart that one can see rightly; what is essential is invisible to the eye”

  • http://www.facebook.com/profile.php?id=1357682611 Gordon Campbell

    Hi Codex, that’s an interesting point and one that I agree on. However, I would imagine that several quality raters will review each site and then an average result will be taken.

  • http://www.webstatsart.com/ Webstats Art

As usual, the leaked documents show that Google never gives examples of different quality ratings, which means the topic is still subjective. SEOs never give actual examples either, which means the raters are not scientifically trained. They are probably low-paid stenographers.

  • http://www.eyes4tech.com/ Arsie

I do agree with your point here, Codex.

  • daveintheuk

This is all very well, but when will we see these rules applied to Google’s own properties? How many “Knowledge Graph” units supply anything other than scraped data? How many blank Google+ Local pages, with nothing but an address on them, would be considered good quality?

    In fact, how would Google’s own SERPs fare now? Dominated by adverts, self-interested results and scraped content…

  • http://www.winsonyeung.com/ Winson Yeung

    agree with you too

  • http://www.facebook.com/profile.php?id=1357682611 Gordon Campbell

That’s a good point, Dave. I actually carried out a mock test using Google’s own guidelines and did a blog post on it this morning.

  • http://twitter.com/Kevin_Lee_QED Kevin Lee

Of course the funniest scenario would be one where raters’ comments are given little if any weight. ;-) Actually, were I a Google engineer, I’d use the entire ratings platform simply as a way to make my algorithms smarter, and less as a way of influencing the rank of specific rated pages.

  • Tom Houy

I’m surprised this hasn’t surfaced sooner. It was relatively easy to obtain. Regarding the concerns that the quality raters may not be subject matter experts, the same could be said for marketers too. There is some good material in here, such as how to think about and classify the keyword lists in your paid search campaigns and pair them with ad copy that matches the type of user intent.

  • http://www.facebook.com/revaminkoff Reva Minkoff

    Two thoughts:
1. It will be interesting to see if the weight of the human raters’ ratings increases now that the criteria have lengthened and become more elaborate. Anyone have any insight accordingly?
    2. Are these criteria just for ads, or are they having any effect on SEO as well?

  • Janeth Duque

We don’t know what Google is doing with the information, or whether it’s even used. It’s actually a pretty good guide for building a website.

  • http://avgjoegeek.net/ avgjoegeek

I think the guide is just that: a great guide, especially for new bloggers who want to avoid getting hit by Panda. But it is still up to the site owner to make sure they actually have great, solid content!

What is funny is that 99% of this is common sense. Just use your head and don’t try to “game” the system.

    Still an interesting read!

  • http://twitter.com/techgurus Tech Gurus

    Interesting that this proprietary information was leaked. I wonder if Search Engine Land will suffer as a result?

  • http://twitter.com/chillingbreeze Sunita Biddu

There is actually a high chance of quality raters being biased in what they find of high value and what not, especially with content…

    In that case, pages/posts with good user discussions stand a higher chance. But at the same time, I see a lot of great pages/posts that keep their comments off to avoid spam. Social shares may give better weight to such landing pages.

    There is so much to discuss and question about the quality rating of any landing page when it comes to a realistic rating.

  • Ryan Webb

Kevin – I think you just hit it on the head. There is no way in hell that these “human rater” ratings are not being collated, organized, analyzed, and somehow implemented into the algo itself… I believe that to be the “purpose behind the purpose,” which may or may not be a bad thing.

  • Thomas Møller Nexø

    Great article!

The problem is that even Google seems to have forgotten about these quality guidelines: http://www.winfrastructure.net/article.aspx?BlogEntry=Contradictions-in-Googles-policy-for-search-and-the-content-displayed
