How reputation became a major ranking factor in SEO

To stay on top today, you need to be diligent in assessing your site's usability and the quality of its content.

Over 10 years ago, I predicted that Google could use quality scoring for organic rankings, and I proposed a number of ways the company could quantify the quality of websites, along with specific factors that could be vital to doing so. The core algorithm updates and Medic Update of the past year, together with the publication of the Quality Rating Guidelines, largely indicate that a business’s reputation is also key. If you’re interested in how this all may function, read on.

I first predicted Google might apply a Quality Score to organic search in 2007, and in subsequent years I highlighted the need for: About Us pages, so people can know who is behind a website; good Contact Us pages, to instill trust; good usability and user experience; copyright statements; and good spelling and grammar. Enough of those predictions have borne out that I can see the direction in which the algorithm’s evolution has been heading.

The most recent edition of Google’s “Quality Evaluators Guidelines,” also called the “Quality Rating Guidelines” (“QRG”), reads almost like a script of my past recommendations on quality factors.

Google’s Quality Rating Guidelines seem relevant to recent updates

It seems clear that quality factors, such as those listed in the QRG, have become more influential in Google search rankings.

You may already be saying, “Oh, no – he is barking up the wrong tree about the Quality Guidelines.” After all, others have written about them since they were released back in July (as well as about earlier “leaked” releases), and Googlers have stated that the raters’ scores are not used directly in websites’ rankings. Notably, Jennifer Slegg reported that Danny Sullivan confirmed the human evaluators’ ratings are not used in machine learning for the algorithms, replying on Twitter, “We don’t use it that way.” He also noted that rater data is used the way restaurants use feedback cards: to know whether their search “recipes” are working.

Others have pointed out specific things mentioned in the Guidelines as likely signals, particularly involving Expertise, Authoritativeness and Trustworthiness (which Google abbreviates as “E-A-T”). For instance, Marie Haynes called out elements mentioned in the Quality Raters Guidelines that she felt might be influential ranking factors, such as a business’s BBB rating and authors’ reputations. Others criticized her for citing the BBB rating and author reputations as direct ranking factors, and subsequently John Mueller at Google essentially stated that Google was neither researching authors’ reputations nor using proprietary rating scores like the BBB grade.

At the same time, Googlers have increasingly been advising webmasters to “just focus on quality,” even recommending that webmasters read the QRG as a guide to offering the best content possible. This is what Danny Sullivan did when he officially commented on last year’s core algorithm updates in October.

And Ben Gomes, Google’s VP of Search, Assistant and News, stated in an interview last year:

“You can view the rater guidelines as where we want the search algorithm to go. They don’t tell you how the algorithm is ranking results, but they fundamentally show what the algorithm should do.”

So it raises the question: how exactly is Google determining quality algorithmically, given such subjective-seeming concepts as expertise, authoritativeness, trustworthiness and reputation? Google’s algorithms must translate these concepts into quantifiable criteria that can be measured and compared between competing sites and pages.

I think some of Google’s past algorithmic development likely points toward the process, and I think there has been some degree of disconnect between what people have asked, how various Googlers have answered and how people have interpreted those answers. Much of the conjecture has seemed far too reductionist, focusing on relatively naive, theoretical factors; in some instances, these are factors that Googlers have directly denied.

If Google instructs its human raters to assess the E-A-T of a site but does not incorporate the resulting ratings, what is the algorithm using? Saying that the algorithm simply uses a collection of things like a BBB rating, user reviews or link-trust analysis seems far too limited. Likewise, saying that Google derives a quality assessment purely from link-trust and query analysis seems too limited; Google is obviously taking into account factors beyond more advanced link/query analysis, although that is also part of the mix.

Google’s website quality patent

One of Google’s past patents seems to me to describe what they could be using, or something quite similar, and it relies upon machine learning. The patent, “Website Quality Signal Generation,” was noted by Bill Slawski in a brief synopsis and commentary in 2013, when it was granted. It describes how humans could be used to rate the quality of websites, after which an analysis algorithm could associate those ratings with website signals, likely by automatically identifying relationships between quantified signals and the human rating values, and thus generate models from the characteristic signals. These signal models could then be compared against other, unrated websites to apply a quality score to them. The wording is quite fascinating:

“Raters (e.g., people) connect to the Internet to view websites and rate the quality of each of the websites. The raters can submit website quality ratings to the quality analysis server through the rating input device. The quality analysis server receives website quality ratings from the rating input devices and stores the website quality ratings in the signal store. The website quality ratings are associated with a uniform resource locator and other website signals corresponding to the rated website. The quality analysis server identifies relationships between the website quality ratings and website signals and creates a model representing the relationships, as described below.

Further, the quality analysis server searches the signal store for unrated websites (e.g., sites lacking a signal indicating a quality rating). The quality analysis server determines whether the unrated websites have website signals that are related to quality ratings and applies the model to the unrated websites. Application of the model results in a calculated quality rating. The quality analysis server assigns the calculated quality rating to the corresponding website. The calculated quality rating is an additional website signal that is stored in the signal store for use by other applications.”

“The search engine can use the website signals stored in the signal store to filter and/or order the search results based on the stored quality ratings of the websites. For example, a threshold can be used to filter websites that have a stored quality rating below the threshold. Additionally, the websites returned by the search device can be ordered according to their stored quality rating such that the websites having higher stored quality ratings are listed prior to websites having lower stored quality ratings.”
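
To make the mechanics of that passage concrete, here is a minimal sketch in Python of the filter-and-order step the patent describes. The URLs, score values and threshold are invented placeholders, not anything Google has disclosed.

```python
# Hypothetical search results, each carrying a stored quality rating.
results = [
    {"url": "https://example-a.com/article", "quality": 0.82},
    {"url": "https://example-b.com/article", "quality": 0.31},
    {"url": "https://example-c.com/article", "quality": 0.67},
]

QUALITY_THRESHOLD = 0.4  # invented cutoff, for illustration only

# Filter out results whose stored quality rating falls below the
# threshold, then order the remainder so higher-quality pages come first.
filtered = [r for r in results if r["quality"] >= QUALITY_THRESHOLD]
ranked = sorted(filtered, key=lambda r: r["quality"], reverse=True)

for r in ranked:
    print(r["url"], r["quality"])
```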

The patent also provides some examples of the sorts of things that could be used as quality factors:

“…quality factors can be used such as the correctness of the grammar and spelling of the text on the web pages, whether obscene or otherwise inappropriate material is presented, whether the websites have blank or incomplete pages, as well as other factors that would affect the quality of the website.”

You will note that these are some of the very things mentioned in the QRG. But that is not the reason the patent is so compelling; it is compelling because it provides a very logical framework for developing methods to assess the quality of websites and webpages, and for generating a Quality Score that can be used in ranking determinations. The methods involved indicate that relatively small sample sets of test pages could be used to create models that would work well across all other pages of similar types.

Imagine that you have identified a type of page you would like to create a Quality Score for, such as an informational article about a health topic. (This is a “Your Money or Your Life,” or “YMYL,” page, as explained in the QRG.) Google could take signals it has mentioned before that it can measure, including: amount of content; page layout; number of ads and their placement on the page (are they above the fold or interstitial, and do they obscure or interfere with access to the page’s main content?); reviews about the site; reviews of, and links about, the content creator, if different from the site; links out from the page (perhaps indicating the identity of the content creator and/or citing reference sources); links to the site and page (PageRank); clickthrough rates to the page from its prime keyword searches; correspondence of the visible headline at the top of the page with the page’s content, search snippet title and meta description; and factors that indicate the site is on the up-and-up.

Through testing numbers of similar types of pages, Google could have developed a model (a pattern) of the combination of quality signals, with a score value associated with it. Essentially, when a health-topic article page exhibits a certain range of the criteria signals (PageRank, content layout and quantity of a similar type, inlink PageRank, CTR, user reviews), Google could apply the computed Quality Score to the page without any human manually reviewing it. It seems likely that Google has developed such models for many different types of pages.
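
As a toy illustration of that kind of hand-built pattern, the sketch below assigns a quality score to a hypothetical health-article page when its signals fall within pre-set ranges. All of the signal names, ranges and score values here are my own placeholders, not known Google values.

```python
def rule_based_quality(signals: dict) -> float:
    """Score a health-article page by checking whether each signal
    falls inside a manually chosen range (all ranges hypothetical)."""
    in_range = (
        0.3 <= signals["pagerank"] <= 0.9
        and signals["word_count"] >= 800
        and signals["ctr"] >= 0.05
        and signals["ad_density"] <= 0.3
    )
    return 0.8 if in_range else 0.4  # crude two-tier score

page = {"pagerank": 0.5, "word_count": 1200, "ctr": 0.08, "ad_density": 0.2}
print(rule_based_quality(page))  # 0.8
```

This kind of manual toggle-setting is exactly what machine learning could replace with subtler relationships, as described next.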

The reason to think that Google could be incorporating machine learning in this sequence is that it could surface quality-rating relationships in the data that would not be simple toggles manually set with different weightings. Instead of clumsily setting a weight for a particular signal (say, requiring that a health-topic article of a certain quality level have a minimum PageRank of X and a minimum number of links), a machine learning system could identify more complex relationships (say, assigning a certain quality score when PageRank is between N and NN, CTR is between X and XX, and the page uses a specific layout type).

The patent describes this possibility:

“In some implementations, the model can be derived from the website signals and the website quality ratings using a machine learning subsystem that implements support vector regression.”

Essentially, this produces a machine learning model that functions as a classifier. The model can be used to identify all pages of a common class, calculate a quality score and apply the same or a similar rating to other members of that class of pages.
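
Here is a minimal sketch of that train-then-apply loop, using scikit-learn’s support vector regression as a stand-in for whatever Google actually runs internally. The signal columns and rater scores are invented for illustration.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Each row holds hypothetical, already-quantified website signals:
# [pagerank, word_count, ctr, ad_density] -- illustrative names only.
rated_signals = [
    [0.7, 1500, 0.09, 0.10],
    [0.2,  300, 0.02, 0.45],
    [0.5,  900, 0.06, 0.20],
    [0.9, 2200, 0.12, 0.05],
]
rater_scores = [0.8, 0.2, 0.6, 0.9]  # human raters' quality scores

# Scale the signals, then fit a support vector regression model that
# learns the relationship between the signals and the rater scores.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf"))
model.fit(rated_signals, rater_scores)

# Apply the learned model to an unrated page's signals to produce a
# calculated quality rating, as the patent describes.
unrated = [[0.6, 1100, 0.07, 0.15]]
print(model.predict(unrated))
```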

Whether or not Google inputs the page data and the quality evaluators’ ratings directly into a machine learning system does not matter all that much; Google could still use a number of “models” of page types, check them against how the quality raters assess those pages and manually tweak the weightings. But with the sort of massive computing firepower Google has at its disposal, I tend to think the manual approach no longer makes sense. Indeed, quite a few other SEO analysts (such as Eric Enge and Mark Traphagen) have also concluded that Google is incorporating machine learning into content rankings, and not just into query interpretation, which Google has publicly disclosed.

This really explains some of the vagueness that has come from Google when asked what to do to address ranking drops from the core algorithm changes. Support vector machines, or neural networks, arrive at scores very holistically, so pointing to any one signal, or even a handful of signals, as most influencing the outcome in a given instance probably would not be possible; the answer would be buried in the complex interior of the models, which may be fairly abstracted.

Once the computed models are applied to search rankings, Google can have its evaluators rate the search results again. Doing this iteratively helps tweak the results to get better and better over time.

The various possible signals they are likely incorporating into a Quality Score also have complexity:

  • PageRank – or some evolved version of that signal that also takes into account the quality/trust of the links;
  • User Reviews Sentiment – Google’s QRG suggests that there would be some threshold number of reviews; if a business or website had relatively few total reviews, Google might opt to ignore the measure as too volatile and unrepresentative. Note that Google has long done research on sentiment analysis and holds a patent or two for conducting it. It has also actually performed sentiment analysis for businesses and displayed it with local listings in the past. Google has since waffled back and forth about whether it uses sentiment for rankings, but it essentially declared that it would incorporate sentiment following the DecorMyEyes brouhaha. Looked at the other way: if Google is incorporating reputation, how else could it adjust rankings if not by performing some type of sentiment analysis? Among all the somewhat nebulous quality assessments, sentiment is relatively straightforward to analyze and use (see the sketch after this list).
  • Mentions Sentiment – Do people mention your product in social media and in emails to each other? Social media “buzz” is a popularity measure, and the sentiment of those mentions can be a quality measure as well.
  • Click-Through Rate (CTR) – Along with the related bounce rate, there has been a lot of debate about this potential factor over time. But, much like Quality Score itself, if CTR is used in the ads’ Quality Score, then why not for organic rankings as well? In a 2012 freedom-of-information-act disclosure that was mistakenly made public by the Wall Street Journal, Google’s former chief of search quality, Udi Manber, is quoted as saying:

“The ranking itself is affected by the click data. If we discover that, for a particular query, hypothetically, 80 percent of people click on Result No. 2 and only 10 percent click on Result No. 1, after a while we figure out, well, probably Result 2 is the one people want. So we’ll switch it.”

Some research has supported that CTR is indeed influential upon rankings, and some has found that it is not. If CTR is merely one input into a website’s Quality Score, then it is not a direct ranking factor, and this could explain the discrepancies in the research findings.
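
As a toy model of the behavior Manber describes, the sketch below swaps adjacent results when the lower one draws dramatically more clicks for the same query. The two-to-one margin is an invented threshold, not a known Google value.

```python
def rerank_by_clicks(results):
    """results: a list of dicts in current rank order, each with a
    'clicks' count for the same query. Returns a reordered list."""
    reranked = list(results)
    for i in range(len(reranked) - 1):
        upper, lower = reranked[i], reranked[i + 1]
        # Swap if the lower result draws far more clicks than the upper.
        if lower["clicks"] > 2 * max(upper["clicks"], 1):
            reranked[i], reranked[i + 1] = lower, upper
    return reranked

serp = [
    {"url": "result-1.example", "clicks": 100},  # ~10% of clicks
    {"url": "result-2.example", "clicks": 800},  # ~80% of clicks
]
print([r["url"] for r in rerank_by_clicks(serp)])  # result-2 now first
```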

  • Percent / Intrusiveness of Ad Placements – How much of the total page space is composed of ads? Google may have calculated a good/bad threshold. Also, how many ads are there, and do they break up the main content of the page too much? Are there interstitials or overlays that cannot be closed? Do ads follow you as you scroll the page? The degree of intrusiveness of the ads could be calculated.
  • Sites Missing Identification Information – Sites ideally should have “About” and “Contact Us” pages. About sections should explain who the company is and, even better, list top staff members along with photos. The Contact Us page should ideally have as much contact info as possible, including street addresses, phone numbers and contact submission forms. It likely helps if staff pages link to employees’ social media and LinkedIn profiles and vice-versa. There is nothing worse than visiting a site and being unable to see who is behind it!
  • Critical E-Comm & Financial Site Factors – HTTPS is a given, but I have seen that Google goes further than just checking off whether you have HTTPS, since not all encryption configurations are the same; I have seen Google Chrome throw up warnings for deprecated or faulty SSL. E-commerce and financial-information sites must also have clear Terms & Conditions, a Privacy Policy and Customer Service contact info. Also include Help sections and Return Policies, for good measure, if applicable. I have highlighted using Trust Badges in the past as well; although Google most likely does not use a trust badge itself as a quality factor, badges are still good for reassuring online consumers and can enhance sales by increasing trust.
  • Site Speed – This nearly goes without saying, but speed was a factor in the PPC Quality Score way back at the beginning, and it is still a type of quality measure in organic search today.
  • Reviewed & Updated Regularly – The QRG cites some types of articles/information as needing to be reviewed, and updated if necessary, on a regular basis; medical, legal, financial and tax advice are the types of information mentioned. If your site contains articles of this sort, check them regularly, and if changes are made, I recommend adding a visible “updated” date on the article page. Likewise, failure to update sitewide copyright dates in footers is a potential quality measure; if they are not updated, that might contribute to an evaluation that the site is abandoned or not up-to-date. Other aspects of this could include pages with malfunctioning widgets/apps, pages with broken images and broken links, and pages that purport to list current information but have clearly outdated content.
  • No-Content & Worthless-Content Pages – Google can see what percentage of the pages encountered by its spider are error pages, blank pages or pages with unsatisfyingly brief content. Sites with large numbers of errors indicate a lack of updating and poor maintenance. Even if one page on such a site has a great article, the chances increase that a user will click to another page on the site and encounter unsatisfying content.
  • Porn / Malicious / Objectionable / Deceptive Content – I very nearly did not bother mentioning this, but such types of content sharply increase the likelihood that the page and site will receive the lowest possible Quality Score. Want an instant decrease in your Quality Score? Add these things. Concerningly, a website is also responsible for such material appearing in its ad content, pulled in from other servers, so one must be a little vigilant about what sorts of things are getting displayed on one’s site.
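
Here is the bare-bones review-sentiment sketch promised above: a tiny lexicon scorer combined with the QRG-style notion of ignoring businesses with too few reviews to be representative. The lexicon, volume threshold and scoring scheme are all invented for illustration; real sentiment analysis would rely on far more sophisticated language models.

```python
POSITIVE = {"great", "excellent", "helpful", "trustworthy", "fast"}
NEGATIVE = {"scam", "rude", "broken", "slow", "misleading"}
MIN_REVIEWS = 5  # hypothetical volume threshold

def review_sentiment_signal(reviews):
    """Return an average sentiment in [-1, 1], or None when there are
    too few reviews for the signal to be representative."""
    if len(reviews) < MIN_REVIEWS:
        return None  # too volatile/unrepresentative -- ignore the signal
    scores = []
    for text in reviews:
        words = {w.strip(".,!?") for w in text.lower().split()}
        pos = len(words & POSITIVE)
        neg = len(words & NEGATIVE)
        total = pos + neg
        scores.append((pos - neg) / total if total else 0.0)
    return sum(scores) / len(scores)

reviews = ["Great service, fast shipping", "Total scam, rude support",
           "Excellent and trustworthy", "Helpful staff", "A bit slow"]
print(review_sentiment_signal(reviews))  # 0.2 on this toy data
```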

PageRank-style score calculation methodology could be in use

One other interesting aspect of how Quality Score could work is that a page’s Quality Score could also incorporate the Quality Scores of the other pages linking to it, or of its overall domain. Quality Score could be computed partly by iterating over the entire link graph a number of times, using a method similar to the original PageRank algorithm. This would produce a more weighted scale of Quality Score values, helping to ensure that the highest-quality content truly rises to the top while the lowest-quality content is buried to the point where searchers virtually never encounter it in typical search interactions.
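
Here is a hypothetical sketch of that idea: each page’s score blends its own intrinsic quality with quality flowing in from the pages that link to it, iterated PageRank-style over the link graph. The damping value and the blending formula are my own assumptions, not a known Google formula.

```python
DAMPING = 0.85    # assumed, borrowed from classic PageRank
ITERATIONS = 20   # enough passes for the toy graph to settle

def propagate_quality(intrinsic, links):
    """intrinsic: {page: base quality score in [0, 1]}.
    links: {page: [pages it links out to]}."""
    scores = dict(intrinsic)
    for _ in range(ITERATIONS):
        incoming = {page: 0.0 for page in intrinsic}
        for src, targets in links.items():
            if not targets:
                continue
            share = scores[src] / len(targets)  # split score over outlinks
            for dst in targets:
                incoming[dst] += share
        # Blend each page's intrinsic quality with link-propagated quality.
        scores = {p: (1 - DAMPING) * intrinsic[p] + DAMPING * incoming[p]
                  for p in intrinsic}
    return scores

intrinsic = {"a": 0.9, "b": 0.5, "c": 0.2}
links = {"a": ["b"], "b": ["c"], "c": ["a"]}
print(propagate_quality(intrinsic, links))
```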

Observations on some sites affected by Medic Update

Barry Schwartz published a handful of the sites impacted by the Medic Update. I did a cursory review of some of them and saw that they were lacking in many of the criteria that I outlined above as potential quality factors, and which the QRG specifically mentions:

  • Prevention.com – Had no About page. Infinite scrolling: could the algorithm even see the site’s Privacy and Terms & Conditions links? The Contact Us page was on a different domain. The article writers were not necessarily reputed experts on medical topics. Very large ads and side-content badges that are grotesque and distracting.
  • Gcflearnfree.org – The site redirects; the staff page has cartoon images with no text and no phone/address. The contact form only appears after a human-validation click.
  • Minted.com – The customer service contact phone number is hard to locate and is linked to on a subdomain.
  • Buoyhealth.com – The About page redirects to the homepage! No phone, address or personnel listed. Also, weird logo graphic code.
  • VeryWellHealth.com – Surprisingly, this site participates in HONcode and has doctor writers, so why did it get dinged? It is award-winning and shows journalistic integrity, with an editorial doctor board. Perhaps it needs a more prominent About link at the top? No phone/address for contact. Perhaps there are too many ads on article pages, or the accordion-opening ads at the tops of pages may be too irritating to users. Also, a very weird logo-linking system; the path-class image map is possibly hard for typical machine interpretation of links.
  • GoodRX.com – The About page says who they’re NOT and reads more like a public-service warning; it doesn’t tell who they ARE. Otherwise, the site does a whole lot of things right.
  • Spine-health.com – Uses HONcode, which is good. Phone and address are not present. Contact Us is on a different domain. A different name and link appear in the page footer.
  • NaturalFoodSeries.com – An almost infinitely scrolling homepage? No About page, no Contact Us page, no phone, no address, etc.

Some of these sites may have changed since I looked at them weeks ago, and some may have recovered rankings. The nearly cosmetic issues I noted are likely not the only quality signals they are deficient in; simply adding an About page is unlikely to fix the overall combination of issues.

Mixed messages over ranking signals

Some of the potential signals I have cited here and in the past have been quite controversial, and I do not doubt that some will disagree with some of my assertions and conjectures herein. But I think my explanation comes close to accounting for the disconnect between what some SEO professionals believe are ranking factors and what Googlers claim are not.

I think Googlers have sometimes played a game of semantics. They can say that the controversial factors (CTR, sentiment analysis, etc.) are not direct ranking factors, because those factors may or may not roll up into a complex evaluation that results in a Quality Score. For instance, if my conjectures on Quality Score are close to how it functions, then two different pages on the same topic could be very similarly relevant and both have a high CTR, yet one’s Quality Score could be significantly higher due to a combination of other things. In still other cases, a higher CTR could shift a page up into being classified at a higher rung of quality.

A “model”/pattern of a good-quality page sounds much like a “search recipe,” as Danny Sullivan referred to it. There are almost certainly different combos and weightings of factors used to determine Quality Scores of different types of pages.

Use of support vector regression or a neural network would make the resulting Quality Scores nigh impossible to thoroughly reverse-engineer, because the scoring is holistic: the Quality Score is a gestalt. This is why Googlers have been recommending vague-sounding changes to those affected, like “work on improving your overall quality” and “focus on making the best content possible.” Aside from technical SEO issues, one will rarely be able to change just one or two things sitewide to address a low Quality Score. As Glenn Gabe has observed of sites affected by these changes, “Site owners shouldn’t just look for a single smoking gun, since there’s almost never just one.”

How to improve your site’s quality score

So, how does knowing that the Quality Score is now a complex gestalt help you to improve your rankings?

The good news is that most SEO guidance is still very apt. Exercise generally good SEO “hygiene” in technical construction, keeping error pages and no-content pages to a minimum. Eliminate unnecessary, thin-content pages. Avoid prohibited link-building practices.

SEO itself is a holistic practice: do all the things that are appropriate and reasonably feasible in optimizing your site. Ensure that you have the elements that reassure consumers your site is on the up-and-up (About pages, Contact pages, Privacy Policy and Terms and Conditions pages, and Customer Service contact options), and update content as appropriate.

Beyond the technical SEO work, you need to be somewhat obsessed with good usability, good user experience, good customer service and creating great content. Recruit highly critical people to assess your site and give you feedback on what may not be working well and how to fix it. View the site as though you are a consumer of its content, and try to fulfill your users’ needs thoroughly. Respond to negative reviews, and try to elicit online reviews from your pleased users and customers. If your service frequently creates friction with clients, eliminate the points of conflict and improve your customer service practices. Produce the best content, and engage the best content creators when building it.

Finally, practice proactive reputation management by working to engage positively with your community, both online and offline, consistently over time. Engage on social media and develop your presence in those channels. Provide good expert advice online for free as a means of reputation-building. Join your industry’s professional groups and community business groups. Respond professionally, and do not allow yourself to be provoked online.

Over time, these practices will generate the signals needed for a good Quality Score, and you will reap the benefits.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.


About the author

Chris Silver Smith
Contributor
Chris Smith is President of Argent Media, and serves on advisory boards for Universal Business Listing and FindLaw. Follow him @si1very on Twitter and see more of his writing on reputation management on MarTech.
