Are Search Engines Responsible For Reputation? Yes, Virginia, They Are. Big Time!

Columnist Chris Silver Smith discusses the legal issues surrounding online defamation, including search engines' role in helping (or harming) victims.



If your reputation hasn’t been damaged in some way online, then you may never have thought about whether search engines could be responsible. After all, they seem pretty neutral in many ways — other people create Web pages, images, videos and social media posts, and search engines merely display the content for associated keyword searches.

But there’s a lot more going on than meets the eye. Google, Bing and other search engines could bear part of the responsibility if those search results are harming you.

First, if someone intentionally creates and publishes material specifically intended to lie about or misrepresent you — whether as a business or as an individual — and thereby damages your reputation, then it’s obvious that this person is directly responsible. Under generally accepted defamation law in the United States and Europe, that person may be held liable for the damage they’ve done, and they might be compelled to help remedy the situation as fully as possible.

But, once you involve the internet in this situation, it becomes highly complex very quickly, and other parties may also be found to be partially responsible.

Legally, Search Engines May Not Be Liable … Yet

Big businesses have unquestionably been favored by U.S. laws over the past 30 years. (Deregulation of the telecom industry is but one major example.)

Internet commerce has also benefited from the pro-business political environment. One key legal decision that affected online reputation issues was a provision of the Communications Decency Act (CDA), Section 230, which legally bullet-proofed many types of online services for the content presented/distributed through them, so long as the information is provided by third parties.

This means that search engines are not generally considered liable for material presented, since it’s culled from Web pages they index. Search engines, forums, blogging platforms, social media platforms, content sharing services — none of these are legally responsible when their material originated from other parties. Contrast that with print newspapers, traditional television programs, print books, real world billboards, etc., all of which are legally responsible for content published within them.

In some ways, the Communications Decency Act’s exclusion for online services is extraordinary — it seems a suspension of what has long been held in common law and in tort law regarding entities that may be liable for defamation. On the other hand, “distributors” such as libraries, bookstores and newspaper stands — those that merely distribute published materials — are not considered responsible for defamatory material made available through them. So the CDA essentially declared all sorts of online services to be mere distributors in this new media world.

I personally have mixed feelings about this. Certainly, the CDA facilitated much less restricted growth of the internet, which has been beneficial to its development and to the overall economy. However, many other aspects of internet business are also responsible for growth of the medium: low cost-of-entry compared with offline business models, low-to-no taxation in many cases, ease of access in nationwide and worldwide marketplaces, and more.

So while the CDA likely reduced costs and legal fears for online businesses, it’s not solely responsible for the success of the internet. Further, the CDA’s exclusion of legal responsibility opened a gaping void in which online defamation has prospered — a void where businesses and individuals often cannot halt or undo even clearly illegal defamation.

Search Engines Don’t Exercise Editorial Control… Or Do They?

A key aspect of the CDA was this concept: If you don’t apply editorial review and controls on content passing through your online service, then you’re not likely to be held responsible for it. On the surface, this would seem to completely eliminate responsibility for search engines, correct?

However, search engines do exercise forms of editorial control. Their algorithms decide which content will be promoted up to the surface, where the general public is most likely to be exposed to it.

When one searches for a keyword phrase, such as “John Smith,” the search engines typically display a page of around 10 results — out of the potentially millions of pages that match the phrase to some degree. This act, in and of itself, is a type of editorial control — and, as it turns out, it can be quite “subjective,” considering some of the factors that search engines have chosen to reward with higher rankings.

For example, while writing this article, I performed a Google search for “Coca-Cola,” arguably the most recognizable brand name in the world. One might assume that as a long-established and extremely strong brand name, the Coca-Cola Co.’s own materials would be likely to dominate the search results. Indeed, this is largely the case — at a glance, it appears to me that Coca-Cola directly or indirectly controls probably 75% of the content on the first page of search results.

However, a number of items showing on page one are not positive and originate from other, non-corporate sources — particularly news and financial analysis sites. This is the result of Google’s decision to surface a variety of sources and content types on its search results pages. Its engineers chose to feature content with strong recency (such as news articles), content that attracts more attention than the better-established corporate content, and content that comes from sources other than the Coca-Cola Co.

In numerous publications and at conferences, Google engineers have talked about providing users with a varied mix of content, and their search results reflect this. They have long suppressed “duplicate content” from appearing on the same page of search results, stating that they consider it a poor user experience.
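
To make that concept concrete, here is a minimal sketch (in Python) of how near-duplicate suppression can work in principle. The shingle size, similarity threshold and helper names are my own illustrative assumptions; Google’s production system is, of course, far more sophisticated.

```python
# Illustrative near-duplicate suppression: drop results whose text is
# too similar (by shingle-based Jaccard similarity) to an earlier result.

def shingles(text: str, k: int = 4) -> set:
    """Break normalized text into overlapping k-word shingles."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a: set, b: set) -> float:
    """Similarity of two shingle sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def dedupe_results(pages: list) -> list:
    """Keep only pages that aren't near-duplicates of an earlier result."""
    kept, kept_shingles = [], []
    for page in pages:
        s = shingles(page)
        if all(jaccard(s, prior) < 0.9 for prior in kept_shingles):
            kept.append(page)
            kept_shingles.append(s)
    return kept
```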

Google also introduced “Universal Search,” its term for the “blended” search results page specifically intended to prominently feature more types of content than might otherwise appear among the typical ranked Web pages. Depending upon the searched keyword and personalization factors, the search results page may feature images, videos, map listings, news items and more.

Notably, many of these content types already had Web pages devoted to them, and those pages were already appearing in the search index; however, Universal Search specifically conveyed greater ranking power to these content types so that they could appear on page one, and Google simultaneously varied the display of these listings to differentiate them from “regular” text Web page content.

The search engines have also performed various types of policing of the internet, and in many cases, they have chosen to use their heavy influence to push webmasters into changing their website designs and technologies. They’ve done this by applying various editorial biases to their ranking algorithms. For instance, they will frequently flag or suspend sites from appearing in search results when their algorithms determine the sites may have been compromised by spam or malware.

I’m not saying this is a bad thing, but it illustrates that this is far from a “neutral” inclusion and display of Web pages. Search engines apply editorial evaluation and selection in search results when it suits them.

This editorial control goes further still. Google in particular has announced algorithmic ranking penalties and bonuses for websites based on site speed or “mobile-friendliness.” It has decided that “fast” or “mobile-friendly” equals good, and that “slow” or “non-mobile-friendly” equals bad. Google has effectively extorted webmasters into playing by its rules, lest they risk exclusion from search results (and, in essence, from the search marketplace at large).
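
As a toy illustration (this is not Google’s actual formula, and the cutoff and multipliers are invented), binary “litmus test” signals like speed and mobile-friendliness can be pictured as penalties and bonuses applied to a relevance score:

```python
# Hypothetical ranking adjustment: penalize slow pages, reward
# mobile-friendly ones. All numbers are invented for illustration.

def adjusted_score(base_relevance: float,
                   load_time_seconds: float,
                   mobile_friendly: bool) -> float:
    score = base_relevance
    if load_time_seconds > 3.0:   # hypothetical "slow" cutoff
        score *= 0.8              # penalty for slow pages
    if mobile_friendly:
        score *= 1.1              # bonus for mobile-friendly pages
    return score

# A more relevant but slow, non-mobile-friendly page loses ground to a
# slightly less relevant competitor that passes both litmus tests:
print(adjusted_score(0.90, 5.2, False))  # ~0.72
print(adjusted_score(0.85, 1.1, True))   # ~0.94
```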

These types of litmus-test editorial policies on the part of search engines go beyond merely enforcing rules that safeguard the functioning of their algorithms; they’ve extended their reach into trying to make businesses and webmasters comply with what they think people should do and how they want the internet to develop.

Statements from prominent search engine representatives further expose an internal philosophy that they will editorially intervene wherever they wish. For instance, Matt Cutts, the longtime head of Google’s web spam team, has implied that infographics based on dubious facts might be something Google would take a dim view of, ranking-wise — despite such content otherwise carrying relatively strong popularity signals.

Are there instances where Google desires to be the authority on the truth of the content it will allow? In our highly politically charged country, this would seem a near impossibility, but Google has at times given the appearance of political bias.

If you asked one of my past client companies, it would unequivocally state that Google has at times applied an arbitrary litmus test to its PPC ad campaigns. I recall that the company’s ammunition ads were once abruptly suspended just before Christmas, despite the fact that the sales were legal and even allowed under Google’s published policy. Perhaps they were suspended due to the prevailing mindset of the Californians minding AdWords administration at Google headquarters.

Could Search Engines Be Purposefully Promoting Negative Content?

Much more significantly, I’ve begun to suspect that Google could be incorporating some type of sentiment analysis in determining what appears on page one of search results. Given Google’s clear historical desire to provide a variety of content on the first page, it’s not inconceivable that it has decided to purposefully feature a mixture of positive, neutral and negative content there.

It has the capability to do this: Google holds multiple patents and has published research papers on methods for performing sentiment analysis of content. (See the patents “Domain-Specific Sentiment Classification” and “Large-Scale Sentiment Analysis” — at least two of whose inventors appear to now be working for Google. See also the research papers “Sentiment Summarization: Evaluating and Learning User Preferences,” “A Joint Model of Text and Aspect Ratings for Sentiment Summarization,” and “Comparative Experiments on Sentiment Classification for Online Product Reviews.”)
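
To illustrate the concept only (a deliberately crude stand-in for the methods in those patents and papers), a bare-bones lexicon-based scorer can bucket result titles into positive, neutral and negative before a page is composed:

```python
# Naive lexicon-based sentiment scoring of search result titles.
# The word lists are invented; real classifiers are far more advanced.

NEGATIVE = {"scam", "lawsuit", "fraud", "scandal", "complaint", "arrest"}
POSITIVE = {"great", "trusted", "award", "best", "love", "recommended"}

def sentiment_score(title: str) -> int:
    """Positive value = positive sentiment; negative = negative."""
    words = {w.strip(".,!?").lower() for w in title.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

titles = [
    "Coca-Cola wins packaging design award",       # scores +1
    "Lawsuit alleges Coca-Cola misled consumers",  # scores -1
    "Coca-Cola quarterly earnings report",         # scores  0
]
for t in titles:
    print(sentiment_score(t), t)
```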

Google doesn’t necessarily mix up the content on the first page of search results arbitrarily. According to many of its statements, it has developed ranking and display features based in large part upon what people desire to find in search results.

An unfortunate aspect of the information superhighway is that, much like on a literal highway, humans will rubberneck out of curiosity when there are wrecks. We’re more inclined to click on scandalous-sounding or negative headlines. By their very nature, content titles containing words like “Scam,” “Lawsuit” or “Scandal” — or, in the case of individuals, “Sex Tape,” “Arrest,” “Mug Shot” or “Nude” — will draw clicks with magnetic consistency.

This actually gives negative content an advantage when you factor in that click-throughs on search results listings serve as a popularity/prominence signal — meaning that click-through rate functions as an indirect ranking signal. So regardless of whether the underlying mechanics incorporate sentiment analysis, Google has determined that this is how its rankings will function.
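
Here’s a simplified sketch of that feedback loop. If observed click-through rate is blended back into the ranking score (the 50/50 weighting below is purely an assumption for illustration), a scandal headline with mediocre relevance can overtake a more relevant, but less-clicked, official page:

```python
# Hypothetical CTR feedback: blend base relevance with observed
# click-through rate, then re-sort. All numbers are invented.

def rerank(results: list, ctr_weight: float = 0.5) -> list:
    for r in results:
        r["score"] = (1 - ctr_weight) * r["relevance"] + ctr_weight * r["ctr"]
    return sorted(results, key=lambda r: r["score"], reverse=True)

results = [
    {"title": "Acme Corp | Official Site",     "relevance": 0.95, "ctr": 0.30},
    {"title": "Acme Corp Lawsuit and Scandal", "relevance": 0.70, "ctr": 0.60},
]
for r in rerank(results):
    print(round(r["score"], 3), r["title"])
# 0.65  Acme Corp Lawsuit and Scandal  <- rubbernecking clicks win out
# 0.625 Acme Corp | Official Site
```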

There are also a few decidedly gray areas where the search engines would appear to be much more directly responsible for the content shown. For example, when you begin typing a query into a search box, Google’s Autocomplete and Bing’s Autosuggest both display a number of related search terms that attempt to anticipate what you may be looking for.

[Screenshot: Google Autocomplete suggestions for “Coca-Cola”]

From context and testing, we know that this functionality is based in large part upon the queries users submit to the search engines, in addition to content related to the search keywords and associated click-throughs to results. In fact, Google states:

Autocomplete predictions are automatically generated by an algorithm without any human involvement. The algorithm is based on a number of objective factors, including how often others have searched for a word.

The algorithm is designed to reflect the range of info on the web. So just like the web, the search terms you see might seem strange or surprising.

However, the result is that Autocomplete sometimes defames businesses and individuals by including suggestions with a negative sentiment.
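
A minimal sketch of the frequency-driven mechanism Google describes makes the problem easy to see. If suggestions are simply the most common logged queries beginning with the typed prefix (the query log below is invented, and real systems use many more signals), then negative queries, once common enough, get shown to every searcher who types the name:

```python
# Toy autocomplete: suggest the most frequently logged queries that
# start with the typed prefix. The log counts are invented.

from collections import Counter

query_log = Counter({
    "acme corp careers": 900,
    "acme corp reviews": 700,
    "acme corp scam":    650,
    "acme corp lawsuit": 400,
    "acme corp stock":   300,
})

def autocomplete(prefix: str, n: int = 4) -> list:
    matches = [(q, c) for q, c in query_log.items() if q.startswith(prefix)]
    return [q for q, _ in sorted(matches, key=lambda x: -x[1])[:n]]

print(autocomplete("acme corp"))
# ['acme corp careers', 'acme corp reviews', 'acme corp scam', 'acme corp lawsuit']
```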

While one could argue that the underlying data once again comes from third-party sources, the way it’s displayed makes its provenance unclear to the end user, and it begins to look more like an endorsement from the search engines. Arguably, the content displayed could be deemed highly prejudicial on the part of the search engines, because it may promote negative search terms to users who previously had no intention of submitting negative keyword combinations.

In combination with the click-through influence on rankings I outlined above, the Autocomplete/Autosuggest search terms can further bias rankings in favor of negative content. I’ve seen cases where fabricated defamation about my clients gets hardwired into the auto-promoted search terms, making it appear the search engine itself is making a judgment about my client.

In discussions with regulators all over the world, Google has long maintained that its rankings are established neutrally, by objective ranking algorithms functioning independently of human bias (although there are repeated instances of “manual” penalties for sites that contravene its rules, as well as signals from its staff of human evaluators that factor into rankings). It has also maintained that, even where its results are not neutral, they amount to its opinion as to what should rank.

Either way, Google maintains that it should not be held legally responsible for the content displayed.

Search Engines Created The Marketplace, So They Are Responsible For It

However, the big issue is that search engines have developed to the point where they are the marketplace itself. In the virtual world, your company effectively doesn’t exist if it cannot be found in that search engine marketplace. The corollary is that how you’re represented when you do appear in that dominant virtual marketplace largely determines how you’ll be perceived.

The search engines have come to stand in for the internet itself — they are the gateway for locating everything on it. Since the dominant search engines had major roles in creating this medium, they should be expected to bear a much greater responsibility for how it affects individuals and companies.

Search Engines Implicitly Acknowledge Their Responsibility

In fact, Google and Bing seem to implicitly acknowledge, through their actions over time, that they bear some responsibility. While the CDA seems to legally absolve them of responsibility for the content they display, Google and Bing have sometimes changed their algorithms, added processes and manually intervened to address reputation issues.

Google and Bing both block some sexually explicit and offensive terms from Autocomplete/Autosuggest, for instance. Google has taken this reputation-assistance adjustment even further, blocking words like “Scam” when combined with a business’s name, as well as other prejudicial, negative terms. I can tell that it has also extended this suppression to a number of negative terms when combined with common proper names of individuals.

Likewise, Google suppresses these negative term combinations for the related search links it provides on many search results pages.
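
The suppression described above can be pictured as a simple filter applied to suggestions before they’re displayed. The blocklist below is my guess at the kinds of terms involved, not Google’s actual list:

```python
# Hypothetical suppression layer: drop suggestions containing
# prejudicial terms before display. Blocklist contents are invented.

SUPPRESSED_TERMS = {"scam", "ripoff", "fraud"}

def filter_suggestions(suggestions: list) -> list:
    return [s for s in suggestions
            if not any(term in s.split() for term in SUPPRESSED_TERMS)]

print(filter_suggestions(
    ["acme corp careers", "acme corp scam", "acme corp reviews"]))
# ['acme corp careers', 'acme corp reviews']
```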

Google has taken more extensive steps to assist individuals’ reputations as well. In 2013, it moved to suppress the rankings of mugshot websites, apparently responding to criticism that those sites are thinly veiled forms of extortion. More recently, Google announced that it would begin facilitating the removal of revenge porn from search results when individuals request it, without requiring any legal documentation of defamation.

Search Engines Actually Have Helped Defamation Victims

The biggest area where the search engines have implicitly acknowledged some level of responsibility for defamatory content appearing in results involves requests to remove defamatory materials. Both search engines have allowed companies, individuals and their attorneys to submit removal requests in instances where search results include listings linking to pages that contain libelous content, provided there is a court document establishing that the content is indeed defamatory or false.

Apparently, Bing suspended processing of defamation removal requests around two years ago, though it previously cooperated with such requests and even sporadically approved a few removals up until a year ago. Google continues to remove defamatory content. As the underdog in the search market share wars, Bing may be trying to benefit from receiving a little less scrutiny in this matter, since the larger percentage of searchers is focused on Google.

Google has gone to great lengths to highlight the expense of processing these requests (periodically announcing how many thousands of removal requests it has received under Europe’s “Right To Be Forgotten” ruling, for instance). Similarly, Microsoft Bing representatives have related to me in conversations that these requests impose an undesirable cost on their corporation.

In the U.S., search engines are not legally required to remove defamatory content — so why did Bing once do it, and why does Google still do it, when it clearly imposes a substantial cost on these companies?

In many cases, there’s simply no other recourse available to defamed individuals and companies. If a website’s hosting company is offshore, there’s often no legal way for victims to compel removal at the source. It’s also sometimes impossible to trace who posted something, making it harder to sue the responsible individual and obtain a court order against them.

As I noted at the beginning of this article, the situation has left a lacuna where defamation can live on the internet, unimpeded. Our traditional laws intended for there to be some level of recourse by which one could halt damaging, illegal and false material, but the information age has stripped away some of that legal protection. As such, there’s enormous pressure on the system to address the wrongs committed against all these victims.

I believe that Google has made the very savvy move of being nicer than it’s legally required to be in order to relieve some of that pressure on the system. It’s not legally required to remove (or reduce the visibility of) defamatory material — yet it has done so, despite the cost and the absence of any associated profit. I think it has taken the strategic stance of going further than legally required in order to reduce the risk that U.S. laws could be changed to force it to do more, similar to the Right To Be Forgotten battle it lost in the EU.

Summary

You may not agree with my opinions in this article about just how much responsibility the search engines should bear where online defamation and reputation attacks occur. My belief is that there must be recourse for individuals so that their livelihoods cannot be utterly and undeservedly destroyed by malicious liars and haters.

The largest area of online exposure is usually within the search engines, when names are searched upon, so getting damaging material addressed there is the most effective way to resolve the issue for victims. Since the search engines bear some responsibility for creating the very medium involved, I believe they incur responsibility along with the profits.

Their own actions also implicitly acknowledge some level of responsibility. If you disagree about how much responsibility these businesses should bear, ask yourself: How would you feel if you, your wife, your son or your daughter were attacked online — and, consequently, damaging content appeared in the search results each time someone looked you up? I see the victims of this every day — people falsely labeled as rapists, pedophiles, criminal collaborators, negligent professionals, sluts, pimps and much more. How would you feel if these sorts of labels were stuck to you or your loved ones, and you couldn’t get them removed?

While the CDA gave a free pass to search engines, I wonder how long it would last if some legislators’ children experienced these sorts of online reputation issues.



Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.


About the author

Chris Silver Smith
Contributor
Chris Smith is President of Argent Media, and serves on advisory boards for Universal Business Listing and FindLaw. Follow him @si1very on Twitter and see more of his writing on reputation management on MarTech.
