Demanding More Detail, Legal Group Calls On Google To Disclose RTBF Criteria

The group of 80 lawyers and academics wants much more information from Google.



At a conference in Berlin, Google’s global privacy counsel Peter Fleischer offered a window into Google’s “right to be forgotten” (RTBF) decision-making process:

The requests . . . first go to a large team of lawyers, paralegals and engineers who decide the easy cases . . . Google has dozens of people working on the requests, mostly out of the company’s European headquarters in Dublin, a Google spokesman said . . .

The harder ones get bumped up to the senior Google panel. Like many Google meetings, some participants are in a conference room, while others join remotely through the company’s Hangouts video-chat product, a spokesman said. Sometimes the group calls in outside experts, such as lawyers with particular specialties.

Fleischer added that, following the discussion of each case, the assembled group votes. It’s important to point out that individuals whose RTBF requests are denied can appeal to their local data protection authorities for recourse. We don’t, however, have any data on how many of those appeals are granted over Google’s denials.

Now a group of self-declared “internet scholars” is calling on Google to disclose much more detail about its RTBF analysis and decision-making. In an “open letter,” the group of 80 signatories has asked “at a minimum” for the following pieces of information:

  1. Categories of RTBF requests/requesters that are excluded or presumptively excluded (e.g., alleged defamation, public figures) and how those categories are defined and assessed.
  2. Categories of RTBF requests/requesters that are accepted or presumptively accepted (e.g., health information, address or telephone number, intimate information, information older than a certain time) and how those categories are defined and assessed.
  3. Proportion of requests and successful delistings (in each case by % of requests and URLs) that concern categories including (taken from Google anecdotes): (a) victims of crime or tragedy; (b) health information; (c) address or telephone number; (d) intimate information or photos; (e) people incidentally mentioned in a news story; (f) information about subjects who are minors; (g) accusations for which the claimant was subsequently exonerated, acquitted, or not charged; and (h) political opinions no longer held.
  4. Breakdown of overall requests (by % of requests and URLs, each according to nation of origin) according to the WP29 Guidelines categories. To the extent that Google uses different categories, such as past crimes or sex life, a breakdown by those categories. Where requests fall into multiple categories, that complexity too can be reflected in the data.
  5. Reasons for denial of delisting (by % of requests and URLs, each according to nation of origin). Where a decision rests on multiple grounds, that complexity too can be reflected in the data.
  6. Reasons for grant of delisting (by % of requests and URLs, each according to nation of origin). As above, multi-factored decisions can be reflected in the data.
  7. Categories of public figures denied delisting (e.g., public official, entertainer), including whether a Wikipedia presence is being used as a general proxy for status as a public figure.
  8. Source (e.g., professional media, social media, official public records) of material for delisted URLs by % and nation of origin (with top 5-10 sources of URLs in each category).
  9. Proportion of overall requests and successful delistings (each by % of requests and URLs, and with respect to both, according to nation of origin) concerning information first made available by the requestor (and, if so, (a) whether the information was posted directly by the requestor or by a third party, and (b) whether it is still within the requestor’s control, such as on his/her own Facebook page).
  10. Proportion of requests (by % of requests and URLs) where the information is targeted to the requester’s own geographic location (e.g., a Spanish newspaper reporting on a Spanish person about a Spanish auction).
  11. Proportion of searches for delisted pages that actually involve the requester’s name (perhaps in the form of % of delisted URLs that garnered certain threshold percentages of traffic from name searches).
  12. Proportion of delistings (by % of requests and URLs, each according to nation of origin) for which the original publisher or the relevant data protection authority participated in the decision.
  13. Specification of (a) types of webmasters that are not notified by default (e.g., malicious porn sites); (b) proportion of delistings (by % of requests and URLs) where the webmaster additionally removes information or applies robots.txt at source (see the brief robots.txt illustration after this list); and (c) proportion of delistings (by % of requests and URLs) where the webmaster lodges an objection.
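
For readers unfamiliar with the mechanism in item 13(b), “applies robots.txt at source” means the webmaster, rather than Google, blocks the page from being crawled. A minimal, hypothetical illustration (the file path is invented) of a robots.txt rule covering a single page:

  User-agent: *
  Disallow: /2010/old-story.html

Note that robots.txt only asks compliant crawlers not to fetch the page; getting an already indexed page fully out of search results generally requires a noindex directive or removal of the content itself.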

While I assume that Google is using a consistent process and set of criteria, it would be positive for Google to publish its guidelines, though not necessarily at this level of specificity, so that the public and governments could better understand the process.

However, there are always going to be difficult and close cases that require “judgment calls.” That’s why there are appeals; Google may not get it right every time. Reputation VIP has estimated that Google now denies 70 percent of RTBF requests.

Separately, EU data protection authorities, acting through the Article 29 Working Party (the “WP29” referenced in the letter), previously issued their own criteria for determining whether to grant or deny an RTBF request:

  1. Does the search result relate to a natural person – i.e. an individual? And does the search result come up against a search on the data subject’s name?
  2. Does the data subject play a role in public life? Is the data subject a public figure?
  3. Is the data subject a minor?
  4. Is the data accurate?
  5. Is the data relevant and not excessive?
  6. Is the information sensitive within the meaning of Article 8 of the Directive 95/46/EC?
  7. Is the data up to date? Is the data being made available for longer than is necessary for the purpose of the processing?
  8. Is the data processing causing prejudice to the data subject? Does the data have a disproportionately negative privacy impact on the data subject?
  9. Does the search result link to information that puts the data subject at risk?
  10. In what context was the information published?
  11. Was the original content published in the context of journalistic purposes?
  12. Does the publisher of the data have a legal power – or a legal obligation – to make the personal data publicly available?
  13. Does the data relate to a criminal offence?

Now that Google has taken on a quasi-judicial function in deciding whether to grant or deny RTBF requests, we’re sure to see more procedural demands like these from the parties and observers surrounding the process.

It might have been better to let an EU-wide committee manage the RTBF submission and decision-making process, with Google as a participant, rather than deferring entirely to Google and inviting this kind of grousing from the sidelines. Either Google or the affected individuals could then appeal to an arbitrator if they opposed the outcome.


Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land.


About the author

Greg Sterling
Contributor
Greg Sterling is a Contributing Editor to Search Engine Land, a member of the programming team for SMX events and the VP, Market Insights at Uberall.
