If Future Employees Compromise Privacy, Don’t Expect Google To Tell The World
Earlier this week, news broke that a Google employee had been fired in July for accessing the private information of a few individuals without permission, in violation of Google’s policies. Isn’t that something Google should be reporting itself, I wondered? It makes sense to me, but don’t expect Google to do so.
Google already reports aggregate information about the governmental data requests it receives, a practice it began in April to support transparency and shine light on what governments might seek from the company. From the company’s blog post:
We hope this tool will shine some light on the scale and scope of government requests for censorship and data around the globe. We also hope that this is just the first step toward increased transparency about these actions across the technology and communications industries.
The message in that to me is clear. If you’re worried Google has data that might fall into government hands, here’s a guide to the governments you ought to pressure.
But Google’s data isn’t just vulnerable to outside forces. The July firing, which Gawker discovered this week, highlights that Google’s data is also subject to internal leaks and abuse. In fact, Google’s told TechCrunch that this is the second time something like this has happened, where an employee was fired for accessing private data in violation of company guidelines.
Two incidents is practically nothing. If anything, it highlights that Google has a pretty good track record of not having internal leaks.
But then again, why not let Google users know directly any time something like this happens in the future, in the spirit of transparency and openness that Google preaches? It doesn’t have to name the employee or the individuals involved, or get into specific details. But why not expand the existing government requests page into a data spillage page?
Any time personal data is accessed from Google, without the permission of the user, log it.
If a government puts in a request, log it.
If Google discovers a security break-in as with the China situation earlier this month, log it.
If a civil court case demands information through a subpoena, log it.
If an employee does something wrong with private data, log it.
It seems easy enough for Google to report directly that an employee accessed data for a certain number of people, that those people have been contacted, and that Google considers the situation resolved.
Yes, there would be the inevitable negative news story if this happens again. But what’s better for Google: to break that news itself, or for it to come out months later through a leak to a third party?
If Google isn’t going to tell everyone when a data leak happens, then it can expect to be asked about this constantly going forward. “So, any employees violate privacy policies this month? Just checking….” And it also turns Gawker into required reading for concerned Google users, in my book.
Google’s official statement on the situation is as follows:
We dismissed David Barksdale for breaking Google’s strict internal privacy policies. We carefully control the number of employees who have access to our systems, and we regularly upgrade our security controls–for example, we are significantly increasing the amount of time we spend auditing our logs to ensure those controls are effective. That said, a limited number of people will always need to access these systems if we are to operate them properly–which is why we take any breach so seriously.
Officially, Google had no response about my idea of logging future privacy violations by employees, if they should happen. However, my unofficial understanding from Google about that idea is that it sees these incidents as so rare and so isolated in terms of the users involved that there’s no need to do so, especially given that it doesn’t see other companies doing the same – and that internal personnel matters are involved.
Gawker’s back with a follow-up piece of unanswered questions, by the way. I can shed some light on some of those.
Was everyone notified who had their privacy violated?
Yes, that’s my understanding. In fact, that’s one reason Google, as I understand it, felt my idea that it should have told the world about the leak made little sense. All the parties directly involved had been told.
My feeling is that the leak itself goes beyond the actual people involved. It goes to the trust of Google’s internal systems, period. The leak indicates a system failure, and even when other failures impact only a few people, Google often tells the world.
Heck, when Google discovered that its systems were being attacked in part to get private data from some Chinese human rights activists, it threatened to leave the country entirely. But it can’t tell us if an employee reads some private data?
What will prevent this in the future?
I had a chuckle about Gawker publisher Nick Denton suggesting there should be a “double key” required to access data. I had the same thought, almost picturing scenes like WarGames and nuclear missile silos where both launch keys have to be turned at the same time.
My understanding is that Google thinks this would be difficult. I also wondered if there’s a way for Google to keep data encrypted, so that its machines could read and process it as needed — know the key, so to speak — but employees themselves couldn’t decrypt it for human viewing. My understanding is that this is potentially unworkable or might make things incredibly slow.
In the end, I don’t buy into this being some type of internal matter that was dealt with, or which can’t be discussed in further detail, or that if it happens again, wouldn’t be something Google could tell the world.
(Some images used under license from Shutterstock.com.)