German Parliament Hears Experts On Proposed Law To Limit Search Engines From Using News Content
Yesterday, the Judiciary Committee of the German Bundestag — Germany’s national parliament — held an expert hearing on a proposed “Leistungsschutzrecht” law for news publishers. The law, known as “ancillary copyright” in English, would require search engines and others — perhaps even Facebook, Twitter and individual bloggers — to pay news publishers if they link to or even briefly summarize news content.
The hearing didn’t result in a vote. It was the next step in a process that may or may not end with Leistungsschutzrecht becoming law. Below is some background on what happened at the hearing, along with some analysis of the law from my perspective.
Before I go further, an important note. I am not a lawyer. English is not my native tongue, and I am not a trained journalist. Please do not expect legal expertise, journalistic style or proper English. I do not speak on behalf of anyone else. This is my personal opinion. But I have been tracking “Leistungsschutzrecht für Presseverleger” since 2009, as part of my work for Wikimedia Deutschland, as we’ve tried to figure out what impact it might have on us or others.
Background About Leistungsschutzrecht
Under the current copyright law in Germany, it is perfectly legal to operate a for-profit search engine that contains search results from German news publishers. It is legal to allow users to enter search terms and to display search results that contain a link to pages matching the search terms. It is also legal to add a snippet to this link, a small piece of information that helps the user decide whether a result is worth clicking on.
The draft Leistungsschutzrecht bill (known as “LSR”) was formally proposed by the German government coalition in August of 2012 and passed to the German parliament for action. That action will involve several readings and committee reviews, of which yesterday’s hearing was only one.
If this bill is enacted as-is, search engines would no longer be allowed to display snippets unless they had received permission first. This is a crucial aspect of the Leistungsschutzrecht. It is not meant as an opt-out tool that lets you operate a search engine unless a publisher objects and has its web sites removed. Quite the opposite: it is up to the search engine operator to ask for permission first.
LSR would grant news publishers an exclusive right to “make public” news content for one year, though what exactly “news content” would be is unclear. Part of the proposed law defines this as essentially doing what the press “usually does,” explicitly mentioning informing, offering opinions and entertaining.
After the one-year exclusivity period has passed, the right would be transferable. There are provisions to ensure authors and other rights holders are not disadvantaged. Other provisions would extend rules specifically aimed at requiring search engines and “commercial providers of services who process content respectively” to seek permission for use.
The bill does not have requirements on payments. It is quite possible that in an LSR world, a news publisher grants permission to a search engine for free use. It is also possible that the press publisher pays the search engine in order to get included into a web site. In any case, the default switch is set to, and I am going to use a German term here you might remember from the TV show Hogan’s Heroes: VERBOTEN.
How Would It Work? Unclear
There are many unanswered questions and uncertainties about how LSR would work. For example, how does a search engine know whether something is a news story? The answer is simple: it doesn’t.
A news story published by a journalist on his personal web site and meant as an example of his skills will not be covered by the LSR, whereas the exact same text on the web site of a publishing house will be covered by the LSR.
It is a matter of open debate whether blogs will qualify as press products. A safe approach for a search engine operator will be to assume that all content is press content until evidence to the contrary surfaces.
Next question: Why don’t publishers simply use robots.txt to exclude Google from listing them? The answer, again, is simple: A large share of their revenue comes from showing ads to visitors who come in via Google and Google News. There is no economic point in shutting down one of your largest sources of income. Publishers want Google to continue sending them customers and, at the same time, to get paid for it.
Who doesn’t want to have their cake and eat it, too? Some publishers have tried to present robots.txt as a binary measure, incapable of preventing the indexing of only parts of a web site, of selectively excluding services like Google News, or of disabling snippets. That claim has been disproven by (LSR opponent and journalist) Stefan Niggemeier.
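Niggemeier’s point — that robots.txt is not all-or-nothing — is easy to demonstrate. Below is a hypothetical robots.txt (the user-agent names are the real ones Google documents; the site and URL are made up for illustration), parsed with Python’s standard-library robots.txt parser, showing a site that stays in regular Google search while opting out of Google News:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt for a news site: the regular Google crawler
# may index everything, while the Google News crawler is shut out.
ROBOTS_TXT = """\
User-agent: Googlebot-News
Disallow: /

User-agent: Googlebot
Allow: /
"""

parser = RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Google News is excluded, regular Google search is not.
print(parser.can_fetch("Googlebot-News", "https://example.com/politik/story.html"))  # False
print(parser.can_fetch("Googlebot", "https://example.com/politik/story.html"))       # True
```

Snippets can likewise be switched off on a per-page basis with the `nosnippet` robots meta tag, without leaving the index at all — which is exactly the granularity the publishers claimed was impossible.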
Yesterday’s Expert Hearing
Yesterday’s hearing was, as best we know, the only expert hearing that will be held as the proposed law is considered. It lasted for three hours. It allowed members of the Bundestag to ask questions to nine experts who were picked by the five parties in parliament (larger parties got to allocate more expert slots). These experts were allowed to submit written statements before the hearing (you can find them in German here).
Parts of the hearing were easily predictable. Members of the political opposition (which is against the proposed law) had picked scholars to speak against it. The governing parties had picked experts supporting the law. Questions from one party to their own experts were meant to create the opportunity for these experts to elaborate on their arguments in favor or against the Leistungsschutzrecht.
While it was an expert hearing, it wasn’t focused on technological expertise. Instead, it was more a scholarly or philosophical examination of how such a law might fit into the current framework of constitutional law, European Union law and the current doctrine of copyright.
Whenever a technical question was raised, the answers were vague, evasive or simply hilarious.
A representative from the publishers’ associations in favor of the law kept repeating a demand for a technical language with a much richer vocabulary than robots.txt, able to express conditions such as temporal, topical or size restrictions, payment requirements and other terms.
So far, he has been unable to present a viable way this could be implemented, and he has failed to acknowledge the obvious: the language he proposes would only constitute an invitation to negotiate terms. It would not replace the negotiations themselves, bringing us back to the prospect of a huge and crippling bureaucratic burden of opt-in rights negotiations.
All experts in this hearing agreed that this law will create an era of legal uncertainty, and it will require a series of lawsuits to create enough case law to see who will actually be within the sights of the LSR. The time span mentioned was five years or more. Until then, any professional legal advisor would be required to warn his clients against innovating or investing in search engine projects in Germany because of such uncertainty.
Till Kreutzer, a legal expert and vocal LSR opponent who was among the nine invited experts, was right to point that out. He’s been working on this topic since it first emerged as a proposal by the publishers’ association in 2009. He and several other experts agreed during the hearing that the current LSR design will most definitely favor large corporations over smaller startups and smaller publishing houses, and that the economic effects of this law will strengthen the position of both Google and companies like the Axel Springer publishing house.
Beyond Google: Twitter & Facebook?
At one point, parliament member Burkhard Lischka, of the opposition Social Democrats party, asked expert Christoph Keese of the Axel Springer publishing house if the proposed law would apply to news content that is shared on Twitter and Facebook.
Keese ducked the question. That’s probably because he, like most, doesn’t know. No one will know for certain until a court rules.
The Law Has No Clothes?
A key point during the hearing, to me, was when the parliament member and committee chair Siegfried Kauder, of the Christian Democrats party that’s part of the ruling government coalition, stated that after hearing from the experts, it seemed that the law was unlikely to actually produce new income for news publishers. Given this, he asked, why did it make sense to still pursue it?
Despite Kauder’s party being in favor of the law, he’s opposed to it but unable to actually stop it. His statement resulted in some laughter, perhaps because it was an “emperor has no clothes on” moment. So many seem to understand the absurdities in the proposed law that it’s almost humorous.
What happens next? There might be political pressure to have another expert hearing, this time with experts from the technical and commercial world. However, the chances of that are slim, and the governing coalition has shown little interest in dropping this endeavor.
In particular, there’s a general election of the Bundestag in September. If this law is going to have a good chance to pass, it has to reach the President’s desk well ahead of that date. The coalition government might try to speed up getting recommendations in favor of introducing such a Leistungsschutzrecht in order to have the second and final required readings by February. The very last possible date for a second and final reading is the last week of June.
In the meantime, all it took was a skilled programmer to come up with a tiny Chrome extension that will detect URIs in plain text and create snippets on the fly, thereby completely circumventing the LSR. You can test the proof of concept here, or have a look at the screenshot that compares a demo page with and without the plugin enabled.
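The extension’s actual source isn’t reproduced here, but the underlying idea is simple enough to sketch. The following Python sketch (the function names and demo page are my own invention, and a real tool would fetch pages over HTTP rather than via an injected callable) finds URLs in plain text and builds a snippet from each page’s title and visible text — exactly the kind of output the LSR would restrict search engines from showing:

```python
import re

URL_RE = re.compile(r'https?://[^\s<>"]+')
TITLE_RE = re.compile(r"<title[^>]*>(.*?)</title>", re.IGNORECASE | re.DOTALL)
TAG_RE = re.compile(r"<[^>]+>")

def find_urls(text):
    """Return all http(s) URLs found in a chunk of plain text."""
    return URL_RE.findall(text)

def build_snippet(url, fetch, length=120):
    """Build a 'Title: first <length> chars of page text' snippet.
    `fetch` is any callable returning the page's HTML, so the sketch
    stays testable without network access."""
    page = fetch(url)
    match = TITLE_RE.search(page)
    title = match.group(1).strip() if match else url
    # Crudely strip tags and collapse whitespace to get visible text.
    text = re.sub(r"\s+", " ", TAG_RE.sub(" ", page)).strip()
    return f"{title}: {text[:length]}"

# Demo with a canned page instead of a real HTTP request.
def fake_fetch(url):
    return ("<html><head><title>Demo headline</title></head>"
            "<body><p>First paragraph of the story.</p></body></html>")

for url in find_urls("Read http://example.com/story for details."):
    print(build_snippet(url, fake_fetch))
```

A few dozen lines like these replicate what the law would forbid, which illustrates how easily it could be routed around.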
It is almost certain that if this LSR becomes law, a huge amount of legal expertise and programming skill will be wasted on workarounds to the LSR that restore what had been possible before, and on technical or legal countermeasures to stop those workarounds. One might think there are enough real problems to work on already.
Editor’s Note From Danny Sullivan: My thanks to Mathias for contributing this report and his views. I couldn’t find any decent English-language coverage of yesterday’s hearing, so I’m glad he agreed to provide this contributed piece — and far better than I would have done trying to write something up in what I remember of my college German.
What’s most amazing about all this is that those pushing for this law inside and outside the government seem to have entirely forgotten that Europe has been here before, in the form of ACAP. That was a grand proposal — largely driven by news publishers in Europe — to figure out a way to curb what they saw as unfair use of their news content by search engines through a convoluted rights mechanism.
Despite the large amounts of time and money put into developing ACAP, it went nowhere, for various reasons. Chief among them was that it was a complex system that tried to replace a fairly simple model that’s been used for almost two decades now — the robots.txt system. Robots.txt lets you be in a search engine if you want. If you don’t like it, or don’t feel you get value from it, you can opt out. It’s easy and effective.
We’ll be looking more at Leistungsschutzrecht in the near future, in particular trying to get some more details of how those in favor of it think it could actually work and would potentially be better than the system we have now.
Given today’s hearing, and what I’ve read of it over the past few months, I’ll be surprised if there’s anything more compelling than what ACAP was — and thus surprised even more so if it really goes forward as some unclear law with unclear technical standards.
For more on ACAP, see the first two articles below, with some related reading below that:
- ACAP Launches, Robots.txt 2.0 For Blocking Search Engines?
- Head-To-Head: ACAP Versus Robots.txt For Controlling Search Engines
- The New York Times Paywall Meters All Google Visits, Not Just Search Visits
- After Meeting With Eric Schmidt, France Stands By Threat To Write Law Forcing Google To Pay To Link To News Sites
- Google Wins Street View Reprieve In Germany But Confronts New Pro-Newspaper Copyright Restrictions
Opinions expressed in this article are those of the guest author and not necessarily Search Engine Land. Staff authors are listed here.