A proposed new European copyright law could make memes illegal and threaten the future of the internet as we know it.

On June 20, the European Parliament will set in motion a process that could force online platforms like Facebook, Reddit and even 4chan to censor their users' content before it ever gets online.

The proposed law would require large websites to use "content recognition technologies" to scan for copyrighted videos, music, photos, text and code, a move that could impact everyone from the open source software community to remixers, livestreamers and teenage meme creators.

In an open letter to the President of the European Parliament, some of the world's most prominent technologists warn that Article 13 of the proposed EU Copyright Directive "takes an unprecedented step towards the transformation of the Internet from an open platform for sharing and innovation, into a tool for the automated surveillance and control of its users."

The directive includes a great deal of useful legislation to update copyright law and better reflect modern technologies. But Article 13 is problematic.

PROPOSED EU COPYRIGHT DIRECTIVE

ARTICLE 13.1

Information society service providers that store and provide to the public access to large amounts of works or other subject-matter uploaded by their users shall, in cooperation with rightholders, take measures to ensure the functioning of agreements concluded with rightholders for the use of their works or other subject-matter or to prevent the availability on their services of works or other subject-matter identified by rightholders through the cooperation with the service providers. Those measures, such as the use of effective content recognition technologies, shall be appropriate and proportionate. The service providers shall provide rightholders with adequate information on the functioning and the deployment of the measures, as well as, when relevant, adequate reporting on the recognition and use of the works and other subject-matter.

It's a direct threat to the established legal notion that individual users, rather than platforms, are responsible for the content they put online.

"Article 13 effectively deputizes social media and other Internet companies as copyright police, forcing them to implement a highly invasive surveillance infrastructure across their entire service offerings," says cryptographer and security specialist Bruce Schneier, one of the letter's signatories. "Aside from the harm from the provisions of Article 13, this infrastructure can be easily repurposed by government and corporations – and further entrenches ubiquitous surveillance into the fabric of the Internet."

Schneier and his fellow technologists, including figures responsible for the internet as we know it, like Tim Berners-Lee and Vint Cerf, are campaigning alongside the Electronic Frontier Foundation, Wikimedia and the Libraries and Archives Copyright Alliance, among many others.

The legislative committee first votes on the final form of the proposal on Wednesday, June 20. The version it votes through will be referred to the parliamentary plenary session, to be, almost certainly, voted into European law in the week of July 4 or, failing that, after the European Parliament returns from its summer recess in late September.

The Save Your Internet campaign is urging European internet users to contact their MEPs before the critical June 20 vote, and includes tools to facilitate communication with them via email, phone or social media.

Area of effect

Although it's primarily intended to prevent the online streaming of pirated music and video, the scope of Article 13 covers any and all copyrightable material, including images, audio, video, compiled software, code and the written word.

Internet memes, which most commonly take the form of viral images, endlessly copied, repeated and riffed on, could fall into a number of those categories, creating an improbable scenario in which one of the internet's most distinctive and commonplace forms of communication is banned.


The definitions used in Article 13 are broad by design, says writer and digital rights activist Cory Doctorow: "This system treats restrictions on free expression as the unfortunate but unavoidable collateral damage of protecting copyright. Automated systems just can't distinguish between commentary, criticism, and parody and mere copying, nor could the platforms employ a workforce big enough to adjudicate each case to see if a match to a copyrighted work falls within one of copyright's limitations and exceptions."

Meme makers don't have the kind of organised front that code-sharing platforms or the Wikimedia Foundation can present, but there have been a few, albeit rather muted, efforts to raise a fuss among meme-making groups on Reddit, Facebook and 4chan, with leftist meme creators in particular expressing concerns that the new law "will result in blanket meme bans because they can't keep up with actually checking against parody laws".

A redditor from r/dankmemes has passionately proclaimed that "you can take our internet and our rights, but you can never take our memes." And it gets weirder the further right you go, as conspiracy theories proliferate. One denizen of 4chan's /pol/ went so far as to suggest that attempts to muster support against Article 13's content platform filtering are "a pro-Article 13 psyop meant to make the opposition look uncool", while other comment threads on 4chan and Breitbart focussed on the always fertile alt-right tactics of blaming the Jews, female MEPs, and hedge fund magnate George Soros.

The technology to filter out memes, or for that matter any copyrighted material, would require a significant investment of time and money to develop. This means we could see the detection of copyrighted material outsourced to companies with the means to carry it out effectively, most likely US internet giants such as Amazon and Google.

A filtering system would be very likely to see European users' posts analysed by US firms, which could expose their data to the US's far less stringent privacy controls, despite the EU-US Privacy Shield framework for data protection.

The Max Planck Institute for Innovation and Competition's formal response to Article 13 also notes that this kind of automatic filtering is in breach of both the European Charter of Fundamental Rights and Article 15 of the E-Commerce Directive, which prohibits Member States from "imposing on providers that enjoy the protection of a safe harbour, general obligations to monitor the information which they transmit or store, as well as general obligations actively to seek facts or circumstances indicating illegal activity."

Article 13's statements that it concerns sites that provide "large amounts of works" and that "measures, such as the use of effective content recognition technologies, shall be appropriate and proportionate" may give leeway to smaller platforms to avoid intensive copyright filtering, while some proposed alternative versions of the article even omit reference to content recognition.

However, that's by no means certain, and the additional burden of policing copyright, Doctorow says, could stifle the development of new platforms and technologies within the EU for years to come.

Detection tech


Google, Facebook and Amazon have advanced image recognition algorithms based on machine learning. TinEye uses hashes as unique signatures to identify specific images in whole or in part, and Google uses a similar technique to spot specific screener copies of movies if they're uploaded to Drive.
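To illustrate the general idea, here is a minimal sketch of a perceptual "average hash" fingerprint in Python. It is not TinEye's or Google's actual algorithm, and the file names and the 10-bit match threshold in the usage comment are assumptions for illustration only.

```python
# Minimal sketch of a perceptual "average hash" fingerprint, the rough idea
# behind hash-based image matching (not any vendor's actual algorithm).
from PIL import Image

def average_hash(path: str, hash_size: int = 8) -> int:
    """Shrink an image to a tiny greyscale grid and encode each pixel as
    brighter/darker than the mean, yielding a 64-bit fingerprint."""
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; small distances suggest near-duplicate images."""
    return bin(a ^ b).count("1")

# Hypothetical usage: two memes built on the same template should land within
# a few bits of each other (file names and threshold are made up).
# if hamming_distance(average_hash("original.png"), average_hash("remix.png")) <= 10:
#     print("likely a match")
```

Crucially, a fingerprint like this only says that two images look alike; it has no way of knowing whether the resemblance is infringement, parody or quotation.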

Text checking is more straightforward. Plagiarism detection services for the written word are provided by companies such as Grammarly and Plagiarism Checker X, although they're limited by the content available for them to check against, while CopyLeaks lets copyright holders see if their work has been plagiarised online.
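As a rough illustration of how text matching works in principle, and not any particular vendor's method, here is a minimal sketch of n-gram "shingling" with Jaccard similarity; the shingle size and any threshold a caller applies are arbitrary assumptions.

```python
# Minimal sketch of n-gram "shingling" for text overlap, the general idea
# behind plagiarism detection (illustrative only; shingle size is arbitrary).
def shingles(text: str, n: int = 5) -> set:
    """Break text into overlapping n-word phrases."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a: str, b: str, n: int = 5) -> float:
    """Fraction of shared shingles between two texts (0.0 to 1.0)."""
    sa, sb = shingles(a, n), shingles(b, n)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

# A high score flags substantial copying, but the metric cannot distinguish
# quotation, criticism or parody from plagiarism.
```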

YouTube's Content ID system detects copyrighted materials by matching its users' uploaded videos with audio tracks that copyright holders can upload via the platform's content verification program portal.
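A heavily simplified sketch of that matching flow is below, assuming a fingerprint-comparison function like the earlier hash example; the data structure, the 10-bit threshold and the policy names are illustrative assumptions, not YouTube's actual implementation.

```python
# Sketch of a Content ID-style lookup: uploads are fingerprinted and compared
# against reference fingerprints supplied by rightsholders. Everything here
# (fields, threshold, policy names) is an illustrative assumption.
from dataclasses import dataclass

@dataclass
class Reference:
    rightsholder: str
    fingerprint: int          # e.g. a perceptual or audio hash of the reference file
    policy: str = "block"     # rightsholder-chosen action: block, monetise, track

def matches(a: int, b: int, max_bits: int = 10) -> bool:
    """Treat two fingerprints as a match if only a few bits differ."""
    return bin(a ^ b).count("1") <= max_bits

def evaluate_upload(upload_fp: int, references: list[Reference]) -> str:
    """Return the action to apply to an upload; default is to publish."""
    for ref in references:
        if matches(upload_fp, ref.fingerprint):
            return f"{ref.policy} (claimed by {ref.rightsholder})"
    return "publish"
```

Note that in a scheme like this the rightsholder's reference file and chosen policy decide the outcome before any human looks at the upload, which is where the disputes described below begin.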

However, false and inaccurate copyright claims are a frequent occurrence, while a great deal of copyrighted material goes onto the platform unnoticed, either due to clever evasion tactics such as re-editing content or because the copyright holder hasn't been in a position to upload a reference file.

"These systems are wildly imperfect and will not merely catch matches for copyrighted works, but also false positives," Doctorow says. "Big media companies – with the ear of the big platforms – will be able to pick up the phone and have someone unblock a piece of media that was falsely flagged, but the rest of us will be stuck firing off an email and crossing our fingers.


Even now, firms accidentally claim copyright over works they don't own. Doctorow highlights the example of US news programmes that broadcast public domain government footage, such as NASA launches and Congressional debates, include it in their newscasts and then claim those newscasts on YouTube. Then, he says, "when NASA or C-Span or whomever tries to upload their footage, they're blocked because a newscaster has sloppily filed a false copyright claim."

With no ability to identify context, automated copyright flagging systems are also likely to remove important content because of the incidental appearance of copyrighted material in the background, ignoring principles of fair dealing enshrined in the copyright laws of many EU countries: "something like having your protest footage blocked because of a passing motorist whose car radio was blaring a pop song – it is a match, but not one that infringes copyright."

Doctorow highlighted the potential for unanticipated abuse of any automated copyright filtering system to make false copyright claims, engage in targeted harassment and even silence public discourse at sensitive times.

"Because the directive does not provide penalties for abuse – and because rightsholders will not tolerate delays between claiming copyright over a work and suppressing its public display – it will be trivial to claim copyright over key works at key moments or use bots to claim copyrights on whole corpuses.

The nature of automated systems, particularly if powerful rightsholders insist that they default to blocking potentially copyrighted material and releasing it only if a complaint is made, would make it easy for griefers to lodge copyright claims over, for example, relevant Wikipedia articles on the eve of a Greek debt-default referendum, or over public domain content such as the entirety of Wikipedia or the complete works of Shakespeare.

"Making these claims will be MUCH easier than sorting them out – bots can use cloud providers all over the world to file claims, while companies like Automattic (Wordpress) or Twitter, or even projects like Wikipedia, would have to marshall vast armies to sort through the claims and remove the bad ones – and if they get it wrong and remove a legit copyright claim, they face unbelievable copyright liability."