The European Internet Services Providers Association (EuroISPA) has warned that the European Commission’s new plan for tackling “illegal” online content risks forcing internet providers to adopt aggressive filtering and monitoring technologies, which could end up removing legitimate content.

The EC has today proposed a range of new “guidelines and principles” that aim to increase the “proactive prevention, detection and removal” of “illegal” content inciting hatred, violence and terrorism online. Few could argue with the noble intentions behind this, and anything that rids the online world of such horrific content should, on the surface, be considered a good thing.

In response, the EC has proposed common tools to “swiftly and proactively detect, remove and prevent” the reappearance of such content.

The Proposals

* Detection and notification:
Online platforms should cooperate more closely with competent national authorities, by appointing points of contact to ensure they can be contacted rapidly to remove illegal content. To speed up detection, online platforms are encouraged to work closely with trusted flaggers, i.e. specialised entities with expert knowledge on what constitutes illegal content. Additionally, they should establish easily accessible mechanisms to allow users to flag illegal content and to invest in automatic detection technologies.

* Effective removal:
Illegal content should be removed as fast as possible and can be subject to specific timeframes where serious harm is at stake, for instance in cases of incitement to terrorist acts. The issue of fixed timeframes will be further analysed by the Commission. Platforms should clearly explain their content policy to users and issue transparency reports detailing the number and types of notices received. Internet companies should also introduce safeguards to prevent the risk of over-removal.

* Prevention of re-appearance:
Platforms should take measures to dissuade users from repeatedly uploading illegal content. The Commission strongly encourages the further use and development of automatic tools to prevent the re-appearance of previously removed content.
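
The Commission’s guidance doesn’t specify what these “automatic tools” should look like in practice. The most widely deployed approach is hash matching against a database of previously removed items (systems such as Microsoft’s PhotoDNA work on this principle, albeit with perceptual rather than exact hashes). The sketch below is purely illustrative and assumes a simple exact-hash store; none of the names or design choices come from the EC’s proposals.

```python
# Purely illustrative sketch of hash-based re-upload filtering (an assumed design,
# not something prescribed by the EC guidelines). Real deployments use perceptual
# hashing so that re-encoded or lightly edited copies still match; exact SHA-256
# matching, as here, only catches byte-identical re-uploads.
import hashlib

removed_content_hashes: set[str] = set()  # hypothetical store of removed items

def record_removal(content: bytes) -> None:
    """Remember a removed item so identical re-uploads can be blocked later."""
    removed_content_hashes.add(hashlib.sha256(content).hexdigest())

def is_known_removed(content: bytes) -> bool:
    """Return True if an upload is byte-identical to previously removed content."""
    return hashlib.sha256(content).hexdigest() in removed_content_hashes

# Example usage
original = b"bytes of a previously removed propaganda video"
record_removal(original)
print(is_known_removed(original))                   # True  - exact copy is caught
print(is_known_removed(original + b" re-encoded"))  # False - any change evades it
```

That gap between exact matching and slightly altered copies is precisely why platforms reach for fuzzier perceptual hashes and machine-learning classifiers, and that is where the over-blocking risk discussed further below creeps in.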

Under this plan the EC expects internet providers to proactively implement its guidelines, which is something that will be monitored over the next few months. However, if the Commission decides that not enough is being done, then it has also threatened to impose legislation to force the “swift and proactive detection and removal of illegal content online” (this work will be completed by May 2018).

Andrus Ansip, VP for the Digital Single Market, said:

“We are providing a sound EU answer to the challenge of illegal content online. We make it easier for platforms to fulfil their duty, in close cooperation with law enforcement and civil society. Our guidance includes safeguards to avoid over-removal and ensure transparency and the protection of fundamental rights such as freedom of speech.”

Vera Jourová, Commissioner for Justice, added:

“The rule of law applies online just as much as offline. We cannot accept a digital Wild West, and we must act. The code of conduct I agreed with Facebook, Twitter, Google and Microsoft shows that a self-regulatory approach can serve as a good example and can lead to results. However, if the tech companies don’t deliver, we will do it.”

The trouble with all this is that commercial broadband ISPs and online content providers make for a lousy internet police force. Content providers in particular face the same impossible resourcing challenge as a real-world police force.

For example, in the online world it’s common for a website run by just a handful of people, or fewer, to reach thousands or even millions of visitors. This makes manual validation of every piece of user-submitted content impossible and forces such providers to adopt automated solutions, which may not be affordable for smaller services.

Unfortunately such filters are just as likely to stifle free speech as they are to tackle nasty content. Lest we forget, there’s also the thorny problem of how you define “hatred” and “terrorism” online in the first place, and then separate that from related content which may include criticism of the same subject, as well as satire, the right to cause offence, political free speech and so forth (context is vital).
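
To make that concern concrete, here is a deliberately naive keyword filter of the sort a small provider might bolt on if pushed to “do something” automatically. The word list and example posts are invented for illustration; the point is that keywords carry no context, so reporting and satire get flagged just as readily as genuine incitement.

```python
# Deliberately naive keyword filter - invented purely to illustrate why
# context-blind automated moderation over-blocks legitimate speech.
BLOCKED_TERMS = {"terrorism", "attack", "hate"}  # hypothetical word list

def naive_filter(post: str) -> bool:
    """Flag a post if it merely contains a blocked term, ignoring all context."""
    words = {word.strip(".,!?\"'").lower() for word in post.split()}
    return bool(words & BLOCKED_TERMS)

posts = [
    "Join the attack on the square tomorrow",     # arguably incitement
    "Government condemns yesterday's terrorism",  # news reporting
    "A satirical song about hate and hypocrisy",  # satire / commentary
]
for post in posts:
    print(naive_filter(post), "->", post)
# All three are flagged, even though only the first could plausibly be illegal.
```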

EuroISPA Statement

The guidelines endorse the trend that has seen policymakers across Europe force online intermediaries to play judge, jury and executioner with regard to online content control. As EuroISPA has consistently argued for 20 years, such privatised enforcement undermines due process and natural justice, a key underpinning for the enjoyment of fundamental rights.

Such core values would be further frustrated by any move towards a notice & staydown regime (a possibility the guidelines leave open), whereby Internet intermediaries would be forced to undertake ex ante monitoring and filtering of third-party content uploaded to their networks.

Today’s guidelines fail to appreciate the reality that while the Internet is a global public sphere that empowers citizens and grows economies, standards of illegality are defined on a country-by-country basis. This creates a major dilemma for Internet service providers (ISPs), as they are simply unable to properly assess the context-dependent legality of content – in this regard it is worth stressing that the overwhelming majority of Internet infrastructure companies in Europe are SMEs.

The need for clear and specific judicial guidance on whether a piece of content is illegal is particularly important in the context of so-called ‘controversial content’, as such content can often be presented in non-local languages and framed in varying political and cultural contexts (e.g. parody, free expression, hate speech, etc.).

Without this judicial clarity, ISPs are trapped between the risk of failing to properly identify illegal content and the risk of engaging in excessive censorship, thus undermining the fundamental rights of their users. The overwhelming majority of citizens use the Internet for its inherently empowering characteristics. And in that context, we must ensure that structures are in place such that ISPs’ efforts to remove illegal content do just that, and not more.

On the flip side, EuroISPA’s call for policy action to ensure that “illegal” content on the Internet is properly policed within a framework of due process (courts etc.) is all well and good, although the historic problem has always been that the courts simply cannot keep up with the volume of content being generated by billions of people.