Internet firms’ internal rules and enforcement practices can be troublingly opaque and prone to arbitrary interpretation.

This week’s march of white supremacists in Charlottesville, Va., which culminated in the killing of civil-rights activist Heather Heyer by an apparent Nazi sympathizer, brought long-simmering U.S. racist politics into public view.

In doing so, it also raised the question of how we as a society should deal with hate speech, as well as the extent to which internet companies are becoming de facto regulators of online activity.

Following Heyer’s death and facing a public backlash, GoDaddy withdrew domain services from the Daily Stormer, a white supremacist website, saying a post mocking her death violated GoDaddy’s terms of service.

Google, which provided replacement domain services to the website, quickly followed suit. The Daily Stormer has since moved to the dark web, where it is more difficult to identify — and thereby pressure — its service providers.

Relying on internet companies to act as regulators is appealing because they can move swiftly and decisively to push undesirable actors and content out of most people’s sight. When their actions target groups as dangerous and reprehensible as white supremacists, it is tempting simply to cheer this as a victory.

However, it also raises difficult questions about the extent to which we as a society are increasingly relying upon a handful of major internet companies to police a broad array of social problems, not all of which rise to the level of violent hate speech.

GoDaddy’s and Google’s actions are significant because they are not an isolated example of large internet firms targeting bad actors. Rather, they reflect a growing trend of large, mostly U.S. internet companies acting as global regulators of online activity, raising concerns about the accountability and arbitrariness of their decisions.

The appeal of relying on these companies to police bad behaviour is obvious. They have become go-to regulators for legislators around the world because their terms-of-service agreements give them wide latitude to remove any speech or ban any user they deem in violation of their rules.

Because these companies can work through their terms of service, government officials are calling upon them to address social problems ranging from illegal gambling and copyright infringement to child sexual abuse content, hate speech and “fake news.”

Crucially, however, in many cases these internet firms are removing content and terminating their services without any specific legislative requirement; that is, they act “voluntarily,” and without any judicial process.

The Electronic Frontier Foundation, a U.S.-based digital-rights group, has critiqued such efforts as shadow regulation that can have the force of law, but not its transparency or accountability.

The violence in Charlottesville demonstrates again that companies respond to public criticism, especially in high-profile cases. But is rule by public protest, which at its worst is mob mentality, with one website or group arbitrarily targeted while another is overlooked, how we want to govern the internet?

Industry-led enforcement campaigns also often lack rigorous accountability measures. Devolving enforcement responsibility to internet companies is useful for governments wishing to sidestep public demands for regulatory oversight. However, internet firms’ internal rules and enforcement practices can be troublingly opaque and prone to arbitrary interpretation.

Facebook’s leaked rules reveal the company’s complex processes for classifying content as hate speech and highlight its dependence on overworked, underpaid content moderators who have only seconds to flag objectionable material. As a result, internet firms mistakenly remove lawful, inoffensive content.

While we may welcome enforcement action against violent hate speech, we should recognize that internet companies have too often acted to stifle peaceful, inoffensive speech criticizing governments and law enforcement.

Rules first enacted against the most reprehensible behaviour — terrorism, child sexual abuse, and hate speech — are often expanded to target other forms of speech. What will we do when the censors come for controversial or confronting speech that we support?

To be absolutely clear, I do not support white supremacists. My argument here is that there is a broader role for government to play in determining how content and behaviour on the internet should be regulated and by whom.

There is also a critical role for public debate to determine how the internet should be governed. Simply off-loading responsibility to companies like Google and GoDaddy to react to public pressure may have gotten the job done in this specific case, but in the longer term, it represents a troubling, potentially dangerous policy choice.