No one wants murder videos on Facebook. But no one wants Facebook to censor their baby videos, either. Technology isn’t ready to step in and tell the difference. So what are the legal options for stopping videos like the appalling killing uploaded last week from hitting Facebook? None of them will be easy for Americans to swallow.

The country could regulate Facebook like it does traditional broadcasters and media by holding the company accountable for the content it distributes. Or legislators could create new laws to criminalize the amplification of violence on social media. But doing so could have drastic repercussions, impinging on freedom of expression online and discouraging the dissemination of videos that shine a light on atrocities, among other things. The entire internet would change.

“We want a free and open internet, and we want a space that we aren’t paying a subscription for,” says Kate Coyer, a fellow at Harvard’s Berkman Klein Center for Internet and Society and an expert on online extremism. “But we also don’t want to encounter some of the worst elements of humanity on there. … At a certain point we may have to make a compromise.”

This most recent video is proof that technology moves fast, but not fast enough to preempt the ways humans will use and abuse it. Unfortunately, neither can that slow bastion of humanity, the law. It’s a game of catch-up, and attempts at creating real safeguards remain far behind.

Back in the salad days of the internet, lawmakers saw the problems presented by the ease and ubiquity of this new medium approaching fast. In 1996, the internet was a promise wrapped in a mystery inside a tangle of cords and cables. “Congress looked and said oh god this is going to be a child porn nightmare,” says law professor Mary Anne Franks, an expert in free speech and digital rights at the University of Miami. So they drafted the Communications Decency Act, a sweeping law against obscenity and child pornography. Free speech advocates and companies pushed back, which resulted in the creation of Section 230 of the Act.

“Section 230 was created as a shield for online intermediaries to avoid liability for regulating harmful content. It is controversial because it now mostly operates as a sword to allow online intermediaries to avoid liability for not regulating harmful content,” says Franks. “It started out as a way to try to put some regulation on the internet, and it ended up becoming the thing that keeps us from being able to regulate the internet.”

The law now serves as the backbone of the open internet. Companies like Facebook don’t have to worry about being sued for something you do on their sites, even if what you do is murder someone. The clear exceptions to that immunity carved out by the courts are child pornography and copyright infringement; the law still holds tech platforms accountable for those ills. But in the most common interpretation of the statute, everything else is outside the scope of culpability.

The Legal Options

Still, if Americans decided that never seeing murder on Facebook trumped the freedom to upload anything, they could push for legal and regulatory steps. But each is riddled with downsides.

Lawmakers could amend Section 230 to expand companies’ liability. If the law designated Facebook a content developer like a traditional media company, rather than an intermediary through which others post content, it could face penalties itself for what people put on its site. That’s in the same spirit as a law introduced in Germany to fine social media platforms for allowing hate speech or violent content to remain on their sites.

If Facebook feared legal liability for videos of murder, it would have a much greater incentive to take them down. That threat might have helped last week, but it would also encourage companies to err on the side of caution, taking down any and every video they feared could be even a potential violation. Such a stance would have a terrible chilling effect on internet freedom. Witness videos of police brutality or images of war protests could be lost in exchange for a perfectly anodyne Facebook timeline. When state attorneys general in 2013 asked Congress to amend the law in this way, civil libertarians objected. The American Civil Liberties Union wrote, “Their misguided proposal threatens to undermine the legal regime that has allowed speech to flourish.”

Franks says there’s no need to go that far. Rather, she argues courts could interpret the existing statute more broadly, as they have done in individual cases. In Fair Housing Council v. Roommates.com, for instance, the site gave users a drop-down menu that enabled them to easily racially profile roommates. The court found that the site had no Section 230 immunity as a result. Though the cases are not analogous, Franks would like to see courts be more willing to find that when an online intermediary “does more than just provide a platform for unlawful content—when it actively and knowingly promotes or solicits such content—that they would recognize that Section 230(c) does not apply.”

“I wish courts would get a little nervier about this,” she says, though she is clear that she is not suggesting that merely offering a product such as Facebook Live makes Facebook liable for its content.

In an op-ed, lawyer and CNN analyst Danny Cevallos suggested a third approach: the US could draft a law specifically criminalizing the act of uploading video of oneself committing a murder. On its face the idea is tantalizingly appealing: rather than force Facebook to take these videos down, it would discourage criminals from uploading them. If they tried to amplify their crime by posting it on social media, the penalty would be more severe. Cevallos argues that, if crafted carefully, such a law would not violate the First Amendment, and would be similar to existing laws that criminalize profiting from a crime.

But even if it were strictly constitutional, the law would run the risk of criminalizing witness and victim videos, like the one Diamond Reynolds uploaded of Philando Castile’s shooting by police. That would be a tragedy on a grand scale.

“I would not want a victim who tried to post video in order to find or help prosecute assailants to be further victimized or threatened for making that posting,” says attorney Wendy Seltzer, an expert in free expression online. Any such law would need to be tailored narrowly to apply only to perpetrators, not to victims, bystanders, or news organizations. Maybe that’s possible, but it’s asking lawmakers not only to draft a nuanced statute that perfectly reflects the technological realities of 2017, but one that would also apply nimbly and safely as technology continues to advance. It’s a tall order. And it might not even work. Someone uploading a video of themselves killing a person is probably not going to be dissuaded by the thought that doing so may add a few years to their sentence.

And that really gets at the heart of the dilemma: The vast majority of people won’t ever upload a crime to Facebook, but any regulation drafted to stop the few who do will completely change the internet for everyone. Ultimately, you and every other citizen will have to decide if the trade-off is worth it.

source: https://www.wired.com/2017/04/face-l...cebook-either/