Fallout from the ugliness on parade in Charlottesville earlier this month is challenging internet free speech.

Last week, we voiced support for congressional legislation aimed at websites such as Backpage that trade in sex-trafficking ads. Backpage has taken advantage of a loophole in federal law that shields the company from legal responsibility for hosting illegal content.

The argument against this legislation, which would close the loophole, is that it could lead internet providers to censor other types of user-provided content.

But that’s already happening.

Post-Charlottesville, a number of internet companies have moved to bar content from “alt-right” groups and their leaders. They say they are under no obligation, legal or otherwise, to provide a digital venue for hate speech distributed by white supremacists, neo-Nazis and their ilk.

And it’s true: private companies such as Facebook have every legal and ethical right to ban certain groups from posting hateful or illegal content. Would anyone argue that Facebook should allow posts by, say, ISIS? The company can set its own rules about how its services and platform will be used, and by whom.

At the same time, is Facebook, which counts nearly a third of the world’s population within its user base, up to the task of deciding what content is acceptable and what is not? One minefield: Company officials have to answer to shareholders who want to see growth in users and profits. Will this relationship affect what Facebook content editors choose to allow? And if the company’s decision to arbitrate free speech angers significant numbers of customers, or advertisers, will that determine the kinds of content allowed?

Then there’s the issue of the internet infrastructure. Days after Charlottesville, web domain registrars GoDaddy and Google separately decided they would no longer support the Daily Stormer, a neo-Nazi site that hosted a story mocking Heather Heyer, the woman who died after being struck by a car while protesting the “alt-right” rally. They’ve also stopped hosting sites put up by individual “alt-right” leaders.

Without the infrastructure, content providers can’t reach an audience.

It’s fair to ask how tech companies, with their vast audiences and almost unlimited wealth, are becoming the arbiters of free speech in America.

Their actions are already spawning a reaction, as right-wing techies are reportedly working on parallel digital platforms where their message can go out unimpeded.

For years, newspapers fought for the free flow of news and opinion outside the influence of advertisers. But these were primarily news venues, not massive, globally reaching technology companies with little training or tradition in free speech issues.

Even the American Civil Liberties Union has drawn fire from progressive supporters after acting in support of the Charlottesville “alt-right” organizers, who were originally denied a permit to gather. After days of criticism, the ACLU came out late last week with a statement saying it would not defend groups that wanted to incite violence or that carried weapons during a march or rally. Three California ACLU affiliates said the organization believes “the First Amendment does not protect people who incite or engage in violence.”

But there’s another danger: if the “alt-right” goes digitally dark on mainstream platforms and social media, hateful opinions uttered through an “alt-internet” would be shielded from opposing viewpoints, and from monitoring.

In a Washington Post story published after Charlottesville, Lee Rowland, an ACLU staff attorney, cautioned consumers against being quick to condemn companies that host even the “most vile white supremacist speech we have seen on display ...”

“We rely on the Internet to hear each other,” Rowland said. “We should all be very thoughtful before we demand that platforms for hateful speech disappear because it does impoverish our conversation and harm our ability to point to evidence for white supremacy and to counter it.”