
by Joe Mullin, Activist Post:

On the biggest internet platforms, content moderation is bad and getting worse. It’s difficult to get it right, and at the scale of millions or billions of users, it may be impossible. It’s hard enough for humans to sift between spam, illegal content, and offensive but legal speech. Bots and AI have also failed to rise to the task.

So, it’s inevitable that services make mistakes—removing users’ speech that does not violate their policies, or terminating users’ accounts with no explanation or opportunity to appeal. And inconsistent moderation often falls hardest on oppressed groups.

The dominance of a handful of online platforms like Facebook, YouTube, and Twitter increases the impact of their content moderation decisions and mistakes on internet users’ ability to speak, organize, and participate online. Bad content moderation is a real problem that harms internet users.


There’s no perfect solution to this issue. But U.S. lawmakers seem enamored with trying to force platforms to follow a government-mandated editorial line: host this type of speech, take down this other type of speech. In Congressional hearing after hearing, lawmakers have hammered executives of the largest companies over what content stayed up and what went down. The hearings ignored smaller platforms and services that could be harmed or destroyed by many of the proposed new internet regulations.

Lawmakers also largely ignored worthwhile efforts to address the outsized influence of the largest online services—like legislation supporting privacy, competition, and interoperability. Instead, in 2021, many lawmakers decided that they themselves would be the best content moderators. So EFF fought off, and is continuing to fight off, repeated government attempts to undermine free expression online.

The Best Content Moderators Don’t Come From Congress

It’s a well-established part of internet law that individual users are responsible for their own speech online. Users and the platforms distributing users’ speech are generally not responsible for the speech of others. These principles are embodied in a key internet law, 47 U.S.C. § 230 (“Section 230”), which prevents online platforms from being held liable for most lawsuits relating to their users’ speech. The law applies to small blogs and websites, to users who republish others’ speech, and to the biggest platforms.

In Congress, lawmakers have introduced a series of bills that suggest online content moderation will be improved by removing these legal protections. Of course, it’s not clear how a barrage of expensive lawsuits targeting platforms will improve online discourse. In fact, having to potentially litigate every content moderation decision will make hosting online speech prohibitively expensive, meaning that there will be strong incentives to censor user speech whenever anyone complains. Anyone that isn’t a Google or a Facebook will have a very hard time affording to run a legally compliant website that hosts user content.

Nevertheless, we saw bill after bill that actively sought to increase the number of lawsuits over online speech. In February, a group of Democratic senators took a shotgun-like approach to undermining internet law with the SAFE Tech Act. This bill would have stripped Section 230 protections from any speech for which “the provider or user has accepted payment” to create. If it had passed, SAFE Tech would have both increased censorship and hurt data privacy, as more online providers switched to invasive advertising and away from “accepting payment,” which would cause them to lose protections.

The following month, we saw the introduction of a revised PACT Act. Like the SAFE Tech Act, the PACT Act would reward platforms for over-censoring user speech. The bill would require a “notice and takedown” system in which platforms remove user speech when a requestor provides a judicial order finding that the content is illegal. That sounds reasonable on its face, but the PACT Act failed to provide safeguards and would have allowed would-be censors to delete speech they don’t like by obtaining preliminary or default judgments.

The PACT Act would also mandate certain types of transparency reporting, an idea that we expect to see come back next year. While we support voluntary transparency reporting (in fact, it’s a key plank of the Santa Clara Principles), we don’t support mandated reporting that’s backed by federal law enforcement, or the threat of losing Section 230’s protections. Besides being bad policy, these regulations would intrude on services’ First Amendment rights.

Last but not least, later in the year we grappled with the Justice Against Malicious Algorithms Act, or JAMA Act. This bill’s authors blamed problematic online content on a new mathematical bogeyman: “personalized recommendations.” The JAMA Act would remove Section 230 protections from platforms that use a vaguely defined “personal algorithm” to suggest third-party content. JAMA would make it almost impossible for a service to know what kind of content curation might render it susceptible to lawsuits.

Read More @ ActivistPost.com
