Automated misinformation

In “How Automated Writing Systems Affect the Circulation of Political Information Online,” Timothy Laquintano and Annette Vee survey the online ecosystem of “fake news.” Writing in 2017, Laquintano and Vee concentrate on how fake news affected discourse surrounding the 2016 US presidential election. The authors’ concern about misinformation driven by automated writing systems now reads as prescient of the horrific events at the US Capitol on January 6, 2021.

After Trump supporters violently stormed the US Capitol building on January 6, ten social media platforms temporarily or permanently banned accounts owned by the former president. Twitter explained its permanent suspension of @realDonaldTrump by saying, “we have permanently suspended the account due to the risk of further incitement of violence.”

Since then, the CEOs of giant tech companies like Facebook, Twitter, and Google have faced pressure from lawmakers and the public over their responsibility for mediating misinformation.

Sundar Pichai (Alphabet/Google), Mark Zuckerberg (Facebook), and Jack Dorsey (Twitter) testify virtually before Congress

Currently, these companies are shielded from liability for what is posted on their platforms by Section 230 of the Communications Decency Act of 1996. Section 230—which was enacted before the invention of Google—protects websites from being held liable for content posted by third-party users.

According to Sundar Pichai, the chief executive of Alphabet, “Without Section 230, platforms would either over-filter content or not be able to filter content at all.”

This contested editorial ecosystem is at the heart of Laquintano and Vee’s 2017 article. The authors observe a shift from human-editorial writing practices to software-based algorithms that influence how information circulates. This shift becomes problematic because social media and tech companies prioritize user engagement.

Laquintano and Vee explain that these companies profit from user engagement through algorithms that curate content for individual users in an attempt to maximize their screen time.
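To make that mechanism concrete, here is a deliberately simplified, hypothetical Python sketch of engagement-driven curation. The function and data are illustrative inventions, not any platform’s actual code: posts are scored by how closely their topics match what a user has clicked on before, so the feed narrows toward content the user already engages with.

```python
# Hypothetical sketch of engagement-driven curation (not any platform's real
# algorithm): posts that match a user's past clicks are ranked higher, so the
# feed drifts toward more of the same -- the "filter bubble" effect.

from collections import Counter

def rank_feed(posts, engagement_history):
    """Rank posts by overlap with topics the user has engaged with before.

    posts: list of dicts like {"id": 1, "topics": {"election", "memes"}}
    engagement_history: iterable of topic strings the user previously clicked
    """
    topic_weights = Counter(engagement_history)

    def score(post):
        # Sum the user's past-click counts for each of the post's topics.
        return sum(topic_weights[topic] for topic in post["topics"])

    return sorted(posts, key=score, reverse=True)

# Example: a user who has mostly clicked partisan posts sees them ranked first.
history = ["partisan", "partisan", "sports"]
posts = [
    {"id": 1, "topics": {"partisan"}},
    {"id": 2, "topics": {"sports"}},
    {"id": 3, "topics": {"science"}},
]
print([p["id"] for p in rank_feed(posts, history)])  # -> [1, 2, 3]
```

Even in this toy version, the feedback loop is visible: whatever the user engages with most gets shown most, which in turn shapes what the user engages with next.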

Previously on this blog, Christa Teston examined the material conditions that enable the online spread of information. I add that algorithmic “filter bubbles” created by social media and tech companies are another factor that threatens public well-being by amplifying misinformation online.

The January 6 insurrection was an overt example of the dangers of the current online writing ecology, though less publicized victims of online misinformation continue to accumulate. Accordingly, Section 230 has become a contentious piece of legislation in the US, and both sides of the aisle appear open to discussing its revision, albeit for different reasons.
