If you’ve ever been on the wrong end of a customer service call, then you know how frustrating it can be to talk to a robot. Simply giving your name can trigger a panic over the threat of the dreaded “Sorry, could you repeat that?”
Currently, it’s pretty easy to tell when you’re conversing with a robot. But what about when it comes to informative writing, like news reporting via articles and social media? Would you trust a robot with your news? And could you even tell a robot writer from a human one?
Several big-name news outlets, including Bloomberg, Forbes, and the Washington Post, have been employing AI writers for years now to cover less prominent stories or to produce first drafts for journalists.
This 2020 article from the Guardian, written by a robot explaining its peaceful intentions, generated a hefty amount of buzz on social media. Many readers might have believed it was written by a human had the robot not identified itself in the first paragraph.
But critics of the article argue that this robot doesn’t actually understand what it’s saying or how all its points intertwine to form a solid argument. As a deep-learning system, the Guardian’s bot is simply mimicking the effective writing it has been spoon-fed, which raises another ethical dilemma: if these bots don’t really understand what they’re saying, if they’re simply simulating “good” reporting, can we still trust them with our news?
Financial articles have been written entirely by robots since as early as 2015, largely because the robots only have to slot numbers into simple sentences. The bot-written prose in this 2017 article from the Associated Press seems to pass the Turing Test. So, if these robots can take in information and present it in plain human language, what happens when they are fed false information?
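To see just how little “understanding” this kind of reporting requires, here is a minimal, purely hypothetical sketch of template-based earnings writing. The company, figures, and wording are invented for illustration and do not reflect the AP’s or any outlet’s actual system.

```python
# Hypothetical sketch: template-based "robot journalism" for an earnings report.
# The figures and phrasing below are invented; real systems are more elaborate,
# but the core move is the same: slot numbers into canned sentences.

def earnings_sentence(company: str, quarter: str, revenue_m: float, prior_m: float) -> str:
    change = (revenue_m - prior_m) / prior_m * 100          # year-over-year change, in percent
    direction = "rose" if change >= 0 else "fell"
    return (
        f"{company} reported revenue of ${revenue_m:.1f} million for {quarter}, "
        f"which {direction} {abs(change):.1f}% from the prior year."
    )

# Example with made-up numbers:
print(earnings_sentence("Acme Corp", "Q3 2017", 125.4, 118.2))
# -> "Acme Corp reported revenue of $125.4 million for Q3 2017, which rose 6.1% from the prior year."
```

Notice that the sentence reads as perfectly competent reporting, yet nothing in the code checks whether the numbers it was handed are true.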
In their article “How Automated Writing Systems Affect the Circulation of Political Information Online,” Timothy Laquintano and Annette Vee detail how deep-learning bots similar to the Guardian’s are able to fabricate believably human social media accounts and then amplify misinformation. Even though the robots may not know what they’re saying, we may still be inclined to believe them.
An essential question we must ask is: how transparent should news outlets be about AI writing? The AP’s financial article carried a disclaimer at the end, but how many readers make it all the way there?
Moreover, we must consider where we draw the line in terms of what AI bots are allowed to write. AI bots like this one are already capable of writing students’ papers for them, while similar systems currently grade papers at universities. If academic writing simply becomes AI graders evaluating AI writers, then what is the point?
Ultimately, we must consider how to ethically integrate AI writers into our writing ecologies, as well as how to preserve the integrity of truth and authenticity in written discourse.