Bot or Not? Ethical Questions on the Use of AI Writing Bots

If you’ve ever been on the wrong end of a customer service call, then you know how frustrating it can be to talk to a robot. Simply giving your name can cause a panic over the threat of the dreaded “Sorry, could you repeat that?”

Currently, it’s pretty easy to tell when you’re conversing with a robot. But what about when it comes to informative writing, like news reporting via articles and social media? Would you trust a robot with your news? And could you even tell a robot writer from a human one?

Several big-name news outlets—like Bloomberg, Forbes, and the Washington Post—have employed AI writers for years now, using them to cover smaller stories or to complete first drafts for journalists.

This 2020 article from the Guardian, written by a robot explaining its peaceful intentions, generated a hefty amount of buzz on social media. Many might have believed it to be the writing of a human if the robot hadn’t identified itself in the first paragraph.

But critics of the article argue that this robot doesn’t actually understand what it’s saying or how all its points intertwine to form a solid argument. As a deep learning device, the Guardian’s bot is simply mimicking effective writing it’s been spoon-fed, which raises another ethical dilemma: if these bots do not really understand what they’re saying, if they’re simply simulating “good” reporting, can we still trust them with our news?

Financial articles have been written entirely by robots since as early as 2015, because the robots only have to compile numbers into simple sentences. The bot-written prose in this 2017 article from the Associated Press seems to pass the Turing Test. So, if these robots are able to take information and present it in basic human language, what happens when they are fed false information?
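To get a sense of just how mechanical this kind of reporting can be, here is a toy sketch of template-driven generation, the general approach behind early automated financial writing. The company, figures, and template below are invented for illustration; real systems are far more elaborate, but the principle of slotting numbers into prewritten sentences is the same.

```python
# Toy illustration of template-driven news generation: figures are
# slotted into prewritten sentence patterns. All data here is invented.
earnings = {"company": "ExampleCorp", "quarter": "Q3", "eps": 1.42,
            "revenue_m": 310, "estimate_eps": 1.35}

template = (
    "{company} reported {quarter} earnings of ${eps:.2f} per share on "
    "revenue of ${revenue_m} million, {verb} analyst expectations of "
    "${estimate_eps:.2f} per share."
)

# Pick the verb by comparing the reported figure to the estimate.
verb = "beating" if earnings["eps"] > earnings["estimate_eps"] else "missing"
print(template.format(verb=verb, **earnings))
```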

In their article “How Automated Writing Systems Affect the Circulation of Political Information Online,” Timothy Laquintano and Annette Vee detail how deep-learning bots similar to the Guardian’s are able to fabricate believably human social media accounts and then amplify misinformation. Even though the robots may not know what they’re saying, we may still be inclined to believe them.

An essential question we must ask is: how transparent should news outlets be regarding AI writing? The AP’s financial article had a disclaimer at the end, but who really makes it all the way there?

Moreover, we must consider where we draw the line in terms of what AI bots are allowed to write. AI bots like this one are already capable of writing students’ papers for them, while similar systems currently grade papers at universities. If academic writing simply becomes AI graders evaluating AI writers, then what is the point?

Ultimately, we must consider how to ethically integrate AI writers into our writing ecologies, as well as how to preserve the integrity of truth and authenticity in written discourse.

Two white robot hands rest on a white Apple keyboard. Various creases at the joints and tiny screws are visible on the hands.

Siri the Spy: How Surveillance Capitalists Reinforce Sexist Stereotypes and Harvest Our Data Through Voice Assistants

If you’re comforted by the maternal nature of your smartphone’s voice assistant—good. That means it’s working. You’re falling right into her trap.

A wide body of research has exposed the blatant sexism that arises from predominantly “female” voice assistants like Siri and Alexa. The decision alone to use female voices in the subservient, “pink collar” role of voice assistant reinforces sexist stereotypes of women as unskilled, servile laborers.

Of course, these voices are feminine because society wants them that way; research indicates that people prefer female voice assistants due to their nurturing, submissive responses. Ultimately, tech companies allow societal sexism to shape their products, instead of using their platform as a means of rewiring society’s implicit biases.

But why? Well, the fact that women make up only 12% of AI researchers and 6% of software developers certainly doesn’t help. But arguably the largest reason behind sexist voice assistants is (as you can probably guess) money.

Sure, giving a company’s tech products feminine voices that satisfy the masses increases sales. But these comforting, maternal figures have an ulterior—yet still overwhelmingly capitalistic—motive: to pacify us into handing over our sensitive personal data.

Heather Suzanne Woods, in her article “Asking more of Siri and Alexa: feminine persona in service of surveillance capitalism,” demonstrates how these docile, feminine agents that reinforce gender stereotypes constitute an intentional (and successful) attempt by companies to profit off our data. Siri and Alexa lull us to sleep with their motherly charm, getting us to spill our most personal desires and interests without ever thinking twice.

The tech companies behind these products store this data, using it to display personalized advertisements based on your conversations with your cheery female helper. The Netflix documentary The Social Dilemma details how social media sites already harvest our data for personalized ads, so it’s no surprise that our voice assistants do the same. Moreover, these conversations between humans and voice assistants train our devices, teaching them how to placate us even more effectively.

And thus, surveillance capitalism is born. The vast amount of data we “willingly” supply to Siri and Alexa is scrutinized by machine learning technology, eventually forming predictions of our future behavior—AKA when we will be most receptive to those personalized advertisements.

But it doesn’t stop there. Shoshana Zuboff, the Harvard Business School professor who coined the term “surveillance capitalism,” states that surveillance capitalists “learn to tune, herd, and condition our behavior with subtle and subliminal cues, rewards, and punishments that shunt us toward their most profitable outcomes” (read the full interview with Zuboff in The Harvard Gazette). Every suggestion, every notification from our voice assistants has monetary motivations.

Just when we thought we had authority over our voice assistants, it turns out they (thanks to surveillance capitalists) were the ones influencing our actions the whole time.

So, the next time you go to ask Siri for pizza recommendations or to pick Alexa’s brain on tropical getaways, you might want to think about just who else is listening.

The image is divided into two halves. On the left, a black background surrounds a spherical speaker, commonly known as an Amazon Alexa device. A blue ring is lit up around the top edge of the device. On the right side of the image, a blue background surrounds the symbol associated with Apple's voice assistant Siri: a bluish-white, disfigured star, with the brightest white in the middle of the shape. The disfigured star is surrounded by an incomplete pink circle.

The Anti-Social Writing Process

Is it possible to be anti-social in today’s world?

Even when we’re lying in bed being “anti-social,” what are we really doing? Staring at our phones, scrolling through social media? Reading books full of words and messages?

Human writing practices seem to be innately social—we write so that our words may be read.

Marilyn Cooper profoundly comments on this social nature of writing in her essay “The Ecology of Writing.” In it, she rejects the previously postulated cognitive process model of writing as being too internalized and solitary. Instead, she argues that writing occurs within a complex, reciprocal ecology, in which readers shape the process of writing and writers shape that of reading. In other words, this ecological model hinges on social interaction between writers and readers.

But perhaps humans can be a little too social, reading communications they weren’t supposed to. In response to this need for increased privacy, I would argue that an anti-social writing process has arisen: encryption.

Encryption is the scrambling or encoding of data to prevent unauthorized entities from reading it; only the parties with the key to unlock the encryption can interpret the data. Search engines do it all the time to protect our precious data, which they claim not to sell but still manage to monetize and distribute to advertisers. Ohio State physics professor Dan Gauthier created “tamper-proof” encryption for drones, exploiting minute discrepancies in drones’ microchips to make data purportedly impossible to read.

Encryption’s proposed status as an anomalously anti-social writing process stems from the separation between writing data and encrypting it: the data being encrypted already exists in a legible form—it has already been written. Encryption is a subsequent, discrete writing process aimed entirely at making this data unreadable (see the image below).

This image shows the process of encryption. On the very left, a symbol of a blue sheet of paper is labeled "original data" with an arrow pointing to a gray sheet of paper, labeled "scrambled data." In between these two pieces of paper is a symbol of a gray key labeled "public key" under the line of the arrow and the word "encryption," above the line of the arrow. From the right side of the gray paper labeled "scrambled data," another arrow leads to a blue sheet of paper at the right edge of the photo, also labeled "original data." Straddling this second arrow is another key symbol, this one blue and labeled "private key," as well as the word "decryption." In short, the original data is encrypted into scrambled data, which is then decrypted back into the original data using the key.
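For the curious, here is a minimal sketch of the public-key scheme the image describes, using Python’s cryptography library: anyone holding the public key can scramble data, but only the holder of the matching private key can recover it. This illustrates the general technique, not any particular system mentioned above.

```python
# Minimal public-key encryption sketch using the "cryptography" library
# (pip install cryptography). The public key scrambles; only the
# matching private key can unscramble.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()  # safe to share with anyone

# OAEP is the standard padding scheme for RSA encryption.
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

scrambled = public_key.encrypt(b"original data", oaep)   # encryption
original = private_key.decrypt(scrambled, oaep)          # decryption
assert original == b"original data"
```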

But what about other forms of private writing, like diaries or Morse Code? Couldn’t these be considered anti-social writing processes?

As for diaries, it is the societal taboo or expectation of privacy that makes them anti-social, not the physical writing itself; a diarist inevitably enters their writing into the social conversation by creating the possibility of it being read.

With Morse Code, the act of encoding inevitably transcribes the message—you still spell out all the words, just in different symbols (and hence participate in social writing). Encryption, by contrast, comes in after the message has been written and simply scrambles or locks the preexisting data; the text of encryption itself is often a series of algorithms, not a message.
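The distinction is easy to see in code. A Morse-style encoding is a transparent, symbol-for-symbol transcription—the message is still being spelled out, just in another alphabet—unlike the scrambling in the encryption sketch above. A minimal illustration, using only a hand-picked subset of the Morse alphabet:

```python
# Morse encoding is symbol-for-symbol transcription: every letter of
# the message is still "written," just in a different alphabet.
# (Only a small subset of the Morse alphabet is included here.)
MORSE = {"S": "...", "O": "---", "H": "....", "I": ".."}

def to_morse(message: str) -> str:
    return " ".join(MORSE[ch] for ch in message.upper())

print(to_morse("SOS"))  # ... --- ...
```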

Yet, despite the anti-social nature of encryption, it is still reactive and reciprocal, like the writing and reading that take place in Cooper’s ecology. Encryption must adapt as unauthorized “readers” become better at “reading” encrypted data, and these “readers” must also adapt if they want to intercept and interpret encrypted data.

Surely this is not a position Cooper expected to defend—an anti-social writing process infiltrating her social ecology of writing—yet it is an interesting one nonetheless.

Comical Histories and Influences: From Cave Paintings to Viral Tweets

What do a nerd bitten by a radioactive spider, a redheaded football player in a deathly small town, and a galumphing Great Dane all have in common?

Answer: Comics.

Peter Parker (AKA Spider-Man), Riverdale’s Archie Andrews, and the chaotic canine Marmaduke all obtained their origin stories from comic books that have since been adapted into blockbuster movies or bingeworthy TV shows.

Comic books themselves may seem like a niche market nowadays, but their influence surrounds us in ways we may not have previously considered. Countless pop culture staples across the world—such as anime, graphic novels, and even Twitter—borrow techniques from comics (to see the world’s largest collection of cartoons and comics, visit The Billy Ireland Cartoon Library and Museum at The Ohio State University).

But where did comics come from?

Comics as we know them seem to have originated in 19th-century Europe, but their lineage can be traced all the way back to ancient times. American cartoonist and comics theorist Scott McCloud gave a fascinating lecture at Harvard University in which he discussed the various histories and debates surrounding comics.

McCloud pointed to several early examples as ancient influences on comics. In cave paintings predating 6,000 BCE, Egyptians depicted the world around them in drawings of donkeys and people. Researchers have likewise been fascinated by the Codex Borbonicus, a 500-year-old Mesoamerican document that used pictures to describe cycles of the Aztec calendar.

Although they lack the words of contemporary comics, these examples rely heavily on images to drive a narrative. This appreciation for visual storytelling has persisted to the present day, when TV and movies dominate mainstream entertainment and even the text-driven narratives of popular books are routinely translated into screen adaptations. Throughout history, despite advancements in how we compose visual media, our fondness for it has remained the same.

Speaking on the linear, continuous narratives of these ancient comics, McCloud was fascinated by how “the story determined the shape.” By contrast, of the characteristic squares neatly confining newspaper comics, McCloud argued that “technology [in the 1990s] was determining the shape.” He voiced excitement about how comics would evolve with advancing technology.

Comic influences can now be seen on Twitter, where some of the most viral Tweets capitalize on the affordance of combining images and text (check out Katherine Everett’s post detailing how Twitter changed the way we write).

Some feature a caption in the text portion of the Tweet, followed by a series of images, like this one. Others feature screen grabs of movies or TV, with the captions included on the image. These Tweets mimic the structure of conventional newspaper comics, and some even appear to be straight out of a comic book, like the one on the right.

Perhaps comics did not evolve the way McCloud hoped, returning to squares of pictures and text instead of flowing, boxless stories. But maybe this is because we have movies and television to satisfy our desire for linear narratives, and thus we are content with keeping comics in the little boxes.