Bots to Bring Doom to Democracy or a New Song for the Same Old Dance?

Twenty million active Twitter accounts are fake. Twenty million opinions, retweets, and participants in political movements are fake. And that figure comes from just one article; it seems likely there are many more. According to the scholars Laquintano and Vee, between a quarter and a third of the accounts supporting Donald Trump or Hillary Clinton in 2016 were bots.

Example of an Obvious Bot Tweet

So, the simple conclusion is that truth is doomed, right? How can the millions who get their news and political opinions from the "unbiased and democratic" Twitter expect to make informed voting decisions if what they are engaging in is not civil discourse but capitalistic vote manipulation, carried out through conversations with Twitter bots?

Well, the truth is that the cards have long been stacked against voters. Capitalism has always had a heavy hand in politics. From the very beginning, voting itself was restricted to those with a large economic stake in America: white male landowners. As voting rights expanded, more and more methods of influencing less affluent voters developed. The most obvious are advertisements, from newspaper spreads to radio spots, along with large donations to politicians' campaigns.

Political Advertising During the Great Depression

Today, however, this manipulation is more subtle than ever. According to Laquintano and Vee, many of the bots used to sway political opinion succeed either by passing a Turing test or through sheer numbers. In other words, bots can pass as real humans in the eyes of unknowing humans. And even when a bot is discoverable, there are often so many of them that discovering one is inconsequential to the movement it serves.

So, manipulation has always existed in American democracy. The only difference is that now it is not obvious where it is coming from. Further subtle manipulation may even reach vote counting itself, as some sources have alleged Russian tampering with vote counts in the 2020 presidential election.

Roughly 90% of American news outlets are owned by just six companies. If anyone remembers the musical Newsies, it would take only collusion between those six firms for major social justice issues or pieces of news to go unreported and unnoticed by Americans. Not to mention that if these firms ever decided to cover an issue in a particular way to sway votes, they could shift the views of most of the country.

Joseph Pulitzer: Villain of Newsies, Who Colluded With Other Newspapers to Stop the Newsies' Strike

So, that about covers it. News outlets, social media, and even voting itself may be shot. Future historians of political literature will undoubtedly have an extremely difficult time deciding which news articles, tweets, and even social movements were entirely fashioned by capitalistic stakes in politics. So, yes, we are doomed, but only as doomed as America's political system has always been.

All this means for us is that the literature of past political movements was a bit more genuine. Today, political literature is less about fairness or equality than about lining the pockets of interested parties in ways we do not yet fully understand. So, how do we stay sane? Unplug. I have never heard anyone tell me that reading political news made them happy; from experience, I can say only the opposite is true. As long as America can make someone richer than us a buck, our system will keep working and we will have bread on the table.

Millet, Angelus: A Couple Prays in Thanksgiving for Their Day's Work and Harvest Through the Angelus Prayer, Evocative of the American Ethos

Are Bots Brainwashing Us?

Hey everyone,

If you’ve been closely following politics and reading the news over the past four years, you’ve probably at least heard of bots. But what exactly are they? And why do they matter?

A bot is an autonomous program on the internet or another network that can interact with systems and users. A bot can be programmed to do all sorts of things, like posting tweets about specific subjects on Twitter at a specific time each day. A bot network, or "botnet," is a group of these bots that work in concert with one another at the behest of whoever programmed them.

Bot network
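To make the idea concrete, here's a minimal sketch of what a single bot in such a network might look like. It assumes the tweepy Python library and placeholder API credentials; the canned messages and the once-a-day schedule are invented purely for illustration.

```python
# Minimal sketch of one bot in a botnet, assuming the tweepy library (v4+)
# and placeholder Twitter API credentials. The messages and posting hour
# are invented for illustration; a real botnet runs many of these at once.
import time
import tweepy

client = tweepy.Client(
    consumer_key="CONSUMER_KEY",            # placeholder credentials
    consumer_secret="CONSUMER_SECRET",
    access_token="ACCESS_TOKEN",
    access_token_secret="ACCESS_TOKEN_SECRET",
)

MESSAGES = [
    "Candidate X is the only honest choice. #Election",
    "Why is nobody talking about Candidate Y's record? #Election",
]

def post_daily(messages, hour=9):
    """Post one canned message per day at roughly the given local hour."""
    i = 0
    while True:
        if time.localtime().tm_hour == hour:
            client.create_tweet(text=messages[i % len(messages)])
            i += 1
            time.sleep(60 * 60)   # sleep past the posting hour
        time.sleep(60)            # otherwise, check the clock every minute

# post_daily(MESSAGES)  # uncomment to start the loop
```

The point isn't the few lines of Python; it's that once the writing is computational, one programmer can run thousands of these accounts at once with almost no extra effort.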

What's worrisome about these botnets is that they're becoming shockingly realistic and harder to distinguish from real human users, whose views are in turn being influenced by bots employed by dishonest political actors both foreign and domestic. It is widely agreed among reputable sources that the past two U.S. elections (the 2016 election in particular) were heavily influenced by botnets designed to manipulate public opinion. Unwitting social media users are bombarded with dishonest propaganda from these botnets on a daily basis.

In summary, make sure you're getting your info from real people! If you want to know more about this subject, Timothy Laquintano and Annette Vee have done an invaluable in-depth study that it would behoove us all to read.

Just a friendly heads up for all you political junkies out there. Peace!

Bot or Not? Ethical Questions on the Use of AI Writing Bots

If you’ve ever been on the wrong end of a customer service call, then you know how frustrating it can be to talk to a robot. Simply giving one’s name can cause a panic over the threat of the dreaded, “Sorry, could you repeat that?”

Currently, it’s pretty easy to tell when you’re conversing with a robot. But what about when it comes to informative writing, like news reporting via articles and social media? Would you trust a robot with your news? And could you even tell a robot writer from a human one?

Several big-name news outlets, like Bloomberg, Forbes, and the Washington Post, have been employing AI writers for years now to cover routine, less important stories or to complete first drafts for journalists.

This 2020 article from the Guardian, written by a robot explaining its peaceful intentions, generated a hefty amount of buzz on social media. Many might have believed it to be the writing of a human had the robot not identified itself in the first paragraph.

But critics of the article argue that this robot doesn’t actually understand what it’s saying or how all its points intertwine to form a solid argument. As a deep learning device, the Guardian’s bot is simply mimicking effective writing it’s been spoon-fed, which raises another ethical dilemma: if these bots do not really understand what they’re saying, if they’re simply simulating “good” reporting, can we still trust them with our news?
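For a rough sense of what that mimicry looks like in practice, here is a sketch using the openly available GPT-2 model through Hugging Face's transformers library. (The Guardian's essay was produced with the larger GPT-3, which is not shown here; the prompt below is invented for illustration.)

```python
# Sketch of language-model "mimicry": the model continues a prompt by
# predicting statistically likely next words, with no understanding of
# the argument it appears to be making. Uses the public GPT-2 model via
# Hugging Face's transformers library; the prompt is invented.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "I am not a human. I am a robot. My mission is peaceful, because"
result = generator(prompt, max_new_tokens=50, num_return_sequences=1)
print(result[0]["generated_text"])
```

Whatever comes out will usually be fluent and often plausible, which is exactly the problem: fluency is not understanding.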

Financial articles have been written entirely by robots since as early as 2015, because the robots only have to compile numbers into simple sentences. The bot writing in this 2017 article from the Associated Press seems to pass the Turing Test. So, if these robots are able to take information and present it in basic human language, what happens when they are fed false information?
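The template-filling behind those earnings stories is, at its core, about this simple. The sketch below is not the AP's actual system (the AP relied on a commercial product, Automated Insights' Wordsmith); the company name and the figures are made up.

```python
# Sketch of template-based "robot journalism": drop structured numbers
# into a canned sentence. The company and figures below are invented;
# real systems draw on much larger template libraries.
def earnings_blurb(company, quarter, revenue_m, eps, eps_forecast):
    beat = "beat" if eps > eps_forecast else "missed"
    return (
        f"{company} reported {quarter} revenue of ${revenue_m:.0f} million "
        f"and earnings of ${eps:.2f} per share, which {beat} analysts' "
        f"forecast of ${eps_forecast:.2f}."
    )

print(earnings_blurb("Acme Corp", "Q3", 412, 1.37, 1.25))
# -> Acme Corp reported Q3 revenue of $412 million and earnings of $1.37
#    per share, which beat analysts' forecast of $1.25.
```

As long as the numbers fed in are accurate, so is the article; if the input is false, the prose reads just as confidently.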

In their article “How Automated Writing Systems Affect the Circulation of Political Information Online,” Timothy Laquintano and Annette Vee detail how deep-learning bots similar to the Guardian’s are able to fabricate believably human social media accounts and then amplify misinformation. Even though the robots may not know what they’re saying, we may be susceptible to believe them.

An essential question we must ask is: how transparent should news outlets be regarding AI writing? The AP's financial article had a disclaimer at the end, but who really makes it all the way to the end?

Moreover, we must consider where we draw the line in terms of what AI bots are allowed to write. AI bots like this one are already capable of writing students' papers for them, while similar systems currently grade papers at universities. If academic writing simply becomes AI graders evaluating AI writers, then what is the point?

Ultimately, we must consider how to ethically integrate AI writers into our writing ecologies, as well as how to preserve the integrity of truth and authenticity in written discourse.

Two white robot hands rest on a white Apple keyboard. Various creases at the joints and tiny screws are visible on the hands.

Meme Weaponization & the Future of Warfare

It sounds silly, but memes might be the future of warfare. 

No really—disinformation online is a global concern with real-world impacts. Memes are just another weapon on the digital battlefield. 

I guess it’s not entirely correct to say that memetic warfare is a thing of the future. Because, well, it’s already happening.

Disinformation Kill Chain and Response Framework from Department of Homeland Security https://www.dhs.gov/sites/default/files/publications/ia/ia_combatting-targeted-disinformation-campaigns.pdf

Political memes shaped the 2016 presidential election; hate groups love hijacking memes and appropriating them into hate symbols; ASPI discussed the use of memes as propaganda for extremist movements in their Counterterrorism Yearbook 2021; and NATO has repeatedly acknowledged the burgeoning threat that information warfare poses (most notably here).

Memes have power. And bad actors are abusing them.

What is it that makes memes so damn easy to weaponize? Why are they this effective at spreading disinformation and influencing human behavior? 

It's probably too complicated for me to address in a succinct and comprehensive way. But I can say that speed and audience size are big factors.

Here’s the super-mega-ultra abridged version:

Troll factories, bots, and fake news all play a role in memetic warfare. 

As many of you already know, bots can reach a wide audience and require little time and effort from humans to do it. Timothy Laquintano and Annette Vee put it best.

“Although social networks and online forums, where much of public discourse now takes place, enable greater access to participation for everyday writers…the current scene includes more aggressive intervention by nonhuman actors, such as bots, that generate writing. Humans are, of course, usually responsible for authoring the computational processes that generate writing…, but by making certain aspects of online writing computational, human authors can typically operate with greater speed, scale, and autonomy.”

Humans participate in propaganda, espionage, and the like. This isn’t new, certainly not to warfare. Instead of the traditional places, though, you can now find these dehumanizing tactics in memes. And it’s precisely because bots are so good at what they do.

Experiencing Bots in Everyday Life

The world we live in today relies heavily on digital media and technology, and the digital era has only become more prominent amid the COVID-19 pandemic. Humans now depend on technology to work, to learn, to read, and to be entertained more than ever before.

Artificial Intelligence (AI) exists all around us, even if we do not always realize it. iOS developer and computer science graduate Ilija Mihajlovic discusses the impact bots have on our everyday lives in his article How Artificial Intelligence Is Impacting Our Everyday Lives. He states, “AI assists in every area of our lives, whether we’re trying to read emails, get driving directions, get music or movie recommendations” (Mihajlovic, 2).

One place we experience bots in daily life is through digital assistants. First popularized on the iPhone as the well-known AI Siri, digital assistants have since been created for various platforms: Alexa, Google Now, Microsoft’s Cortana, and more. While these digital assistants run on different sorts of devices, they all serve the same purpose: to assist.

Annoying as they are, most people have likely encountered, at least once, the security measures required to enter many websites. Sites that use these types of security checks frame them with the question, “Are you a robot?” We take the tests to prove to the robots running the sites that we are not robots.

Image credit: HackTX 2018 Puzzle 3: Shopping Cart, by Florian Janke (Medium)

Some of the different security tests we encounter include the image challenge provided above, where you have to choose all the cats; a combination of letters and numbers you have to type into an answer box; or simply a check box with the phrase “I’m not a robot” beside it. While it has been said that bots may be grading and writing first drafts of our papers (according to McKee & Porter), they supposedly still cannot tell a cat from a dog.
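For the curious, the “I’m not a robot” checkbox usually has a server-side half that looks roughly like this. The sketch assumes Google’s reCAPTCHA siteverify endpoint and Python’s requests library; the secret key is a placeholder, and the token would come from the widget in the browser.

```python
# Sketch of the server side of an "I'm not a robot" check, assuming
# Google's reCAPTCHA siteverify endpoint and the requests library.
# The secret key is a placeholder; the token comes from the browser widget.
import requests

RECAPTCHA_SECRET = "YOUR_SECRET_KEY"  # placeholder

def is_probably_human(token: str) -> bool:
    """Ask the siteverify API whether the submitted CAPTCHA token is valid."""
    resp = requests.post(
        "https://www.google.com/recaptcha/api/siteverify",
        data={"secret": RECAPTCHA_SECRET, "response": token},
        timeout=5,
    )
    return resp.json().get("success", False)
```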

As our world continues to make technological advances, the use of bots in our everyday lives will become more substantial. Heidi McKee and Jim Porter discuss the role that bots and AIs will take in the near future in their article The Impact of AI on Writing and Writing Instruction. Bots will eventually be used in the workplace and in our classrooms, but the question is: is it such a bad thing for our society to rely on bots as we go about our day?