Wait… you said who wrote this?!

Technology has advanced enormously over the last 50 years, from two gigabytes of storage being a milestone to a terabyte hard drive sometimes not being enough. While these advancements have assisted many fields such as medicine and programming, technology has also slowly eased its way into academia and, in particular, writing.

Centers of Technology: The Future Is Now

The entertainment industry has shown us nothing but dystopian visions of a future where AI has been integrated into society, in shows such as “Black Mirror” and “Love, Death and Robots.” It feels uneasy and unnerving at times that, while we watch these shows for entertainment, there could be some truth to them.

Now, of course, we have the common AIs that are seen as convenient, like Siri and her sisters Alexa and Cortana. But they are nowhere near the level of the complex AIs being made that could essentially replace human tasks.

In the world of academia, writing bots and AI-based teachers are starting to replace tasks humans do. “The Impact of AI on Writing and Writing Instruction” by Heidi McKee and Jim Porter covers this topic in depth.

“For example, x.ai’s personal assistant scheduling bot, Amy/Andrew Ingram, in the context of email about meetings, is very often mistaken as human. In fact, some email correspondents have even flirted with Amy Ingram and sent her flowers and chocolates. Some poetry writing bots are already informally passing the Turing Test.”

This is just one of many examples of AI becoming so lifelike that it could become hard to differentiate its writing from a human’s.

In an article written by a super AI called GPT-3, we see a small sample of just how complex AI writers can be.


“The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.”

“For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.”

This writing does seem a little odd because it feels as though a human wrote it in complete irony, and if an AI did write it, that is off-putting as well. But the very fact that one may not be sold on the idea of this being written by an AI just goes to prove how advanced these systems are becoming.

This brings up the ultimate question of AI’s impact on literacy: how would we effectively embed and integrate these systems into this field without taking away from the knowledge and creativity humans offer?

There isn’t necessarily a fear of a robot uprising in the future, but we should take precautionary measures and truly understand the best way to make these systems work hand in hand with us, not in place of us.


Automated Misinformation

In “How Automated Writing Systems Affect the Circulation of Political Information Online,” Timothy Laquintano and Annette Vee survey the online ecosystem of “fake news.” Writing in 2017, Laquintano and Vee concentrate on how fake news affected discourse surrounding the 2016 US presidential election. The authors’ concern about misinformation driven by automated writing systems seems, in retrospect, to have predicted the horrible events at the US Capitol on January 6, 2021.

After Trump supporters violently stormed the US Capitol building on January 6, ten social media platforms temporarily or permanently banned accounts owned by the former president. Twitter responded to the permanent suspension of @realDonaldTrump, saying, “we have permanently suspended the account due to the risk of further incitement of violence.”

Since then, CEOs of giant tech companies like Facebook, Twitter, and Google have faced pressure from lawmakers and the public about their responsibility for mediating misinformation.


Sundar Pichai (Alphabet/Google), Mark Zuckerberg (Facebook), and Jack Dorsey (Twitter) testify virtually to Congress

Currently, these companies are shielded from liability for what’s posted on their platforms by Section 230 of the Communications Decency Act of 1996. Section 230—which was enacted before the invention of Google—protects websites from being held liable for content posted by third-party users.

According to Sundar Pichai, the chief executive of Alphabet, “Without Section 230, platforms would either over-filter content or not be able to filter content at all.”

This contested editorial ecosystem is at the heart of Laquintano and Vee’s 2017 article. The authors observe a shift from human-editorial writing practices to software-based algorithms that influence how information circulates. This shift becomes problematic because social media and tech companies prioritize user engagement.

Laquintano and Vee explain that these companies profit from user engagement through algorithms that curate content for individual users in an attempt to maximize their screen time.
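To make that concrete, here is a toy sketch (entirely my own invention, not any platform’s actual code) of what engagement-based curation boils down to: score each post by how long the ranker predicts a given user will watch it, then serve the top scorers.

```python
# Toy sketch of engagement-driven curation. Real platform rankers are
# vastly more complex and proprietary; the numbers here are made up.
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    predicted_watch_seconds: float  # the model's guess for this user

def curate_feed(posts, user_topic_affinity, k=3):
    """Rank posts by expected screen time for one specific user."""
    def expected_engagement(post):
        # Unfamiliar topics get a small default affinity.
        affinity = user_topic_affinity.get(post.topic, 0.1)
        return affinity * post.predicted_watch_seconds
    return sorted(posts, key=expected_engagement, reverse=True)[:k]

posts = [
    Post("politics", 40), Post("cooking", 60),
    Post("politics", 90), Post("sports", 30),
]
# A user who has historically engaged with political content:
feed = curate_feed(posts, {"politics": 0.9, "cooking": 0.2})
print([p.topic for p in feed])  # a politics-heavy feed
```

The point of the sketch is that nothing in the objective asks whether the content is true or healthy, only whether it will hold attention.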

Previously on this blog, Christa Teston observed the material conditions that enable the online spread of information. I add that algorithmic “filter bubbles” created by social media and tech companies are another factor threatening public well-being via misinformation online.

The January 6 insurrection was an overt example of the dangers of the current online writing ecology. (There are still less publicized victims of online misinformation). Accordingly, Section 230 has become a contentious piece of legislation in the US, but it seems like both sides of the aisle are open to discussing its revision—for different reasons.

Meme Weaponization & the Future of Warfare

It sounds silly, but memes might be the future of warfare. 

No really—disinformation online is a global concern with real-world impacts. Memes are just another weapon on the digital battlefield. 

I guess it’s not entirely correct to say that memetic warfare is a thing of the future. Because, well, it’s already happening.

Disinformation Kill Chain and Response Framework from Department of Homeland Security https://www.dhs.gov/sites/default/files/publications/ia/ia_combatting-targeted-disinformation-campaigns.pdf

Political memes shaped the 2016 presidential election – hate groups love hijacking memes and appropriating them into hate symbols – ASPI discussed the use of memes as propaganda for extremist movements in their Counterterrorism Yearbook 2021 – and NATO has repeatedly acknowledged the burgeoning threat information warfare poses (most notably here).

Memes have power. And bad actors are abusing them.

What is it that makes memes so damn easy to weaponize? Why are they so effective at spreading disinformation and influencing human behavior?

It’s probably too complicated for me to address in a succinct and comprehensive way. But I can say that speed and audience size are big factors.

Here’s the super-mega-ultra abridged version:

Troll factories, bots, and fake news all play a role in memetic warfare. 

As many of you already know, bots can reach a wide audience and require little time and effort from humans to do it. Timothy Laquintano and Annette Vee put it best.

“Although social networks and online forums, where much of public discourse now takes place, enable greater access to participation for everyday writers…the current scene includes more aggressive intervention by nonhuman actors, such as bots, that generate writing. Humans are, of course, usually responsible for authoring the computational processes that generate writing…, but by making certain aspects of online writing computational, human authors can typically operate with greater speed, scale, and autonomy.”

Humans participate in propaganda, espionage, and the like. This isn’t new, certainly not to warfare. Instead of the traditional places, though, you can now find these dehumanizing tactics in memes. And that’s precisely because bots are so good at what they do.
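The “speed, scale, and autonomy” Laquintano and Vee describe are easy to demonstrate. Even a trivial script (the template and fill-ins below are invented purely for illustration) can draft every variant of a talking point faster than a human can type one:

```python
import itertools
import time

# A crude template bot: combine interchangeable parts into many
# distinct-looking posts. All strings here are made up.
template = "{opener} {claim} {hashtag}"
openers = ["Wake up!", "Unbelievable:", "They don't want you to know:"]
claims = ["the election was decided by bots", "memes are the new propaganda"]
hashtags = ["#truth", "#sharethis", "#staywoke"]

start = time.perf_counter()
posts = [template.format(opener=o, claim=c, hashtag=h)
         for o, c, h in itertools.product(openers, claims, hashtags)]
elapsed = time.perf_counter() - start

print(f"{len(posts)} unique posts drafted in {elapsed * 1000:.2f} ms")
```

Scale the lists up and schedule the output across thousands of accounts, and the asymmetry with human writers becomes obvious.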

Back to the Past: A COVID-Free World

Just when we’ve all barely managed to adjust to this new normal, we are mentally preparing ourselves to get back to some of our old ways. With vaccinations happening all around the world, we have plenty of reason, and hope, to start preparing to re-enter society.

Medical professional administering vaccine to someone

As much as the internet has kept us laughing during these trying times, we are all looking forward to the day when we won’t have to sport a mask. While in-person activities will resume, they will not be without rules and regulations. Even now, as people are getting vaccinated, standard operating procedures (SOPs) remain in place because we are still trying to determine how long the vaccines will protect people.

It’s safe to assume that many of us have grown used to working from home, not meeting up with others, and sanitizing our hands (after everything we do). Only one of these things should be held on to in a post-pandemic world. No points for guessing which one.

Jokes aside, it will be strange to once again be in public spaces. The concept of shaking hands and hugging is already somewhat alien territory. Let’s not even talk about kids who have grown up during this time; they think wearing a mask has always been the norm.

Complain all you must about those never-ending video calls, but the fact is that if it weren’t for them, we would have had zero contact with the outside world. While we’re still on the topic, apparently 30% of us don’t even bother changing into professional attire to take these calls. Don’t Zoom in too close!

Photo of four men dressed in professional shirts but just undergarments on the bottom half of their bodies. It says “Me and the boys ready for Zoom.”

On a more serious note, even the economy seems to have acclimated to the pandemic. One company, in particular, has managed to secure a place in the history books during this unprecedented time. Zoom has been the go-to for all things work-, birthday-, and anniversary-related.

Interestingly, this piece from Colin Lankshear bears testament to the same. Discussing the features of what he calls a ‘new capitalism’, Lankshear highlights one aspect that stands out in particular.

He says, “sources of productivity depend increasingly on the application of science and technology and the quality of information and management in the production process.”

He goes on to state that the greatest innovations during the past thirty years have led the way for improved productivity.

The kicker, if you’d like to know, is that Lankshear wrote this in 1997.

What we can infer from this is that if the economy was so reliant on technology back then, we can only imagine what that means in the present. Two main things to factor in are:

  1. Technology has come a long way since 1997, and
  2. The pandemic has only fueled our dependence on it.

Pandemic or no pandemic, the world was already in the grips of technology. The past year of working and surviving under lockdown has proven that productivity has not only been stable, it has even risen in some cases.

More importantly, this leads us to understand the importance of communication. No matter what the situation, the exchange of information is what will keep us, and the world, going. Ultimately, it is the one thing that will, without a doubt, shape the future.

How “The Algorithm” Builds Toxic Mental Health Echo Chambers

CW: mental health, suicide, eating disorders

If you’re anything like me, you have somewhat of a love-hate relationship with “the algorithm”. On the one hand, I get shown content of the variety I’m partial to on the regular. I’m into houseplants and calligraphy, and the algorithm knows that, so I rather like coming across aesthetically pleasing calligraphy videos on YouTube. On the other hand, I’m a little creeped out that the algorithm knows me so well, and I know that it can serve to perpetuate harmful ideas (as discussed in Noble’s article). On a sillier level, I don’t exactly appreciate getting called out on the regular by other young adults with mental health issues on the internet. 

Actually, interacting (AKA, liking/commenting) with that last type of video can easily trigger another aspect of “the algorithm” that I’m less enthused about: the funneling of impressionable young people into misguided mental health spaces. These are online spaces (comment sections, users’ personal pages, group accounts) wherein often unqualified young adults and teens discuss mental health. Users will make videos prompting others to relate to symptoms of neurodevelopmental disorders or mental illnesses, poke fun at their own mental health challenges, and sometimes glamorize the idea of being deeply unhappy, even suicidal.

Right now, this subculture is having a bit of a moment on TikTok, but it’s certainly not anything new. I’m sure my classmates remember 2012-era Eating Disorder Tumblr.

A mild example of what a search of "thinspo" on Tumblr yields.


I’m not trying to say that this side of the internet is all bad, though. Users often also share tips or tricks that help make daily tasks easier to accomplish, or encourage people to seek professional help if they are struggling. Other users are actual medical professionals or therapists doing their best to offer useful advice. It’s also just nice to know that you’re not alone in your problems. I know I’ve also found solace knowing I’m not the only one experiencing feelings I thought were uniquely mine to bear, or that I’m not the only one who worries about [insert silly thing].

All I’m trying to say is that, when “the algorithm” aggressively directs users to these kinds of mental health spaces and subsequently feeds them often misguided and toxic information, things can quickly get ugly. Vulnerable young people have been known to develop eating disorders or pick up inadvisable coping mechanisms as a result of interacting in such online spaces. And because they continue to interact with such content, these young people can find it extremely difficult to break out of these toxic bubbles. Instead, they get stuck in this nightmarish echo chamber full of other sad teens who are just trying to feel okay in a confusing, scary world.
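That lock-in effect is a simple feedback loop, and a tiny simulation makes it vivid. This is my own illustration with made-up topics and numbers, not any platform’s actual system: every interaction boosts the odds of being shown the same kind of content again.

```python
import random

# Reproducible toy run; the dynamics, not the seed, are the point.
random.seed(0)

topics = ["mental_health", "houseplants", "calligraphy", "news"]
weights = {t: 1.0 for t in topics}  # start with a balanced feed

def serve_and_interact(rounds=50):
    """Each round, serve a topic proportionally to its weight, then
    boost that topic because the user 'engaged' with it."""
    for _ in range(rounds):
        shown = random.choices(topics, weights=[weights[t] for t in topics])[0]
        weights[shown] *= 1.5  # engagement makes the same topic more likely
    return weights

final = serve_and_interact()
dominant = max(final, key=final.get)
share = final[dominant] / sum(final.values())
print(f"{dominant} now makes up {share:.0%} of the feed's weight")
```

Whichever topic happens to get a few early interactions snowballs until it crowds out everything else, which is exactly the echo-chamber dynamic, minus all the human stakes.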

It’s this echo chamber effect created by “the algorithm” that worries me most. It certainly isn’t limited to mental health discourse: social and political echo chambers exist all over the internet. Laquintano and Vee describe how “the algorithm” affected the circulation of political information ahead of the 2016 election in their article. These spaces can similarly serve to promote misguided ideologies (such as glorifying cults).


A political cartoon showing a modern example of how social media creates echo chambers. Illustration by Robert Ariall on postregister.com.

Generally though, echo chambers of any kind do one thing best: they echo. They repeat the same few ideas and opinions over and over and over again. And when those ideas are harmful, bad things happen. Real-world problems start to occur, and perhaps just as importantly, young people who’ve fallen prey to this algorithmic shepherding are prevented from seeing that there are other parts of life, online and off, that are better than this. Even beautiful. This isn’t all there is. Some things matter way more than the circumference of your wrists.

I don’t have a solution to this shepherding problem. Do we need more content censorship so that harmful information never ends up online in the first place? Or is that an infringement upon free speech? Should we “dial back” how aggressively the algorithm picks up on browsing patterns and herds us into groups? I don’t know. But I’m confident that we could all benefit from stepping outside our online bubbles, even if we don’t think we’re in a harmful or hateful space. Perspective is key: your slice of the internet is never all there is. The internet can be a tool for good, if we use it that way.

Fear the Bots…Or Not

Line drawing of connected dots made to look like a human reaching out with the letters "AI" on the palm of its hand

In 2014, Stephen Hawking gravely warned against creating Artificial Intelligence (A.I.) devices that could match or surpass human abilities. Hawking’s fears are not unique or new – but are they warranted? Could A.I. ever really replace a living, breathing person?

The short answer is “maybe.” As technology advances, use of A.I. will likely continue to expand across all industries. In classrooms, bots can be used to grade papers, thus potentially freeing up instructors to spend more time with students. Outside of the classroom, students might try to use a bot to write a paper for them. A.I. even beat contestants on Jeopardy!

Personally, I find it a little terrifying to consider all of the different ways that A.I. might take over human thought processes. At what point will our world start to look like a real-life version of Ex Machina or I, Robot?

The reality is that A.I. is still relatively young in the grand scheme of technological advances. While it is true that A.I. has advanced to mimic human thought processes such as those described above, there are massive limitations in what A.I. can do.

In 2019, an A.I. device, Project Debater, went head-to-head with a human economic consultant to debate whether or not preschools should be subsidized by the public. While Project Debater had all of the same facts and figures as its human opponent, the machine was not able to argue successfully.

Multi-Colored Mechanical Gears in the Shape of a Human Brain

A.I. devices mirror humans when it comes to logic and facts. But when it comes to abstract concepts and rhetorical persuasion, A.I. can’t compete. And according to some experts, it never will. Abstract ideas are not easily replicable and often don’t conform to any set patterns or rules, making them nearly impossible to create in the form of a machine. Similarly, the art of rhetorical persuasion requires a certain emotion to be conveyed from speaker or writer to the intended audience.

So, put the fears aside. While A.I. will continue to advance at the simple stuff, it will not be able to replace the core of what makes humans human.

An American Sense of Reality

“To watch the TV screen for any length of time is to learn some really frightening things about the American sense of reality. We are cruelly trapped between what we would like to be and what we actually are.”
– James Baldwin


No entity is more omnipresent among the American populace than mass media. It leaks into every facet of our lives and defines how we perceive others and construct our own identities.

Adult reaching out to baby through phone screen

Various studies have shown that mass media is a powerful influence that commonly causes people to undergo an identity shift. An identity shift is defined as “choosing to change your current identity because you want to become a new person and experience a new life.”

Adolescents are the group most susceptible to media influence and identity shifts, because the adolescent years are the most formative period of identity development.

TikTok is a great example of a mass media venue that constantly encourages impressionable youths to undergo identity shifts. These identity shifts can be relatively tiny, such as a person basing more of their identity around a harmless fandom, or substantial, such as a person adopting antagonistic language and attitudes toward certain groups of people in order to mimic their favorite creator.

Kirkland & Jackson, in their work “‘We Real Cool’: Toward a Theory of Black Masculine Literacies,” offer another great example of mass media’s ability to influence adolescents’ identities. They investigated the role rappers and rap media played in determining the language-in-use by “cool” African American adolescents, tracing how a group of “cool” children altered their language, social views, and clothing choices to align more closely with what rap media portrayed and perpetuated as cool.

Picture showing off a child's drawing that exemplifies Hip-hops's cultural influence on the way children speak

The pair also provide context on why specific mass media have a more significant influence on certain groups over others. In their study’s case, African American children formulated their “cool talk” and identities around African American rap artists and media because the community they inhabited deemed said rap artists as representative of what a “cool black man” and/or “black masculine cultural model” is.

I think that, moving forward as a society, it will become more and more important to encourage people to regularly distance themselves from media so they can maintain and reinforce their own personally constructed identities, separate from overpowering external influences. Otherwise, I think events such as the recent uptick in white supremacists targeting racist media at adolescent boys, in the hope that the boys will form their identities around normalized racism, will become much more commonplace.

Your Attorney is a Robot

As in many other industries, artificial intelligence is slowly becoming an existential threat to young professionals attempting to break into the legal sector. AI is taking over many of the lower-level tasks historically assigned to junior attorneys and legal assistants and performing them in a fraction of the time.

Robot creating a hologram of a balance (representative of the legal field)

AI has taken over research, litigation forecasting, legal analytics, document automation, and electronic billing in law firms ranging from small to gigantic throughout the United States. The most devastating of these takeovers is document automation: writing work that once required a team of junior attorneys a week to finish can now be completed by writing bots in minutes.

Obviously, such a dramatic increase in efficiency has caused law firms to find buying a legal writing bot software package and hiring a single junior attorney to supervise its writings much more attractive than hiring and training a whole team of junior attorneys to perform the same work. That is a depressing fact for the swarms of law students attempting to obtain internships during law school and the graduates trying to start their actual careers.
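At its core, document automation is template filling driven by structured case data. Here is a bare-bones sketch of the idea; the template text and field names are invented for illustration, and commercial legal drafting tools are far more sophisticated:

```python
from string import Template

# A vetted boilerplate clause with placeholders for case-specific facts.
# (Invented example text, not a real legal form.)
NDA_TEMPLATE = Template(
    "This Non-Disclosure Agreement is entered into on $date between "
    "$disclosing_party and $receiving_party, covering $subject."
)

def draft_nda(case_data: dict) -> str:
    """Fill the template from structured intake data; substitute()
    raises KeyError if any required field is missing."""
    return NDA_TEMPLATE.substitute(case_data)

doc = draft_nda({
    "date": "April 12, 2021",
    "disclosing_party": "Acme Corp.",
    "receiving_party": "Jane Doe",
    "subject": "product roadmap discussions",
})
print(doc)
```

Multiply this by thousands of clauses, add models that pick which clauses apply, and it becomes clear why a single supervising attorney can replace a drafting team.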

Robotic attorney

It is worth noting that just as McKee & Porter recommend in their article “The Impact of AI on Writing and Writing Instruction,” law professors are actively reacting to the technology and have begun to instruct their students on leveraging and working alongside legal AI and writing bots.

For example, Harvard Law School has already started to offer “legal innovation and programming” courses. Hopefully, this proactiveness on the part of legal academics will soften the blow of the shift to legal AI integration by law firms and prevent future attorneys from being left in the dust by the technology.

Gif of a scene from "Legally Blonde" saying "Girls, I'm going to Harvard!"

The technology is not all doom & gloom though, as it does hold genuine benefits for the field of law. In a profession centered around billable hours for charging clients, the ability of legal AI to cut week-long tasks down to minutes allows law firms to become much more affordable and therefore accessible to the “everyman.”

Overall, Legal AI is a multifaceted issue since it is both a tremendously beneficial technology and a severely disruptive one. On the one hand, it will benefit the workflow of many law firms and improve the process of law itself. On the other, the technology is guaranteed to allow law firms to cut down on employees and make it even harder for young legal professionals to break into the already very competitive legal job market. If “Legally Blonde” ever gets a sci-fi remake, it’ll for sure have to include a plotline about dastardly legal writing bots and their desire to replace so many poor junior attorneys.

Vaccinations, Public Health Rhetoric, and Snapchat Stories: How Online Writing has Affected Vaccination Efforts

If you live in Ohio and are currently located in the Columbus area, you may know the struggle of getting a COVID vaccine. Just look at the map below to see how appointment unavailability is concentrated in Columbus. Compared to other large population centers in Ohio, Columbus is by far experiencing the most shortages. Even with places like the Schottenstein Center having delivered over 79,000 vaccines, the demand for vaccination in the Columbus area is higher than the supply.

A map of Ohio that highlights vaccine availability

But why is this? As discussed in Week 12, the pandemic has caused an increase in coalitions and relational literacies around health. We in Columbus, especially those who attend OSU, are lucky to have a close relationship to health information via the Wexner Medical Center. It provides so much information on health and wellness, and many students and alumni value its writing greatly.


With the integration of health information so strong in the Columbus community, it seems to me there has been an even greater response to vaccination. Even in my personal communities, everyone I know is actively trying to book a vaccine appointment or has booked an appointment. We, as OSU students and community members, are more aware of how important getting vaccinated as quickly as possible is, and we, therefore, have a much higher demand for vaccine appointments.


This increase in availability may be more of a reflection of efforts to support rural and minority communities, who are especially vulnerable to COVID-19 outbreaks. This increase in appointment availability will hopefully help these rural communities get over some of the vaccine hesitancy present there. Unlike in Columbus, many people in rural communities may not be able or willing to take a vaccine appointment time that interferes with their work or life schedule. There will also need to be a bigger push of public health writing and rhetoric to decrease vaccine hesitancy, and it will need to target the specific fears and hangups each community has.


Unfortunately, this lack of appointments means people like me, a 22-year-old college student, have a much harder time accessing vaccines. We want to be able to celebrate our graduations safely, but this means we need to be vaccinated within the next week if it’s not already too late. But fortunately, social media has helped many people access the vaccine in an alternative way. While you shouldn’t be posting your vaccine card, posting about getting a vaccine is a way of sharing support for a public health issue. It also makes others aware of how they can get vaccinated. I personally found access to a vaccine through social media. While it may be used to push anti-vax rhetoric, social media also has the power to get us back to normal even faster.

The Internet and Democracy

When the World Wide Web was invented in 1989, an excitement gripped the United States as the internet went from a system primarily used by scientists to a system that could one day connect the world. Connection didn’t happen immediately. It took several years before email or the internet were widely available to everyone, and even then, there were barriers to access.

Thirty years later, most Americans walk around with access to the world’s information in their back pocket in the form of smartphones. But how connected is society? Is the internet actually driving us further apart?

Laptop Computer with Image of American Flag Covered in Code on Screen

In the wake of last November’s contentious Presidential election, these questions have been at the forefront of mass media. Recently, an article in The Atlantic asked if democracy itself is at risk of failing as a result of partisanship on the World Wide Web. The internet is an open system that allows anyone to post nearly anything at any time. On the one hand, this open source format allows Americans to exercise their First Amendment rights in a manner the Founding Fathers could never have imagined.

But this open system also allows bad actors a platform that was not available to them in a world where print ruled. One example of this is the use of algorithms that “control” what a person might see on their social media feeds. In their article, “How Automated Writing Systems Affect the Circulation of Political Information Online,” Timothy Laquintano and Annette Vee mention a “Red Feed, Blue Feed” graphic created by the Wall Street Journal in which one can see the difference between a conservative or liberal Facebook feed “on a variety of issues,” thus highlighting the potential polarizing effect of the Web.

Algorithms, bots, and the Google search engine can all give the effect that humanity is not really in control of the internet at all. Fortunately, as The Atlantic article mentions, there are pockets of the internet that are so far free from manipulation, such as Wikipedia.

The question for American society is how to reclaim the parts of the internet controlled by trolls, bots, and corporations. In order to regain connection over division, tough decisions will need to be made on how to govern the internet without impinging on the rights of citizens. It won’t be easy, but it is necessary for the future of democracy.