I wish that I could be like the Cool Kids

In a group of friends, everyone usually plays a certain role. Whether they are the player of the group, the comedian, the lover boy, or the quiet introvert who surprisingly gets really “turnt” on the weekend, they all express themselves to one another through sayings, memories, and touchstones that resonate with each of them.

And with the rise of terms coined on social media platforms such as TikTok and Twitter, the fun has just begun. Terms like “Bussin” and “Sheesh,” along with poses and facial expressions, all resonate within these groups, making time together enjoyable and creating great shared memories.

Sheesh Meme – TikTok Sound (YouTube)

Now, while this may not be the case for every friend group out there, as some may find it cringe, they all still share experiences that make them the friends they are today.

It all boils down to understanding language, tone, and identity. Being able to share these experiences is usually because we can reflect upon them. At the very least we share something in common.

The same can be said about literacy in a social and cultural setting. Our mannerisms may be a result of something we read or watched. Have you ever watched a show, finished it, and found yourself subconsciously becoming someone within the show?

The same thing can be said about literacy, and David E. Kirkland of NYU, author of “‘We Real Cool’: Toward a Theory of Black Masculine Literacies,” had this to say:

“Words like “dog,” for example, were frequently used among the cool kids as terms of endearment. Such terms were also used as affirmations of coolness, reserved for those young black men who, according to the cool kids, were “down,” a word they used to signify allegiance.”

These terms make us feel comfortable and help us understand one another better. These terms of endearment reflect our upbringing, unity, and friendship.

Wait… you said who wrote this?!

Technology has become very advanced over the last 50 years, from 2 gigabytes of storage being a milestone to a terabyte hard drive not being enough in some cases. While these advancements have assisted many fields such as medicine, programming, and the like, technology has also slowly eased its way into academia and, in particular, writing.

Centers of Technology: The Future Is Now

The entertainment industry has shown us nothing but dystopian visions of a future where AI has been integrated into society, in shows such as “Black Mirror” and “Love, Death & Robots.” It feels uneasy and unnerving at times: while we watch these shows for entertainment, could there be some truth to the matter?

Now, of course, we have the common AIs that are seen as convenient, like Siri and her sisters Alexa and Cortana. But even within that realm, they are nowhere near the level of the complex AIs being made that could essentially replace human tasks.

In the world of academia, writing bots and AI-based teachers are starting to become replacements for tasks humans do. “The Impact of AI on Writing and Writing Instruction,” by Heidi McKee and Jim Porter, covers this topic in depth.

“For example, x.ai’s personal assistant scheduling bot, Amy/Andrew Ingram, in the context of email about meetings, is very often mistaken as human. In fact, some email correspondents have even flirted with Amy Ingram and sent her flowers and chocolates. Some poetry writing bots are already informally passing the Turing Test.”

This is just one of many examples of AI writing becoming so realistic that it can be hard to differentiate it from human text.

In an article written by a super AI called GPT-3, we see a small sample of just how complex AI writers can be.

What's the world's fastest supercomputer used for? (HowStuffWorks)

“The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.”

“For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.”

This writing does seem a little odd, because it feels as though a human wrote it in complete irony, and if an AI did write it, that would be off-putting as well. But the very fact that one may not be sold on the idea of this being written by an AI just goes to prove how advanced these systems are becoming.
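You don’t even need privileged access to see this for yourself. GPT-3 itself sits behind OpenAI’s API, but as a rough illustration (not the setup used for the op-ed above), a few lines of Python with the Hugging Face transformers library and the much smaller, openly available GPT-2 model will happily continue a prompt written in the spirit of that op-ed; the output will be far cruder than GPT-3’s, but it is still machine-written prose on demand.

```python
# Minimal sketch: generating text with a freely available language model.
# GPT-3 is accessed through OpenAI's paid API; this uses its smaller, open
# predecessor GPT-2 via the Hugging Face "transformers" library instead.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the sample reproducible

prompt = "I am not a human. I am a robot. A thinking robot."
samples = generator(prompt, max_length=60, num_return_sequences=1)
print(samples[0]["generated_text"])
```

The point is not the quality of GPT-2’s continuation; it is how little effort machine-generated prose now takes.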

This then brings up the ultimate question of AI’s impact on literacy: how do we effectively embed and integrate these systems into this field without taking away from the knowledge and creativity humans offer?

There isn’t necessarily a fear of an uprising of robots in the future, but we should take precautionary measures to truly understand the best way to make these systems work hand-in-hand with us rather than in place of us.


Rumination as a Consequence of Literacy

My husband and brothers-in-law are in an album club hosted over Discord. Over the course of two-ish weeks, they each listen to an album nearly ten times, then write up a few surprisingly formal paragraphs about the material.

At first, I wanted to participate in the fun. Like most people, I consider myself a music-lover, I think I have good taste, and I also struggle to believe that people will continue to survive without hearing my very important opinions on everything. But the idea of listening to the same album seven, eight, nine, or more times over the course of fourteen days didn’t sit well with me. Part of me wasn’t prepared to give that kind of focus to something, but I also felt ill-equipped because I am still processing the work of artists I started listening to years ago. If after who knows how many listens I still don’t know quite how I feel about Rumours (though, generally: yes, very good… except what happened with Oh, Daddy?), how can I expect myself to have anything to say about an album that is still so new to me?

Image of Squidward from SpongeBob SquarePants with bloodshot eyes, looking perturbed

I am an iterative thinker (or a ruminator, if my mother is to be believed). I’m also an iterative reader, listener, writer. Recursion continues to show up in my life in many ways, and I am still learning to suspend my anxiety about it (a semester of Data Structures and Algorithms is enough to make anyone’s palms sweat at the thought).

I am always coming back, reviewing, adding, taking away. Revision is the sine qua non of my existence. I feel like I could toy with one idea forever, if only there weren’t so many others. Sometimes this abiding curiosity feels like a vigil: sitting, waiting, watching. Usually I think what you’re watching for is most important. What you find changes who you are.

But, in a larger sense, this rumination is one of the consequences of literacy that Goody & Watt discussed in their paper. A study conducted by the University of Michigan found that 73% of people between the ages of 25 and 35 are chronic over-thinkers. While this statistic falls with age, it remains significant: as creatures, we are obsessive. Maybe this is because reading and writing give us the opportunity to revisit ideas constantly. Our shifting understanding of time and history permits us to linger, and then walk away… and then come back. At the same time, we are prone to revisionist histories in the same way that the oral cultures we discussed are: we change our minds with new information, according to our needs. Our ideas don’t stay static. We struggle to agree on facts, even though we each believe that facts exist. This revision, too, has been made a literary process. The second reading of a text is different from the first.

Malcolm X described his literacy as freedom. I wonder if he found a sweet spot between illiteracy and rumination.

Google: Friend or Foe?

How goes it everyone?

I know we all love Google and the convenience it provides. Heck, Google is so deeply ingrained in our daily lives and habits that the brand name itself has become a verb. Don’t know something? “Google it,” we all say.


But what if I told you that Google might be making you dumber? Nicholas Carr wrote an excellent article on this subject if you want to dive a little deeper than my brief commentary, but the basic gist is that having all of the world’s information (and misinformation) at our fingertips at the push of a button is making us impatient, shortening our attention spans, and diminishing our capacity for critical thought. You know how you often opt to read short, quick, easy blog posts like this or watch a short video on a topic in lieu of reading a research paper?

I know that I do this (don’t feel guilty!). That’s an example of what we’re talking about here. All of this technology and convenience (or rather our growing reliance on it) is making us lazy! Do you have trouble finding most places without using a GPS (I know I do!)? It’s making us incapable of performing simple tasks for ourselves like reading a map or simply remembering where things are. This is an issue that growing numbers of experts are starting to sound the alarm about.

In summary, be careful about how much you’re using Google and other convenient tech. Make sure to keep your brain active so you don’t lose it!

Peace!

The Kids Are Alt-Right

Whether you’re into crafts and DIY, boybands, gaming, or grilling, chances are you’ve watched a YouTube video about it before. YouTube is a video-sharing platform and the second largest search engine behind Google Search. Users watch over a billion hours of content on the site every day.

This post from our course blog discusses a growing issue on social media platforms: The Algorithm. Clicks = Ad $$, and algorithms reflect that. The echo chamber, or filter bubble, or whatever you want to call it, that is born from aggressive algorithms can be dangerous. Once you engage with certain content, similar content starts popping up more, and users are recommended increasingly extreme content.
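The mechanics behind that loop are simple enough to sketch. Here is a toy Python example, not any platform’s actual code; the catalog, topics, and click-weighting are invented purely for illustration, but it shows how a click-weighted recommender keeps serving more of whatever a user has already engaged with.

```python
# Toy sketch of an engagement-driven recommender (invented data, no real platform).
# Whatever a user clicks gets weighted more heavily, so similar content keeps surfacing.
from collections import Counter

CATALOG = {
    "gaming":   ["Speedrun highlights", "Edgy gaming compilation"],
    "politics": ["Debate clips", "Outrage commentary"],
    "crafts":   ["DIY shelf build", "Beginner knitting"],
}

def recommend(click_history, n=3):
    """Rank topics by how often the user clicked them, then fill the feed
    from the most-clicked topics first, so engagement amplifies itself."""
    weights = Counter(click_history)                       # clicks become weights
    ranked = [topic for topic, _ in weights.most_common()] or list(CATALOG)
    feed = []
    for topic in ranked:
        feed.extend(CATALOG.get(topic, []))
    return feed[:n]

# A user who clicks two gaming videos now gets a gaming-heavy feed.
print(recommend(["gaming", "gaming", "crafts"]))
```

Feed it a click history that leans one way and the feed leans the same way, only harder; that self-reinforcement is the filter bubble.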

Safiya Noble’s “Google Search” interrogates the algorithmic practices of biasing information through search engine results, specifically concerning how Black women and girls are rendered online. Noble states an ugly truth: “…search engine technology replicates and instantiates derogatory notions.”

Search results for the word “feminist” in YouTube Search.

TikTok-ers have recently been posting about such a phenomenon on YouTube, particularly affecting teenage boys, known as the “Alt-Right Pipeline.”

PewDiePie, a gaming channel, has been known as an entry point to falling down the alt-right rabbit hole. “Edgy humor” becomes increasingly blurred with hate speech, and compilations of SJWs/feminists/whoever getting destroyed/owned/whatever become all you see. These subcultures are fed by content creators who promote each other and their other social media platforms. In an extreme instance, a shooter live-streamed his attack on a mosque and told viewers, “Remember lads, subscribe to PewDiePie.” In the past 4 years, alt-right groups have grown emboldened by support from former President Donald Trump.

The rise of the alt-right is both a continuation of a centuries-old dimension of racism in the U.S. and part of an emerging media ecosystem powered by algorithms.

Going through an “alt-right phase” isn’t quirky or relatable. Interacting with these ideologies has real-life, dangerous consequences.

In the effort to engage users as much as possible, we are left with the consequences of algorithms gone wild. Companies need to be more transparent about their algorithms, and actively work to improve them to be anti-racist. Additionally, we need to examine more closely the relationship entertainment and education have online. As we click, and click, and click, companies lead us down extremist rabbit holes, and profit all the while.

The Chauvinists

The big news is the conviction of Chauvin. That’s what everyone’s talking about. That’s what has been driving journalists, bloggers, and everyone with a keyboard to… well, their keyboards. It’s good news. Sort of. A man is dead, and 64 people have died at the hands of the police during the trial proceedings. I visited my mom before the verdict, and on my way out, she casually joked about possible rioting after the reading. She mentioned that the statehouse had actually put up barriers around its perimeter. That they felt the need to set up protection for the duration of the trial. It seems the only sane option was a guilty verdict. Now, the world is awaiting the sentence, and in the meantime it feels like hundreds of articles are being churned out, both for and against the decision of the jury. How much of it is produced by bots? How much of it is inflated politicking or trolling or some other form of monetization?

The news seems more and more to be a function of generating clicks rather than expanding a population’s understanding of their world. There were literally over 100 articles written within the last 24 hours that mention Chauvin. And the way in which we get our news, even, is now produced so heavily by algorithms and marketing measures that I can’t help but wonder how much George Floyd’s death put into the pockets of people telling the “news.” I don’t know if monetization is the name of the game internationally, but one of the top watched broadcasts, Fox, is defending their biggest earner by stating he can’t be taken seriously. Access to information has never been more abundant, and the visibility of information has never been more obscure. How do you choose what you should know? Most of the research I do is through Google Scholar, the sourcing at the bottom of Wikipedia pages, and asking the nearest expert willing to talk to me. Information is easier to find and harder to parse, and it leaves me feeling like every tragedy, celebration, and occurrence in our lives is reducible to a dollar amount.

Bots to Bring Doom to Democracy or a New Song for the Same Old Dance?

20 million active Twitter accounts are fake. 20 million opinions, retweets, and participants in political movements are fake. That is according to only one article; it seems likely that there are many more. According to the scholars Laquintano and Vee, between one quarter and one third of users in support of Donald Trump or Hillary Clinton were bots as of 2016.

Example of an Obvious Bot Tweet

So, the simple conclusion is that truth is doomed, right? How can the millions who receive their news and political opinions from the “unbiased and democratic” Twitter expect to make informed voting decisions if they are not actually engaging in civil discourse but in capitalistic vote manipulation through conversations with Twitter bots?

Well, the truth is the cards have been stacked against voters. Capitalism has always had a heavy hand in politics. From the very beginning, voting itself was restricted to those who had a large economic stake in America: white male landowners. As voting rights expanded, more and more methods of influencing less affluent voters developed. The most obvious are advertisements, from newspaper spreads to radio ads, and large donations to politicians’ campaigns.

Political Advertising During the Great Depression

Today, however, this manipulation is more subtle than ever. According to Laquintano and Vee, many of the bots used to sway political opinion do so by being able to pass a Turing test or through their sheer numbers. In other words, bots can pass as real humans in the eyes of unknowing humans. And when a bot is discoverable, there are often so many of these bots that their discovery is inconsequential to their movement.

So, manipulation has always existed in American democracy. The only difference is that now it is not obvious where it is coming from. Further subtle manipulation in politics may even reach vote counting itself, as some sources have alleged Russian tampering with the 2020 presidential election vote counts.

90% of news outlets are owned by just six companies. If anyone remembers the play Newsies, it would simply take collusion between those six firms for major social justice issues or pieces of news to go ignored and unnoticed by Americans. Not to mention that if these firms ever decided to cover an issue in a certain manner to sway votes, they could entirely sway the views of most Americans.

Joseph Pulitzer: Villain of Newsies Who Colluded With Other Newspapers to Stop the Newsies’ Strike

So, that about covers it. News outlets, social media, and even voting itself may be shot. Undoubtedly, historians who study political literature in the future will have an extremely difficult time deciding which news articles, tweets, and even social movements were entirely fashioned by capitalistic stakes in politics. So, yes, we are doomed, just as doomed as the political system in America has always been.

All this means for us is that the literature of past political movements was a bit more genuine. Today, any political literature is less about fairness or equality than it is about greasing the pockets of whoever is interested, in some manner we are yet to understand. So, how to stay sane? Unplug. I have never heard someone tell me that reading political news has made them happy. In fact, I can say from experience that only the opposite is true. As long as America can make someone richer than us a buck, our system will work and we will have bread on the table.

Millet, The Angelus: A Couple Prays in Thanksgiving for Their Day’s Work and Harvest Through the Angelus Prayer, Evocative of the American Ethos

High Hopes for Lower Recidivism.

Yesterday was April twentieth.

That’s big news for some, and a passing joke for others. It meant deals! It meant celebration! And for me it mostly meant finals. 420, and weed in general, is a great example of a subject that inspires pictographs. Paraphernalia, coded phrases, and subtle smiles follow this habit all over our country.

Now I didn’t celebrate, personally. I’ve never been taken by the desire to pursue recreational drugs. I barely drink. But I absolutely love the passion and creativity that comes out of the ongoing debate on legalization and general use. If it were up to me, it’d be legal. Not because it’d give me a reason to partake, but because alcohol being legal is significantly worse for people, and the ban on weed mostly just enlarges the gap between the privileged and underprivileged.

Most people first encounter pot in school. Specifically, high school. By the time kids are eighteen, more than a third have tried pot, and ten percent are frequent users. They roll out of English, and roll up a blunt. And the real issue here is that punitive action is divided along color lines. Black and white people use at roughly the same rate, but between 2001 and 2010, Black people had nearly four times the arrest rate for possession.

And when you incorporate school discipline into the equation, I’m left wondering how much marijuana is used in the continuation of the school-to-prison pipeline. Kids are more likely to use again when they are suspended from school for using in the first place.

Automated misinformation

In “How Automated Writing Systems Affect the Circulation of Political Information Online,” Timothy Laquintano and Annette Vee survey the online ecosystem of “fake news.” Writing in 2017, Laquintano and Vee concentrate on how fake news affected discourse surrounding the 2016 US presidential election. The authors’ concern for misinformation driven by automated systems of writing might have predicted the horrible events at the US Capitol on January 6, 2021.

After Trump supporters violently stormed the US Capitol building on January 6, ten social media platforms temporarily or permanently banned accounts owned by the former president. Twitter explained the permanent suspension of @realDonaldTrump, saying, “we have permanently suspended the account due to the risk of further incitement of violence.”

Since then, C.E.O.s of giant tech companies like Facebook, Twitter, and Google have been facing pressure from lawmakers and the public about their responsibility in mediating misinformation.

Sundar Pichai (Alphabet/Google), Mark Zuckerberg (Facebook), and Jack Dorsey (Twitter) testify virtually to Congress

Currently, these companies are shielded from liability of what’s posted on their platforms by Section 230 of the Communications Decency Act of 1996. Section 230—which was enacted before the invention of Google—protects websites from being liable for content posted by third-party users.

According to Sundar Pichai, the chief executive of Alphabet, “Without Section 230, platforms would either over-filter content or not be able to filter content at all.”

This contested editorial ecosystem is at the heart of Laquintano and Vee’s 2017 article. The authors observe a shift from human-editorial writing practices to software-based algorithms that influence how information circulates. This shift becomes problematic because social media and tech companies prioritize user engagement.

Laquintano and Vee explain that these companies profit from user engagement through algorithms that curate content for individual users in an attempt to maximize their screen time.

Previously on this blog, Christa Teston observed the material conditions that enable the online spread of information. I add that algorithmic “filter bubbles” created by social media and tech companies are another factor threatening public well-being via misinformation online.

The January 6 insurrection was an overt example of the dangers of the current online writing ecology. (There are still less publicized victims of online misinformation). Accordingly, Section 230 has become a contentious piece of legislation in the US, but it seems like both sides of the aisle are open to discussing its revision—for different reasons.

Are Bots Brainwashing Us?

Hey everyone,

If you’ve been closely following politics and reading the news over the past four years, you’ve probably at least heard of bots. But what exactly are they? And why do they matter?

A bot is an autonomous program on the internet or another network that can interact with systems and users. A bot can be programmed to do all sorts of things, like write tweets about specific subjects on Twitter at a specific time each day. A bot network, or “botnet,” is a group of these bots that work in concert with each other at the behest of whoever programmed them.

Bot network
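To make that concrete, here is a minimal sketch in Python of the kind of scheduled posting bot described above. It is illustrative only: the schedule package is a real third-party library, but post_to_twitter() is a hypothetical stand-in for whatever platform API a bot operator would actually call, and the canned messages are invented.

```python
# Minimal sketch of a scheduled posting bot (illustrative only).
# Requires the third-party "schedule" package (pip install schedule);
# post_to_twitter() is a hypothetical placeholder for a real platform API call.
import random
import time

import schedule

CANNED_MESSAGES = [
    "Candidate X is the only one fighting for you! #Election",
    "Don't believe the polls, the momentum is real. #Election",
]

def post_to_twitter(text: str) -> None:
    # Placeholder: a real bot would authenticate and call the platform's API here.
    print(f"[bot] posting: {text}")

def job() -> None:
    """Pick one of the pre-written messages and post it."""
    post_to_twitter(random.choice(CANNED_MESSAGES))

# Post once a day at a fixed time: exactly the behavior described above.
schedule.every().day.at("09:00").do(job)

if __name__ == "__main__":
    while True:
        schedule.run_pending()
        time.sleep(60)  # check the schedule once a minute
```

A botnet is essentially many copies of a script like this, each running under a different account and drawing from coordinated message lists, which is what makes the scale so hard to counter.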

What’s worrisome about these botnets is that they’re becoming shockingly realistic and more difficult to distinguish from real human users, whose views are in turn being influenced by bots employed by dishonest political actors both foreign and domestic. It is widely agreed upon by reputable sources that the past two U.S. elections (the 2016 election in particular) were heavily influenced by botnets designed to manipulate public opinion. Unwitting social media users are being bombarded with dishonest propaganda from these botnets on a daily basis.

In summary, make sure you’re getting your info from real people! If you want to know more about this subject, Timothy Laquintano and Annette Vee have done an invaluable in-depth study that it would behoove us all to read.

Just a friendly heads up for all you political junkies out there. Peace!