Commodification of Black Identity

Emojis, memes, and reaction gifs are a form of writing—maybe a bizarre opinion, but it’s a hill I’m willing to die on.

And no doubt, writing is a powerful tool. Certainly it’s a factor in determining and perpetuating both stereotypes and power dynamics. Does this mean, then, that memes (and emojis and reaction gifs and various online behaviors) aren’t just harmless jokes but actually powerful tools for easily spreading dis/information, opinions, and ideas?

Tweet saying, “Is digital blackface a form of policing our freedom of expression? Or does it perpetuate harmful stereotypes?”

Well, yeah.

(If you need more convincing, then don’t miss my upcoming post on memetic warfare.)

Now that we’ve laid some groundwork, I’d like to talk about Digital Blackface.

You know, when non-Black people use digital spaces to “try on” Black identity. Sometimes people make entire social media profiles; sometimes it’s as seemingly innocuous as sending a reaction gif. In both cases, Blackness—as in, Black social identity and culture—is performed in an exaggerated, often harmful and stereotypical way.

I won’t rehash the many, many arguments that have already been made. People more qualified and more experienced than I am have tackled this subject, so, especially if this is unfamiliar territory, I definitely recommend you check out their articles. (After you’re done here, of course!)

What I do want to do is offer another layer to the conversation. 

Tweet from @BriannaABaker saying, “Why is it that when a Black man expresses emotional vulnerability, he’s made into a meme?? #DigitalBlackface” with Google search results for “crying meme” underneath.

Identity commodification is damaging yet simultaneously unseen and ubiquitous. Safiya Noble shows us in her article on Google’s search engine algorithms why this is such a serious problem:

“Black girls are sexualized or pornified in half (50%) of the first ten results on the keyword search ‘Black girls’… What these results point to is the commodified nature of Black women’s bodies on the web—and the little agency that Black female children (girls) have had in securing non-pornified narratives and ideations about their identities.”

Both Digital Blackface and Google’s search engine results commodify Blackness. They give non-Black people control over the construction of Black identity. I’d say that’s a pretty big deal.

How “The Algorithm” Builds Toxic Mental Health Echo Chambers

CW: mental health, suicide, eating disorders

If you’re anything like me, you have something of a love-hate relationship with “the algorithm”. On the one hand, I regularly get shown content of the variety I’m partial to. I’m into houseplants and calligraphy, and the algorithm knows that, so I rather like coming across aesthetically pleasing calligraphy videos on YouTube. On the other hand, I’m a little creeped out that the algorithm knows me so well, and I know that it can serve to perpetuate harmful ideas (as discussed in Noble’s article). On a sillier level, I don’t exactly appreciate getting called out on the regular by other young adults with mental health issues on the internet.

Actually, interacting with (AKA liking or commenting on) that last type of video can easily trigger another aspect of “the algorithm” that I’m less enthused about: the funneling of impressionable young people into misguided mental health spaces. These are online spaces (comment sections, users’ personal pages, group accounts) wherein often-unqualified young adults and teens discuss mental health. Users will make videos prompting others to relate to symptoms of neurodevelopmental disorders or mental illnesses, poke fun at their own mental health challenges, and sometimes glamorize the idea of being deeply unhappy—even suicidal.

Right now, this subculture is having a bit of a moment on TikTok, but it’s certainly not anything new. I’m sure my classmates remember 2012-era Eating Disorder Tumblr.

A mild example of what a search for “thinspo” on Tumblr yields: toxic eating disorder culture.

I’m not trying to say that this side of the internet is all bad, though. Users often share tips and tricks that help make daily tasks easier to accomplish, or encourage people to seek professional help if they are struggling. Other users are actual medical professionals or therapists doing their best to offer useful advice. And it’s just nice to know that you’re not alone in your problems. I know I’ve found solace knowing I’m not the only one experiencing feelings I thought were uniquely mine to bear, or that I’m not the only one who worries about [insert silly thing].

All I’m trying to say is that, when “the algorithm” aggressively directs users to these kinds of mental health spaces and subsequently feeds them often misguided and toxic information, things can quickly get ugly. Vulnerable young people have been known to develop eating disorders or pick up inadvisable coping mechanisms as a result of interacting in such online spaces. And because they continue to interact with such content, these young people can find it extremely difficult to break out of these toxic bubbles. Instead, they get stuck in a nightmarish echo chamber full of other sad teens who are just trying to feel okay in a confusing, scary world.
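To make that feedback loop concrete, here’s a toy Python sketch of an engagement-weighted recommender. It’s not any platform’s actual algorithm (the topics, weights, and update rule are all invented), but it shows how a feed that compounds on engagement drifts toward whatever a user already interacts with.

```python
import random

# Toy model (invented, not any platform's real system): each time the
# user engages with a topic, that topic's weight grows, so future
# recommendations skew further toward it.
TOPICS = ["houseplants", "calligraphy", "mental health memes"]

def recommend(weights: dict) -> str:
    """Pick a topic with probability proportional to its current weight."""
    return random.choices(TOPICS, weights=[weights[t] for t in TOPICS])[0]

weights = {topic: 1.0 for topic in TOPICS}
for _ in range(1000):
    topic = recommend(weights)
    # Suppose the user reliably likes/comments on just one topic...
    if topic == "mental health memes":
        weights[topic] *= 1.05  # engagement compounds the weight

total = sum(weights.values())
for topic in TOPICS:
    print(f"{topic}: {weights[topic] / total:.0%} of the feed")
# After enough iterations, one topic crowds out the rest: an echo chamber.
```

The unsettling part is that nothing in the loop is malicious; the drift toward a single bubble falls straight out of optimizing for engagement.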

It’s this echo chamber effect created by “the algorithm” that worries me most. It certainly isn’t limited to mental health discourse: social and political echo chambers exist all over the internet. In their article, Laquintano and Vee describe how “the algorithm” affected the circulation of political information ahead of the 2016 election. These spaces can similarly serve to promote misguided ideologies (such as glorifying cults).

A political cartoon showing a modern example of how social media creates echo chambers. Illustration by Robert Ariail on postregister.com.

Generally though, echo chambers of any kind do one thing best: they echo. They repeat the same few ideas and opinions over and over and over again. And when those ideas are harmful, bad things happen. Real-world problems start to occur, and perhaps just as importantly, young people who’ve fallen prey to this algorithmic shepherding are prevented from seeing that there are other parts of life—online and off—that are better than this. Even beautiful. This isn’t all there is. Some things matter way more than the circumference of your wrists.

I don’t have a solution to this shepherding problem. Do we need more content censorship so that harmful information never ends up online in the first place? Or is that an infringement upon free speech? Should we “dial back” how aggressively the algorithm picks up on browsing patterns and herds us into groups? I don’t know. But I’m confident that we could all benefit from stepping outside our online bubbles, even if we don’t think we’re in a harmful or hateful space. Perspective is key: your slice of the internet is never all there is. The internet can be a tool for good, if we use it that way.

Vaccinations, Public Health Rhetoric, and Snapchat Stories: How Online Writing Has Affected Vaccination Efforts

If you live in Ohio and are currently in the Columbus area, you may know the struggle of getting a COVID vaccine. Just look at the map below to see how appointment unavailability is concentrated in Columbus. Compared to other large population centers in Ohio, Columbus is by far experiencing the most shortages. Even with places like the Schottenstein Center having delivered over 79,000 vaccine doses, the demand for vaccination in the Columbus area is higher than the supply.

A map of Ohio that highlights vaccine appointment availability.

But why is this? As discussed in Week 12, the pandemic has caused an increase in coalitions and relational literacies around health, and this issue is no exception. We in Columbus, especially those of us who attend OSU, are lucky to have a close relationship to health information via the Wexner Medical Center. They provide so much information on health and wellness, and many students and alumni value their writing greatly.

With health information so strongly integrated into the Columbus community, it seems to me there has been an even greater response to vaccination. Even in my personal communities, everyone I know has booked a vaccine appointment or is actively trying to book one. As OSU students and community members, we are more aware of how important it is to get vaccinated as quickly as possible, and we therefore have a much higher demand for vaccine appointments.

The greater appointment availability outside Columbus may be more a reflection of trying to support rural and minority communities, who are especially vulnerable to COVID-19 outbreaks. This availability will hopefully help rural communities get over some of the vaccine hesitancy present there. Unlike in Columbus, many people in rural communities may not be able or willing to take a vaccine appointment time that interferes with their work or life schedule. There will also need to be a bigger push of public health writing and rhetoric to decrease vaccine hesitancy, and it will need to target the specific fears and hang-ups each community has.

Unfortunately, this lack of appointments means people like me, a 22-year-old college student, have a much harder time accessing vaccines. We want to be able to celebrate our graduations safely, but that means we need to be vaccinated within the next week, if it’s not already too late. Fortunately, social media has helped many people access the vaccine in an alternative way. While you shouldn’t be posting your vaccine card, posting about getting a vaccine is a way of showing support for a public health issue. It also makes others aware of how they can get vaccinated. I personally found access to a vaccine through social media. While it may be used to push anti-vax rhetoric, social media also has the power to get us back to normal even faster.

Let’s be genuine about how we (re)write histories

When I say AAC (augmentative and alternative communication) device, what comes to mind? Is it the device? The technology? …Or the human who uses it?

Okay, I admit that was a bit unfair of me. I mean, I literally put “device” right in the question; it’s only natural for you to have thought of that first. But the point stands nonetheless: sometimes even the companies who design, make, and sell these pricey communication technologies think about the person second (or third, or fourth, or last) and the device first.

Don’t just take my word for it. Meryl Alper draws attention to this issue in her think piece on the development of AAC devices. Notably, she says,

“[T]he history of AAC sheds light on the inexorable, but understudied links between the history of communication technologies and disability history… Individuals with various disabilities need to be recovered from and rewritten into the history of how communication technologies are designed, marketed, and adopted.”

When she says technologies, she refers to all means of writing and communicating. Not just the ones that are augmentative or alternative.

And when she says history, she’s not just talking about days of yore. This is salient—this is now.

But this issue obviously isn’t isolated to AAC. Companies, researchers, and consumers consistently and persistently “forget” the humans behind the technologies.

If you’ve ever accidentally spent hours perusing #wokewashing, you know how real and fraught this product-over-person mentality is. (And just to address the students and teachers real quick, these issues crop up even in the classroom. Check this article out.)

Putting the human back into anything that has been systematically scrubbed of specific people’s presence is no easy job. So how do we go about “recovering” and “rewriting” like Alper suggests?

Feuding and Feminism: The Hidden Lives of Virtual Assistants

In our modern times, virtual assistants such as Alexa, Siri, and Google Assistant are ubiquitous. Beyond the basic concerns about surveillance many people have, should society also be worried about our machines reflecting the worst parts of humanity?

The British publication The Independent recently published an article suggesting that Google Assistant might be subtly casting shade on Apple’s Siri. It’s not surprising that the developers of one app might program their algorithms to answer questions about their competitors in a less than flattering way. But is it really necessary to equate “rats” with “Siri”?
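For illustration only, here’s a minimal Python sketch of how a hand-written response table could produce this kind of shade. This is a made-up toy, not Google’s actual implementation, and the prompts and replies are invented.

```python
# Toy sketch (invented, not Google's actual code): a hand-written lookup
# table is all it takes for an assistant to shade a competitor.
CANNED_RESPONSES = {
    "what do you think of siri": "I try not to talk about the competition.",
    "tell me about rats": "Rats? Funny, that reminds me of Siri...",
}

def respond(utterance: str) -> str:
    """Normalize the utterance and return a canned reply, if one exists."""
    key = utterance.lower().strip("?!. ")
    return CANNED_RESPONSES.get(key, "Sorry, I don't understand.")

print(respond("What do you think of Siri?"))
# -> "I try not to talk about the competition."
```

The point of the sketch is that nothing sophisticated is required: a few hard-coded strings are enough to give a supposedly neutral assistant a point of view.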

In “Asking More of Siri and Alexa: Feminine Persona in Service of Surveillance Capitalism,” Heather Woods explores the idea of Siri and Alexa as electronic iterations of female stereotypes. Reading about a potential feud between virtual assistants raises the question of whether this feminization of inanimate objects has gone too far. After all, the feuding female is not a new stereotype, as the many iterations of the Real Housewives of… television franchise can attest.

Virtual assistants, at their core, were designed to enable us to gain back time in our busy daily lives. In many ways, they have achieved this goal. Who doesn’t love being able to ask Siri to add eggs to the grocery list while simultaneously completing household chores? When used for these purposes, virtual assistants are a godsend for millions of people.

But at some point, society will need to grapple with whether or not the darker aspects of these virtual assistants are worth the convenience to our everyday lives. Do we really need Google Assistant to tell us how annoying Siri can be?

Wow, this is so sad…

In 2018, digital rhetoric scholar Dr. Heather Suzanne Woods wrote a scathing article on our misplaced trust in artificial intelligence virtual assistants. Paradoxically, the title, “Asking More of Siri and Alexa,” reflects the opposite of what the article suggests. And she is far from alone in concluding that we need to be wiser about asking anything of technologies such as these: the nodes that comprise the Internet of Things, while marketed as product-service hybrids intended to make things easier, more often add complexity to already complicated lives, making people anxious, overloaded, and unable to cope with an excess of data. But it seems humanity might have bigger fish to fry than our chronic information pathology.

Dr. Shoshana Zuboff keys us in to these concerns in her book The Age of Surveillance Capitalism, the ultimate claim of which is that by continuing to engage in technophilia, or if not technophilia then techno-ambivalence, society is slipping toward a weakening of autonomy, privacy, and individual decision-making powerful enough to threaten our most prized institution: democracy.

Meme of an anime man labeled Silicon Valley, gesturing to a butterfly labeled […]

Surveillance capitalism, like all other forms of capitalism, evolved by claiming something not yet a part of the market dynamic. Older forms of capitalism claimed natural resources, land, and labor as commodities to be sold and repurposed. Surveillance capitalism claims data, but not just any data. The data interests of surveillance capitalism lie in the private lives of technology users: What do people say, to whom, and how? Where do they go, with whom, and how? What do they buy, for what, and how? This raw data can then be transformed into metadata profiles used to super-target individuals, nudging people toward actions that serve commercial interests.
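As a thought experiment, here’s a minimal Python sketch of that pipeline: raw behavioral events rolled up into a profile, which is then used to score how well a targeted nudge fits a person. Every name here (the event fields, the categories, the scoring rule) is invented for illustration; no real platform works this simply.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Profile:
    # Aggregated "metadata": counts of behavioral signals per category.
    interests: Counter = field(default_factory=Counter)

    def ingest(self, event: dict) -> None:
        # Each raw event (a message, a location ping, a purchase) adds signal.
        self.interests[event["category"]] += 1

def score_ad(profile: Profile, ad_category: str) -> float:
    """Crude relevance score: the share of events matching the ad's category."""
    total = sum(profile.interests.values()) or 1
    return profile.interests[ad_category] / total

profile = Profile()
for event in [
    {"kind": "message", "category": "fitness"},   # what people say
    {"kind": "location", "category": "fitness"},  # where they go
    {"kind": "purchase", "category": "cooking"},  # what they buy
]:
    profile.ingest(event)

print(f"{score_ad(profile, 'fitness'):.2f}")  # 0.67: this user gets the fitness nudge
```

Even this toy version shows the asymmetry Zuboff worries about: the person generates the events, but the profile, the scoring, and the nudge all belong to someone else.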

The common refrain of “I’d rather see something I’m interested in” is enough to assuage the creeping anxiety most of us feel while scrolling Instagram, ignoring pointed ads for a product mentioned in passing to a friend or partner. But this surplus accumulation of data goes far beyond nudging users toward purchases. In 2016, Cambridge Analytica used this same metadata to make political predictions about people. In an age of smart cities, ubiquitous computing, and quantified selves, digital platforms become the battlegrounds of our most pressing struggles: freedom from government surveillance, freedom of speech, racial justice, labor relations, and safety from bad actors. When Google can set paid customer lures via Pokémon Go to modify shopping behaviors, when advertisement becomes propaganda, when the digital is instrumentalized to the purpose of instrumentalizing people, that is how democracy finds itself in peril. When all it takes is money to buy the data and the algorithm, new poverties of information emerge that in many ways will reinforce and supersede those of economics. Too much trust is put into large technological organizations to protect the data of millions, the same kind of trust generally reserved for fiduciary relationships.

“[Surveillance capitalism] substitutes computation for politics, so it’s post-democracy,” says Zuboff. “It substitutes populations for societies, statistics for citizens, and computation for politics.”

…Alexa, play Despacito.