Why Is Big Tech Policing Speech? Because the Government Isn’t
In this article, the author explores what she calls Big Tech's "policing" of speech. I believe Big Tech has grown so large that it can censor at will without legal consequences. The problem lies with Section 230 of the Communications Decency Act, a provision that allows online platforms offering a public forum to avoid legal liability for posts made by their users. The key word here is platform, and one debate that has taken place in Congress is whether Big Tech companies should be considered platforms or publishers. A platform allows users to post to the site, and in turn the individual user is liable for what they post if legal action is pursued. A publisher is an entity that publishes works and media online under its own name, deciding what does and does not get posted. In turn, the publisher is legally liable for what it publishes.
With this in mind, let’s take a look at the article’s analogy. “Social media sites effectively function as the public square where people debate the issues of the day. But the platforms are actually more like privately owned malls: They make and enforce rules to keep their spaces tolerable, and unlike the government, they’re not obligated to provide all the freedom of speech offered by the First Amendment. Like the bouncers at a bar, they are free to boot anyone or anything they consider disruptive.” (Bazelon) The issue with this analogy is that it implies that because the company is private, it can do what it wants. Without further context, this is mostly true. There is just one problem: Facebook and Twitter consider themselves platforms in order to receive Section 230’s protections, but they also act like publishers trying to maintain credibility, adding banners to posts they disagree with and feeling obligated to crack down on what they deem misinformation. As private companies they are allowed to do this in theory, but modifying the public’s posts and gatekeeping information is the role of a publisher, which does not enjoy protection from legal liability. So, if Facebook and Twitter claim to be platforms but act as publishers, why do they get to keep those legal protections?
A better analogy would be an internet made up of libraries of information. Publishers submit books to libraries with their names attached, putting their reputation and credibility on the line. It is one thing for a librarian helping you check out a book to express an opinion about it, since that is the opinion of an individual, not the library. It is completely different if librarians went through the pages with a Sharpie to scratch out sentences they disagreed with, or added sticky notes stating “Independent fact-checkers say this book could mislead people.” The library has the right to deny publishers or individuals the ability to offer their books there, but it is not the library’s place to add disclaimers or modify a single page or letter of a book’s contents, since it is not the author or creator of any of the books it offers.
Now, let’s delve deeper into censorship and the banners. Censorship is defined as the suppression of speech, public communication, or other information. The key word here is suppression. It has been argued that adding a banner is simply a disclaimer or warning to the viewer, not censorship. I disagree, because the platform is choosing what it believes to be correct and attaching a disclaimer to a work that is not its own. Acting as though the platform’s own credibility were at risk is behaving like a publisher, which actually does have its name and reputation on the line and might want a disclaimer making clear that a piece does not represent the views of the entire publishing house. It is not a platform’s job to warn users unless it considers itself the publisher of the information being consumed, which it claims not to be. A banner intended to warn viewers that the claims may be inaccurate acts as a deterrent, suppressing overall viewership before potential new viewers even hit the play button. Most new viewers who see the banner will automatically be more skeptical of the author’s credibility before watching, which lowers overall interaction with the upload and, in turn, how widely the algorithm recommends it. With fewer views, likes, and comments, the video reaches fewer users’ feeds.
Instead of letting information flow naturally, welcoming users to watch, do their own outside research, and decide for themselves what is disinformation and what is worth sharing, the platforms take on an inherent obligation to act as information police, doing their best to suppress viewership whenever someone with the “wrong” view is interviewed or an unpopular opinion is presented in a positive light.
One example of suppression of speech comes from a John Stossel video called “The New Censors.” In it, Stossel explains how Facebook added a banner reading “Missing Context. Independent fact-checkers say this information could mislead people” below his video about climate change. The warning banner was so effective that some audience members even claimed, “Your story was so unfair, even Facebook tagged it.” Thirty-five seconds into the video, he shows a notification telling him the video is being seen by fewer people because of the Missing Context rating from the independent fact-checker. The reason? First, the fact-checking site quoted statements that were never said in the video. Out of curiosity, Stossel interviewed two of the reviewers from the fact-checking organization Climate Feedback. Both admitted that they had not even watched the video in question. Stefan Doerr speculated to Stossel that Facebook may have flagged the video because Stossel interviewed Michael Shellenberger, who is controversial for his criticism of environmental alarmism. After Zeke Hausfather watched the video, Stossel asked whether the banner was a fair label. Zeke responded, “I don’t necessarily think so; while there’s plenty of debate around how much to emphasize forest management and climate change, your piece clearly discussed that both were at fault here.” Even so, when Stossel later used an email address he was given to appeal the banner, the appeal was denied, and the reviewers he had interviewed told him in a follow-up email that they now stand by Climate Feedback’s decision.
This example reminds me of the video about the danger of a single story. Living under the illusion of a single story, and letting a stereotype define reality, makes it difficult to fully understand the truth beyond the stereotype. The same can be said about information: if only one side of the story is presented fairly, how is a productive conversation possible when one side is given an unequal advantage in being heard? Is it right to flag a video solely because an individual discussed an unpopular opinion, regardless of the context, content, or intention of the interview in question?
This is what is unfair about Big Tech: the companies have been exploiting a loophole in Section 230 while the government has failed to enforce the rules they must follow to continue being considered platforms. Until the government properly enforces those rules and ensures that protected companies actually act as platforms in order to keep their legal protections, this exploit will leave Big Tech with too much unchecked power and no legal consequences. In the end, both media users seeking information and independent journalists willing to hear both sides of an argument, regardless of an opinion’s popularity, are the victims of Big Tech’s power over what information goes on a platform and how far it can spread to new viewers.
Sources: https://www.nytimes.com/2021/01/26/magazine/free-speech-tech.html
John Stossel Video: https://youtu.be/punjBhQG__s
Thank you for the post. I especially enjoyed your library analogy and completely agree with you. I think that Big Tech has gone too far in their censorship, and it makes the future scary. The actions of Big Tech are creating a precedent that will cause speech to be further limited. I also think their actions of censorship are polarizing the country further.
This is wonderful and very detailed. I love how you help us understand the suppression of speech in a different way. The examples provided really helped us understand these dangers and how they may affect people. I love the connection to the danger of a single story. I agree that it would be a big problem for society to function off of a stereotype. People would constantly fall short in others’ eyes, and that affects how they are looked at and treated. A single thing someone mentions should not define an individual. Everyone should draw their own conclusions and always be open to new things.