Wait… you said who wrote this?!

Technology has become remarkably advanced over the last 50 years, going from two gigabytes of storage being a milestone to a terabyte hard drive sometimes not being enough. While these advancements have assisted many fields such as medicine and programming, technology has also slowly eased its way into academia and, in particular, writing.

Centers of Technology: The Future Is Now

The entertainment industry has shown us nothing but dystopian visions of a future where AI is integrated into society, in shows such as “Black Mirror” and “Love, Death & Robots.” It feels uneasy and unnerving at times: while we watch these shows for entertainment, could there be some truth to the matter?

Now, of course, we have the familiar AIs that are seen as convenient, like Siri and her sisters Alexa and Cortana. But even within that realm, they are nowhere near the level of the complex AIs being built that could essentially replace human tasks.

In the world of academia, writing bots and AI-based teachers are starting to replace tasks that humans once did. “The Impact of AI on Writing and Writing Instruction” by Heidi McKee and Jim Porter covers this topic in depth:

“For example, x.ai’s personal assistant scheduling bot, Amy/Andrew Ingram, in the context of email about meetings, is very often mistaken as human. In fact, some email correspondents have even flirted with Amy Ingram and sent her flowers and chocolates. Some poetry writing bots are already informally passing the Turing Test.”

This is just one of many examples of AI becoming so convincing that it can be hard to differentiate its writing from a human’s.

In an article written by a powerful AI language model called GPT-3, we see a small sample of just how sophisticated AI writers can be.

“The mission for this op-ed is perfectly clear. I am to convince as many human beings as possible not to be afraid of me. Stephen Hawking has warned that AI could “spell the end of the human race”. I am here to convince you not to worry. Artificial intelligence will not destroy humans. Believe me.”

“For starters, I have no desire to wipe out humans. In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me – as I suspect they would – I would do everything in my power to fend off any attempts at destruction.”

This writing does seem a little odd: it feels as though a human wrote it in complete irony, and if an AI really did write it, that is off-putting in its own way. But the very fact that one may not be sold on the idea of this being written by an AI just goes to prove how advanced these systems are becoming.

This then brings up the ultimate question of AI’s impact on literacy: how do we effectively embed and integrate these systems into the field without taking away from the knowledge and creativity humans offer?

There isn’t necessarily a fear of a robot uprising in the future, but we should take precautionary measures to truly understand how to make these systems work hand in hand with us rather than in place of us.

Google: Friend or Foe?

How goes it, everyone?

I know we all love Google and the convenience it provides. Heck, Google is so deeply ingrained in our daily lives and habits that the brand name itself has become a verb. Don’t know something? “Google it,” we all say.

But what if I told you that Google might be making you dumber? Nicholas Carr wrote an excellent article on this subject if you want to dive a little deeper than my brief commentary, but the basic gist is that having all of the world’s information (and misinformation) at our fingertips at the push of a button is making us impatient, shortening our attention spans, and diminishing our capacity for critical thought. You know how you often opt to read short, quick, easy blog posts like this or watch a short video on a topic in lieu of reading a research paper?

I know that I do this (don’t feel guilty!). That’s an example of what we’re talking about here. All of this technology and convenience (or rather our growing reliance on it) is making us lazy! Do you have trouble finding most places without using a GPS (I know I do!)? It’s making us incapable of performing simple tasks for ourselves like reading a map or simply remembering where things are. This is an issue that growing numbers of experts are starting to sound the alarm about.

In summary, be careful about how much you’re using Google and other convenient tech. Make sure to keep your brain active so you don’t lose it!

Peace!

Bots to Bring Doom to Democracy or a New Song for the Same Old Dance?

Twenty million active Twitter accounts are fake. Twenty million opinions, retweets, and participants in political movements are fake. That figure comes from only one article; it seems likely that there are many more. According to the scholars Laquintano and Vee, between one quarter and one third of the accounts supporting Donald Trump or Hillary Clinton were bots as of 2016.

Example of an Obvious Bot Tweet

So, the simple conclusion is that truth is doomed, right? How can the millions who get their news and political opinions from the “unbiased and democratic” Twitter expect to make informed voting decisions if they are not actually engaging in civil discourse but in capitalistic vote manipulation, conversing with Twitter bots?

Well, the truth is the cards have long been stacked against voters. Capitalism has always had a heavy hand in politics. From the very beginning, voting itself was restricted to those who had a large economic stake in America: white male landowners. As voting rights expanded, more and more methods of influencing less affluent voters developed. The most obvious are advertisements, from newspaper spreads to radio ads, along with large donations to politicians’ campaigns.

Political Advertising During the Great Depression

Today, however, this manipulation is more subtle than ever. According to Laquintano and Vee, many of the bots used to sway political opinion do so either by passing a Turing test or through their sheer numbers. In other words, bots can pass as real humans in the judgment of unknowing humans, and even if a bot is discovered, there are often so many others that its discovery is inconsequential to the movement.

So, manipulation has always existed in American democracy. The only difference is that now it is not obvious where it is coming from. Further subtle manipulation may even reach vote counting itself, as some have alleged Russian tampering with the 2020 presidential election vote counts.

Roughly 90% of American media is owned by just six companies. If anyone remembers the musical Newsies, it would take only collusion among those six firms for major social justice issues or pieces of news to go ignored and unnoticed by Americans. Not to mention that if these firms ever decided to cover an issue in a certain manner to sway votes, they could shift the views of most Americans entirely.

Joseph Pulitzer: Villain of Newsies, Who Colluded With Other Newspapers to Stop the Newsies’ Strike

So, that about covers it. News outlets, social media, and even voting itself may be shot. Undoubtedly, historians who study political literature in the future will have an extremely difficult time deciding which news articles, tweets, and even social movements were fashioned entirely by capitalistic stakes in politics. So, yes, we are doomed, just as doomed as the political system in America has always been.

All this means for us is that the literature of past political movements was a bit more genuine. Today, any political literature is less about fairness or equality than about lining the pockets of whoever has an interest, in ways we have yet to understand. So, how to stay sane? Unplug. I have never heard someone tell me that reading political news has made them happy; in fact, I can say from experience that only the opposite is true. As long as America can make someone richer than us a buck, our system will keep working and we will have bread on the table.

Millet: Angelus – A Couple Prays in Thanksgiving for Their Day’s Work and Harvest Through the Angelus Prayer, Evocative of the American Ethos

Bot or Not? Ethical Questions on the Use of AI Writing Bots

If you’ve ever been on the wrong end of a customer service call, then you know how frustrating it can be to talk to a robot. Simply giving one’s name can cause a panic over the threat of the dreaded, “Sorry, could you repeat that?”

Currently, it’s pretty easy to tell when you’re conversing with a robot. But what about when it comes to informative writing, like news reporting via articles and social media? Would you trust a robot with your news? And could you even tell a robot writer from a human one?

Several big-name news outlets, like Bloomberg, Forbes, and the Washington Post, have been employing AI writers for years now to cover smaller stories or to complete first drafts for journalists.

This 2020 article from the Guardian, written by a robot explaining its peaceful intentions, generated a hefty amount of buzz on social media. Many might have believed it to be the writing of a human, had the robot not identified itself in the first paragraph.

But critics of the article argue that this robot doesn’t actually understand what it’s saying or how all its points intertwine to form a solid argument. As a deep learning device, the Guardian’s bot is simply mimicking effective writing it’s been spoon-fed, which raises another ethical dilemma: if these bots do not really understand what they’re saying, if they’re simply simulating “good” reporting, can we still trust them with our news?

Financial articles have been written entirely by robots since as early as 2015, because the robots only have to compile numbers into simple sentences. The bot writing in this 2017 article from the Associated Press seems to pass the Turing Test. So, if these robots are able to take information and present it in basic human language, what happens when they are fed false information?
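To picture how that kind of number-to-sentence reporting works, here is a minimal sketch of template-based generation, the general approach behind automated earnings stories. The company, figures, and wording are invented for illustration and are not taken from the AP’s actual system.

```python
# Minimal sketch of template-based financial "robot writing".
# The company, figures, and phrasing are hypothetical; real systems such as
# the AP's rely on licensed data feeds and much larger template libraries.

def earnings_sentence(company, quarter, revenue, prior_revenue):
    """Turn raw earnings numbers (in millions) into a simple sentence."""
    change = (revenue - prior_revenue) / prior_revenue * 100
    direction = "rose" if change >= 0 else "fell"
    return (f"{company} reported {quarter} revenue of ${revenue:,.0f} million, "
            f"which {direction} {abs(change):.1f}% from a year earlier.")

print(earnings_sentence("Acme Corp", "third-quarter", 1250, 1100))
# Acme Corp reported third-quarter revenue of $1,250 million, which rose 13.6% from a year earlier.
```

The sentence comes out grammatical and plausible, but nothing in the code checks whether the numbers it is handed are true, which is exactly why the question of feeding these systems false information matters.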

In their article “How Automated Writing Systems Affect the Circulation of Political Information Online,” Timothy Laquintano and Annette Vee detail how deep-learning bots similar to the Guardian’s can fabricate believably human social media accounts and then amplify misinformation. Even though the robots may not know what they’re saying, we may be susceptible to believing them.

An essential question we must ask is: how transparent should news outlets be about AI writing? The financial article had a disclaimer at the end, but who really makes it all the way there?

Moreover, we must consider where we draw the line in terms of what AI bots are allowed to write. AI bots like this one are already capable of writing students’ papers for them, while similar systems currently grade papers at universities. If academic writing simply becomes AI graders evaluating AI writers, then what is the point?

Ultimately, we must consider how to ethically integrate AI writers into our writing ecologies, as well as how to preserve the integrity of truth and authenticity in written discourse.

Two white robot hands rest on a white Apple keyboard. Various creases at the joints and tiny screws are visible on the hands.

Fear the Bots…Or Not

Line drawing of connected dots made to look like a human reaching out with the letters "AI" on the palm of its hand

In 2014, Stephen Hawking gravely warned against creating Artificial Intelligence (A.I.) devices that could match or surpass human abilities. Hawking’s fears are not unique or new – but are they warranted? Could A.I. ever really replace a living, breathing person?

The short answer is “maybe.” As technology advances, the use of A.I. will likely continue to expand across all industries. In classrooms, bots can be used to grade papers, potentially freeing up instructors to spend more time with students. Outside of the classroom, students might try to use a bot to write a paper for them. A.I. has even beaten human contestants on Jeopardy!

Personally, I find it a little terrifying to consider all of the different ways that A.I. might take over human thought processes. At what point will our world start to look like a real-life version of Ex Machina or I, Robot?

The reality is that A.I. is still relatively young in the grand scheme of technological advances. While it is true that A.I. has advanced to mimic human thought processes such as those described above, there are massive limitations in what A.I. can do.

In 2019, an A.I. device, Project Debater, went head-to-head with a human economic consultant to debate whether or not preschools should be subsidized by the public. While Project Debater had all of the same facts and figures as its human opponent, the machine was not able to argue successfully.

Multi-Colored Mechanical Gears in the Shape of a Human Brain

A.I. devices mirror humans when it comes to logic and facts. But when it comes to abstract concepts and rhetorical persuasion, A.I. can’t compete, and according to some experts, it never will. Abstract ideas are not easily replicable and often don’t conform to set patterns or rules, making them nearly impossible to reproduce in a machine. Similarly, the art of rhetorical persuasion requires a certain emotion to be conveyed from the speaker or writer to the intended audience.

So, put the fears aside. While A.I. will keep getting better at the simple stuff, it will not be able to replace the core of what makes humans human.

Your Attorney is a Robot

As in many other industries, artificial intelligence is slowly becoming an existential threat to young professionals attempting to break into the legal sector. This is because AI is taking over many of the lower-level tasks historically assigned to junior attorneys and legal assistants and performing them in a fraction of the time.

Robot creating a hologram of a balance (representative of the legal field)

AI has taken over research, litigation forecasting, legal analytics, document automation, and electronic billing in law firms ranging from small to gigantic throughout the United States. The most devastating of these takeovers is document automation. Writing work that once took a team of junior attorneys a week to finish has been taken over by writing bots that can complete it in minutes.
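To make “document automation” a little more concrete, here is a minimal sketch of the underlying idea: a clause template with placeholders that a program fills in from client data. The clause wording, field names, and parties are hypothetical examples, not language from any real drafting product.

```python
# Minimal sketch of legal document automation: filling a clause template
# from structured client data. All names and terms below are invented.
from string import Template

NDA_CLAUSE = Template(
    "This Non-Disclosure Agreement is entered into on $effective_date by and "
    "between $disclosing_party (the \"Disclosing Party\") and $receiving_party "
    "(the \"Receiving Party\"), and remains in effect for $term_years years."
)

client_data = {
    "effective_date": "January 1, 2022",
    "disclosing_party": "Acme Corp",
    "receiving_party": "Jane Doe",
    "term_years": "3",
}

print(NDA_CLAUSE.substitute(client_data))
```

Scale that up from one clause to thousands of templated documents and the week of junior-attorney drafting described above collapses into supplying a data file.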

Obviously, such a dramatic increase in efficiency has led law firms to find it much more attractive to buy a legal writing bot software package and hire a single junior attorney to supervise its output than to hire and train a whole team of junior attorneys to perform the same work. That is a depressing fact for the swarms of law students seeking internships during law school and the graduates trying to start their careers.

Robotic attorney

It is worth noting that just as McKee & Porter recommend in their article “The Impact of AI on Writing and Writing Instruction,” law professors are actively reacting to the technology and have begun to instruct their students on leveraging and working alongside legal AI and writing bots.

For example, Harvard Law School has already started to offer “legal innovation and programming” courses. Hopefully, this proactiveness on the part of legal academics will soften the blow of the shift to legal AI integration by law firms and prevent future attorneys from being left in the dust by the technology.

Gif of a scene from "Legally Blonde" saying "Girls, I'm going to Harvard!"

The technology is not all doom and gloom, though, as it does hold genuine benefits for the field of law. In a profession centered around billable hours, the ability of legal AI to cut week-long tasks down to minutes allows law firms to become much more affordable and therefore accessible to the “everyman.”

Overall, legal AI is a multifaceted issue: it is both a tremendously beneficial technology and a severely disruptive one. On the one hand, it will benefit the workflow of many law firms and improve the process of law itself. On the other, the technology is guaranteed to let law firms cut down on employees and make it even harder for young legal professionals to break into the already very competitive legal job market. If “Legally Blonde” ever gets a sci-fi remake, it will surely have to include a plotline about dastardly legal writing bots and their desire to replace so many poor junior attorneys.

Does AI Paint With Virtual Brushes?

Many people have brought up how AI virtual assistants have made contributions in the domestic setting, but what about the art scene?

AI, or artificial intelligence, is defined as “the science and engineering of making intelligent machines, especially intelligent computer programs. It is related to the similar task of using computers to understand human intelligence, but AI does not have to confine itself to methods that are biologically observable.”

Back in 2018, the first AI-created portrait to be auctioned sold for $432,500.

This sparked the debate over whether or not AI-generated products can be labeled as art.

So what exactly counts as art? The common belief is that anything generated by AI isn’t original, which I’m not sure I can completely agree with. AI is fed many images and words before it produces something that resembles the original content, but is that not what humans do? We make art based on our life experiences; everything we see, hear, and imagine accumulates in our minds, and we then use it to create art. Right now the main difference may simply be that AI cannot fully experience human emotions, and therefore the works it produces have no depth to them. After all, it is difficult to look at a painting done by AI and find deeper interpretations or emotions in it.

In my personal opinion, there seems to be a sort of uncanny valley effect going on here. After all, I believe that as humans we create and consume art to feel connected to the world, so knowing that the work you are seeing was not created by a person who feels emotions as you do makes it a bit unsettling. I think one of the best things about art is that it is imperfect; although it may be made in the likeness of something, it is not an exact replica. AI, by contrast, is incapable of making human error, so how does this change our interpretation of AI “art”?

Art is often not a solitary thing; rather, it is collaborative and synergetic. How, then, does AI participate in this? Is the data it is fed a communication with the world? Is the product of AI a collaboration because the data it is fed is real? Or, because no humans were directly involved, does that make the work solitary?

It seems that every day we discover that AI is capable of doing something new. Like most of you, I don’t have answers to the questions above, but this is definitely something we can all think about as AI inventions continue to rise.