Bias In The Development of AI (Diary of Systemic Injustice)

Britannica defines AI as “artificial intelligence (AI), the ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings.” AI is currently being implemented in countless technology platforms, making everyday functions easier and faster. AI systems are designed to use data to perform tasks and make decisions. But some uses of AI, like facial recognition, are highly susceptible to producing discriminatory outcomes.

Joy Buolamwini, a computer scientist of Ghanaian descent, founded the Algorithmic Justice League to combat the discriminatory outcomes of AI systems used today. In her TIME article ‘Artificial Intelligence Has a Problem With Gender and Racial Bias. Here’s How to Solve It’, she explains that facial recognition systems are not tested sufficiently on people who are not light-skinned men. The three photos below demonstrate the kinds of misinterpretations these systems make when attempting to recognize Black women; the company that developed each system is shown beneath the photos. The systems falsely identify the women as male and label an afro as a wig. Yet those same systems performed extremely well for light-skinned men, with an error rate of only 1%, while for dark-skinned women the error rate skyrocketed to 35%.
[Photos: examples of these misclassifications, with the company that developed each system shown beneath the image.]
What makes this problematic is that these AI systems are used for a wide range of functions: facial recognition for surveillance, forensics, advertising algorithms, medical data analysis, and more. If a system cannot accurately differentiate between dark-skinned persons, or misidentifies them as individuals, dark-skinned people are at risk of being targeted. A surveillance system may flag a dark-skinned person as a threat. AI-assisted forensic methods may fail to distinguish members of minority groups precisely, implicating the wrong people. There have even been recent complaints about social media algorithms favoring influencers who are not dark-skinned. These issues stem in part from the fact that most people in the tech industry are White men who do not take into account that these systems will affect different groups of people differently.
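To make the kind of disparity described above more concrete, here is a minimal, hypothetical sketch of how an error rate can be broken down by demographic group rather than reported only in aggregate. The records are invented illustrative data, not Buolamwini's actual dataset or methodology; the point is only that an overall accuracy number can hide very different error rates for different groups.

```python
# Hypothetical sketch: comparing a classifier's error rate across demographic
# groups. The records below are invented for illustration only.
from collections import defaultdict

# Each record: (demographic group, true gender, gender predicted by the system)
records = [
    ("light-skinned male", "male", "male"),
    ("light-skinned male", "male", "male"),
    ("dark-skinned female", "female", "male"),    # misclassified
    ("dark-skinned female", "female", "female"),
    ("dark-skinned female", "female", "male"),    # misclassified
]

totals = defaultdict(int)
errors = defaultdict(int)
for group, truth, predicted in records:
    totals[group] += 1
    if predicted != truth:
        errors[group] += 1

# Reporting only a single overall accuracy would hide exactly this gap.
for group in totals:
    rate = errors[group] / totals[group]
    print(f"{group}: error rate {rate:.0%} ({errors[group]}/{totals[group]})")
```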

Through the Algorithmic Justice League, Buolamwini digs deeper into these issues and works to prevent gender and racial bias in AI systems. These problems can be addressed by making the systems more inclusive and less biased, for instance by testing them on people of all skin tones and genders. Buolamwini also argues that facial recognition systems used by law enforcement should be suspended until their biases have been fully addressed. Another part of the solution is encouraging women and minorities to join STEM fields and take part in developing these new technologies.

britannica.com/technology/artificial-intelligence

time.com/5520558/artificial-intelligence-racial-gender-bias/

youtube.com/watch?v=UG_X_7g63rY

One thought on “Bias In The Development of AI (Diary of Systemic Injustice)”

  1. Hi! Thank you for this insightful post. When I first learned about artificial intelligence, I thought of it as just a new technology and approached it with excitement and an open mind. Your post made me reflect on my perspective on technology, and I will definitely take social justice into account going forward.
