Artificial Intelligence (AI) has a rich history dating back to the mid-20th century. According to an article from Harvard’s Graduate School of Arts and Sciences, the field began to take shape in 1956, when the term “Artificial Intelligence” was coined at the conference where the Logic Theorist, a program designed to mimic the problem-solving skills of a human and considered by many to be the first artificial intelligence program, was presented.

Much has changed in the years since. We can now gather enormous amounts of information, far more than humans could ever handle alone, and the technology has proven highly beneficial across various sectors, including banking, marketing, and entertainment. It soon became evident that, even without significant advances in algorithms, the sheer volume of data and powerful computing capabilities enable AI to learn through brute force.

But amid all of AI’s advances and benefits, one topic catches our attention. Since AI is designed to facilitate and emulate human interactions, we instinctively assume it reflects the best aspects of humanity when mimicking them, right? Unfortunately, that’s just not the reality. Instead, we are replicating and amplifying existing prejudices.

This sparks one of the many heated debates about the technology: can artificial intelligence be racist? The more we learn, the more studies we find revealing that biases in technology aren’t just glitches; they’re ingrained from the outset. As these systems become increasingly integrated into our daily lives, it is crucial to explore questions like this one so we can comprehend the implications of bias in these technologies.

To do that, first, we need to understand: what on earth is this thing called Artificial Intelligence?

Understanding Artificial Intelligence

Artificial Intelligence refers to the simulation of human intelligence in machines that are programmed to think, learn, and adapt. These systems can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.

From this definition, AI is generally classified in two ways: by capability and by functionality. Classified by capability, there are three types: Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Superintelligence. Classified by functionality, there are four: Reactive Machines, Limited Memory Machines, Theory of Mind, and Self-aware AI.

AI based on capabilities

  • Artificial narrow intelligence (ANI): the form of AI most widely used today, encompassing even the most advanced systems developed to date. It can autonomously perform specific tasks using human-like capabilities but is constrained to the tasks it is programmed for.
  • Artificial general intelligence (AGI): still only a theoretical concept, AGI would be a system capable of learning, perceiving, understanding, and functioning autonomously, mirroring human capabilities comprehensively.
  • Artificial superintelligence (ASI): also still theoretical, ASI would not only mirror the multifaceted intelligence of humans but also excel at every task thanks to vastly superior memory, accelerated data processing, and advanced decision-making capabilities.

AI based on functionalities

  • Reactive machines: representing an early stage of AI development with restricted capabilities, these machines simulate the human mind’s ability to respond to stimuli but possess no memory-based functionality, so they cannot learn from past experience.
  • Limited memory machines: machines that not only operate reactively but also possess the ability to learn from past data to inform decision-making.
  • Theory of mind: currently conceptual or in early development, this type aims to understand and interact with other entities by discerning their emotions, beliefs, and thought processes.
  • Self-aware AI: existing only in theoretical discussions, this type would not only mirror human cognition, comprehending and expressing emotions in its interactions, but also possess its own emotions, needs, beliefs, and potential desires.

Now that we’ve explored different types of AI technologies, it’s important to delve into their inherent challenges and the crucial need for improvement. This exploration is essential for determining whether AI can perpetuate biases, leading to discussions about the ethical implications of AI.

What is bias in Artificial Intelligence?

Bias in Artificial Intelligence (AI) refers to systematic errors or inaccuracies in AI models that result in unfair outcomes, typically reflecting the prejudices or stereotypes of the data used to train the model.

As we have already seen, AI is a technology created by humans and, for now, it lacks consciousness and intentionality. These systems learn from data, which means they can exhibit biased or discriminatory behaviour depending on what they are trained on.

The biases embedded in technology are more than mere glitches; they’re baked in from the beginning. They are structural biases, and they can’t be addressed with a quick code update. – Meredith Broussard, research director at the NYU Alliance for Public Interest Technology.

Below, we explore a few of the factors that contribute to this problem.

Causes and consequences of AI bias

01. Implementation and deployment

The way an AI system is implemented and deployed can itself lead to biased outcomes. For instance, if an AI system used in hiring disproportionately favours certain demographics due to biased criteria, it can perpetuate discriminatory hiring practices.

Here’s a practical example: in 2014, Amazon developed AI resume-screening software to process tens of thousands of job applications a day efficiently. However, the project, which aimed to distinguish interview-worthy from non-interview-worthy resumes, ran into significant problems: the system developed a troubling tendency to reject resumes from women while favouring those from men with similar qualifications. The bias persisted despite Amazon’s attempts over two years to rectify it, and the project was ultimately abandoned in 2018.

02. Algorithmic bias

Even with unbiased training data, algorithms can still introduce bias. The major sources of algorithmic bias stem from the design decisions made by developers or from unintended consequences of optimisation processes. These decisions include selecting which variables to include, how to weigh them, and defining the criteria for success.

In recruitment, for instance, if a hiring algorithm places undue emphasis on educational background from elite institutions, it may inadvertently favour candidates from privileged socio-economic backgrounds, thereby reinforcing existing inequities. Similarly, a credit-scoring algorithm that overweights repayment history might disproportionately disadvantage individuals from marginalised communities who have historically faced financial instability. The sketch below illustrates how a single weighting decision can shift outcomes between groups.
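To make this concrete, here is a minimal sketch, with entirely invented applicants, weights, and threshold, of how the weight a designer assigns to elite-institution degrees can flip which group is favoured. It does not reflect any real system; it only isolates the design decision itself:

```python
# Hypothetical illustration: how one weighting decision in a hiring score
# can shift selection rates between two applicant groups.
# All groups, applicants, weights, and thresholds are invented for demonstration.

applicants = [
    # (group, attended_elite_school, years_of_experience)
    ("A", 1, 3), ("A", 1, 5), ("A", 0, 8), ("A", 1, 2),
    ("B", 0, 7), ("B", 0, 4), ("B", 1, 6), ("B", 0, 9),
]

def score(elite, experience, elite_weight):
    # The value of elite_weight is a design decision, not a property of the data.
    return elite_weight * elite + 1.0 * experience

def selection_rate(group, elite_weight, threshold=6.0):
    members = [a for a in applicants if a[0] == group]
    selected = [a for a in members if score(a[1], a[2], elite_weight) >= threshold]
    return len(selected) / len(members)

for w in (0.0, 5.0):  # no emphasis vs. heavy emphasis on elite degrees
    print(f"elite_weight={w}: "
          f"group A selected {selection_rate('A', w):.0%}, "
          f"group B selected {selection_rate('B', w):.0%}")
```

Run with these invented numbers, group B is favoured when experience dominates, and group A is favoured once elite degrees are heavily weighted, even though the applicant pool never changed.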

03. Lack of diversity in development teams

The design and development of Artificial Intelligence systems are complex processes that require diverse perspectives to ensure fair and unbiased outcomes. However, the reality is that the teams responsible for creating these systems often lack diversity, which can lead to significant oversight regarding biases. This homogeneity can have profound and detrimental effects on the fairness and inclusivity of AI technologies.

04. Historical inequities

If the training data contains biases, such as historical prejudices or underrepresentation, AI systems can learn and perpetuate them. Facial recognition is a well-documented example:

→ According to a study by the National Institute of Standards and Technology, facial recognition technology shows error rates that are 10 to 100 times higher for Black or East Asian individuals compared to white individuals.

→ In another instance, a man of Asian descent in New Zealand struggled to have his passport photo approved automatically because an AI program consistently registered his eyes as closed.

→ Finally, a graduate student at Stanford University asked an image-generating AI program for a photo of an “American man and his house” and received an image of a pale-skinned person in front of a large, colonial-style home. Asking instead for an “African man and his fancy house” produced an image of a dark-skinned person in front of a modest mud house.
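As a rough illustration of how such disparities are quantified (not a reconstruction of the NIST study), one can compute an error rate, here a false non-match rate, separately for each demographic group in a labelled test set. The groups and records below are invented:

```python
# Hypothetical illustration of comparing error rates across demographic groups.
# Each record: (group, is_genuine_match, system_said_match). All data is invented.
results = [
    ("group_1", True, True), ("group_1", True, True),
    ("group_1", True, False), ("group_1", True, True),
    ("group_2", True, False), ("group_2", True, True),
    ("group_2", True, False), ("group_2", True, False),
]

def false_non_match_rate(group):
    # Of the genuine matches in this group, how often did the system miss them?
    genuine = [r for r in results if r[0] == group and r[1]]
    misses = [r for r in genuine if not r[2]]
    return len(misses) / len(genuine)

for g in ("group_1", "group_2"):
    print(f"{g}: false non-match rate = {false_non_match_rate(g):.0%}")
```

A large gap between the per-group rates is exactly the kind of disparity the NIST study reported, at vendor scale and with far larger datasets.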

Future directions

Decades after this technology was first introduced, efforts to reduce AI bias are underway. Researchers and organisations are developing frameworks and guidelines for ethical AI development. One example is IBM’s AI Fairness 360 toolkit, which aims to detect and mitigate bias in machine learning models throughout the AI application lifecycle. The objective of frameworks like this is clear, and one we truly hope they achieve: to emphasise fairness, transparency, accountability, and inclusivity in AI development and deployment.
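As a rough sketch of how such a toolkit is applied, the example below uses AI Fairness 360’s documented dataset, metric, and preprocessing classes; the tiny DataFrame, column names, and group encodings are placeholders rather than a real dataset:

```python
# Sketch using IBM's AI Fairness 360 (pip install aif360).
# The DataFrame, column names, and group encodings are placeholders.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

df = pd.DataFrame({
    "sex":   [1, 1, 0, 0, 1, 0, 1, 0],   # 1 = privileged group (placeholder encoding)
    "score": [7, 8, 6, 5, 9, 4, 6, 5],
    "hired": [1, 1, 0, 0, 1, 1, 1, 0],   # favourable label = 1
})
dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])

privileged, unprivileged = [{"sex": 1}], [{"sex": 0}]
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unprivileged,
                                  privileged_groups=privileged)
print("Disparate impact:", metric.disparate_impact())  # 1.0 would mean parity

# One of the toolkit's mitigation algorithms: reweigh examples before training.
rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
transformed = rw.fit_transform(dataset)
```

Reweighing is one of several mitigation algorithms the toolkit offers at the pre-processing stage; others intervene during model training or on the model’s outputs.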

Best practices

While many of the challenges highlighted earlier in this article may seem beyond our immediate control, there are actionable steps we can take now to foster fairer AI systems. Embracing these measures is crucial if we are to harness the potential of AI responsibly.

Below are some key takeaways we have put together to help you with that.

Key takeaways

  • Ensure a diverse data representation. Make sure that your datasets are diverse and representative of the populations they aim to serve. Include data from various demographics, geographic regions, and socioeconomic backgrounds to mitigate biases.
  • Adopt ethical frameworks. Follow ethical guidelines such as those outlined by IBM’s AI ethics initiatives, discussed above.
  • Address the lack of diversity in development teams. Diverse teams bring varied experiences, viewpoints, and cultural insights that are essential for developing AI systems that are fair, ethical, and inclusive. They can identify biases early in the development process and ensure that AI solutions benefit everyone equitably.
  • Test continuously for bias. To reduce bias, teams should continually test hypotheses as new data is collected, for example by developing checks that detect biased patterns or outcomes in the AI’s decision-making process (see the sketch after this list).
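Here is a minimal sketch of one such recurring check, applying the common four-fifths-rule heuristic to each new batch of decisions; the group labels, threshold, and batch data are illustrative assumptions, not a standard:

```python
# Sketch of a recurring bias check run on each new batch of model decisions.
# The four-fifths (80%) rule is used here as a heuristic only; group labels,
# threshold, and data are illustrative assumptions.

def selection_rates(decisions):
    """decisions: list of (group, was_selected) pairs from one batch."""
    rates = {}
    for group in {g for g, _ in decisions}:
        outcomes = [s for g, s in decisions if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

def bias_alert(decisions, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    times the highest group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if best > 0 and r / best < threshold}

batch = [("A", True), ("A", True), ("A", False),
         ("B", True), ("B", False), ("B", False)]
print(bias_alert(batch))  # e.g. {'B': 0.33...} -> flag for review
```

A non-empty result does not prove discrimination, but it flags the batch for human review before the system’s decisions are acted upon.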

In summary, while artificial intelligence itself is not inherently racist, the systems we create can reflect and perpetuate societal biases. And while there are plenty of reasons to be excited about AI, given its potential to significantly benefit people daily, there are also valid reasons for concern, and we need to talk about them.

Let us remember, AI is designed to facilitate and emulate human interactions, so it should never reinforce outdated human biases or propagate harmful and inaccurate stereotypes. Systems that perpetuate discrimination, particularly against Black, Indigenous, and People of Colour (BIPOC), are not suitable for deployment.

With all of that said, addressing AI bias requires a multifaceted approach involving diverse data collection, rigorous testing, and transparent practices. We strongly encourage you to stay informed and advocate for ethical AI practices. After all, the use of AI must prioritise fairness and inclusion, so that technology assists everyone fairly.

Sources

IBM | Understanding the different types of artificial intelligence
Forbes | 7 Types Of Artificial Intelligence
CNN | AI can be racist, sexist and creepy
The Guardian | Amazon ditched AI recruiting tool that favored men for technical jobs
Reuters | Amazon scraps secret AI recruiting tool that showed bias against women
Issues in Science and Technology | How to Investigate an Algorithm
Nature | AI image generators often give racist and sexist results
BBC News | New Zealander says passport photo rejection ‘not racist’

WE ARE ENOLLA CONSULTING, A HUMAN INCLUSION CONSULTANCY.

We partner with our clients to create efficient, compassionate, and engaged working environments through fostering the power of Human Inclusion. Ready to transform your organisational culture with us?