
AI Without Bias: Building an AI System You Can Trust

Artificial intelligence (AI) is enabling businesses to get more out of their data than ever before. With AI’s sophisticated ability to identify patterns and trends in large amounts of data, organizations can make better, quicker decisions that unlock new forms of value. However, can we really trust AI? Is it always going to be right, and if not, what can we do about it?

As AI allows businesses to understand their customers more deeply, it opens the doors to new products and highly personalized customer experiences. Internally, Human Resources teams may use AI to speed up the review of incoming applications and surface key attributes across their applicant pool.

However, what happens when AI decides that trivial characteristics make a potential employee unqualified? Or when AI factors highly sensitive personal information into how a business treats its customers?

AI has also shown real value in critical fields such as healthcare, where it has the potential to help doctors provide timely and accurate diagnoses. AI can aid in the analysis of weather and climate patterns and suggest potential solutions to long-standing issues. Furthermore, its ability to process large amounts of data in a short time makes AI a potentially critical tool in times of crisis, when decisions need to be made quickly.

However, in these incredibly context-driven situations, does AI really have the ability to make the right calls? Can AI-generated treatment plans and evacuation routes take into account the complications of each individual affected?

AI’s Major Pitfalls

AI models are trained on source data, which comes from humans and from the research and data collection that has been done before. As a result, the biases that have existed within our research for decades get fed into the AI tools that are tasked with making decisions today. This points to one of AI’s biggest weaknesses: AI is not a critical thinker. It simply takes in information and treats it as true.

Putting too much trust in AI has had disastrous effects, from ChatGPT and Gemini exhibiting racist and sexist behaviors when used to screen candidates and advertise roles, to the National Eating Disorders Association deploying an AI chatbot that gave callers harmful dieting advice.

Grok, X’s AI chatbot. Image credit: Mashable.

This can be catastrophic to the reputation of any organization, especially as it has become clear that chatbots will freely mislead customers. In 2024 alone, Air Canada’s chatbot misrepresented the airline’s bereavement policy, X’s chatbot Grok falsely accused NBA star Klay Thompson of vandalism, and New York City’s AI chatbot told small businesses it was okay to serve customers cheese that had been bitten by a rat.

But, if we can’t trust AI to tell us the truth or to make the right decisions, what can we do? 

Key Considerations for Leaders Using AI Tools

On the end-user side, this is why it is so important for organizations to keep humans in the loop in decision-making. At the end of the day, AI cannot emulate the compassion, creativity, and contextual thinking of humans. Before integrating any AI solution into an organization, leaders should invest the time to develop thorough governance and security policies covering when AI can be used and how it should be used, and to implement checks and balances that monitor AI usage and its outputs.
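To make the idea of checks and balances concrete, here is a minimal sketch, with entirely hypothetical names and thresholds, of a policy gate that only lets an AI recommendation take effect automatically when the decision is low-impact and the model is confident; everything else is routed to a human reviewer.

```python
from dataclasses import dataclass

@dataclass
class AIRecommendation:
    decision: str        # e.g. "shortlist" or "reject"
    confidence: float    # model-reported confidence, 0.0 to 1.0
    impact: str          # "low", "medium", or "high", assigned by policy

# Hypothetical governance thresholds; real values belong in a reviewed policy.
CONFIDENCE_FLOOR = 0.90
ESCALATE_IMPACT_LEVELS = {"medium", "high"}

def route(recommendation: AIRecommendation) -> str:
    """Decide whether an AI recommendation may be applied automatically
    or must be escalated to a human reviewer."""
    if recommendation.impact in ESCALATE_IMPACT_LEVELS:
        return "human_review"      # consequential decisions always get a person
    if recommendation.confidence < CONFIDENCE_FLOOR:
        return "human_review"      # the model itself is unsure
    return "auto_apply"            # low-impact, high-confidence cases only

# Example: a hiring decision is always escalated, regardless of model confidence.
print(route(AIRecommendation(decision="reject", confidence=0.97, impact="high")))
# -> human_review
```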

Ultimately, the solution lies in how we train and build AI models, and what we train them for. 

For organizations starting to look into AI solutions, whether hospitals, schools, or corporations, it is critical to understand how these models are trained and what they are going to be used for. Leaders in these organizations must draw a clear distinction between decisions that can be made by AI models and those that require human intervention.

Leaders should ask questions about: 1) the quality of the source data, 2) the impact of the decision, and 3) what we are asking the AI to do in relation to its actual capabilities.

1. Quality of data: How robust and unbiased is the existing data set? What types of biases may be prevalent in the data? Does the data leave room for AI to guess or misinterpret?

2. Impact of decision: How important is the decision in question? Does it have potential to greatly change people’s ways of thinking, being, and living? Does it encroach on ethical issues and values?

3. What we are asking of AI: Are we playing to AI’s strengths? Can the decision be made objectively through pattern-recognition or sorting through large amounts of data? Or are we asking AI to do innately human things, like empathize, contextualize, or care?

The answers to these questions provide clarity on the level of human involvement required in certain decisions, and the readiness of AI to take on certain tasks.
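As an illustration only, these three questions could be reduced to a simple scoring rubric. The scales, thresholds, and oversight levels below are hypothetical; a real rubric would be set by an organization’s own governance team.

```python
def oversight_level(data_quality: int, decision_impact: int, task_fit: int) -> str:
    """Map the three leader questions to a required level of human involvement.
    Each input is scored 1 (favourable) to 3 (unfavourable):
      data_quality:    1 = robust, well-understood data ... 3 = known gaps or bias
      decision_impact: 1 = low stakes ... 3 = affects rights, health, or livelihoods
      task_fit:        1 = objective pattern-matching ... 3 = needs empathy or context
    """
    score = data_quality + decision_impact + task_fit
    if score <= 4:
        return "AI may act, with periodic human audit"
    if score <= 6:
        return "AI drafts, a human approves every decision"
    return "A human decides; AI limited to background analysis"

# Example: biased data (3), high-stakes decision (3), human-centred task (2).
print(oversight_level(3, 3, 2))
# -> "A human decides; AI limited to background analysis"
```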

Furthermore, beyond training, companies must invest in continuous monitoring of their AI tools, checking whether the AI continues to do its job well. Routinely, we should ask whether a given role is best handled by AI or whether it should be done by a human instead.
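One lightweight way to operationalize such monitoring, sketched below with hypothetical names and thresholds, is to routinely sample AI decisions, have a human independently re-make them, and flag the tool for re-evaluation when agreement drops.

```python
AGREEMENT_THRESHOLD = 0.85   # hypothetical; a real value belongs in policy

def agreement_rate(paired_decisions: list[tuple[str, str]]) -> float:
    """paired_decisions: (ai_decision, human_decision) pairs for a routine
    sample of cases that a human independently re-reviewed."""
    if not paired_decisions:
        return 0.0
    matches = sum(1 for ai, human in paired_decisions if ai == human)
    return matches / len(paired_decisions)

def review_needed(paired_decisions: list[tuple[str, str]]) -> bool:
    """Flag the tool for re-evaluation when AI and human reviewers diverge."""
    return agreement_rate(paired_decisions) < AGREEMENT_THRESHOLD

sample = [("approve", "approve"), ("reject", "approve"), ("approve", "approve")]
print(agreement_rate(sample))   # ~0.67
print(review_needed(sample))    # True: time to ask whether a human should own this task
```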

Key Considerations for Leaders Developing AI Models

On the side of training and building models, it is imperative for developers to review what data serves as the source of truth of any AI model, to recognize the potential biases within this data, and to take the time to build the bias out of them. In order to do so, developers must consider three things: 1) the source of the data, 2) the scope of the data, and 3) the implications of the use of the resulting AI model.

Deep-dives into where the data comes from will allow developers to identify potential biases that come from the researchers of the original source data. Data that only considers a certain group of people, a certain area, or research that comes from a party with clear vested interests should raise red flags to developers as this data may be insufficient as the sole source of truth for any AI. Identifying the scope and limitations of said data is key in deciding whether the AI model needs to be trained with more diverse data.
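As a rough sketch of what such a review might look like, with illustrative column names, numbers, and thresholds, a developer could compare how groups are represented in the training data against a reference for the population the model will actually serve:

```python
import pandas as pd

# Hypothetical training set and reference population shares (illustrative only).
train = pd.DataFrame({"region": ["Europe"] * 700 + ["Americas"] * 250 + ["Asia"] * 50})
reference_share = {"Europe": 0.10, "Americas": 0.13, "Asia": 0.60}

# Share of each region actually present in the training data.
train_share = train["region"].value_counts(normalize=True)

for group, expected in reference_share.items():
    observed = train_share.get(group, 0.0)
    if observed < 0.5 * expected:  # hypothetical "red flag" rule
        print(f"Under-represented: {group} ({observed:.0%} in data vs {expected:.0%} expected)")
# -> Under-represented: Asia (5% in data vs 60% expected)
```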

Lastly, developers should consider the risks and implications of using the AI model. Questions such as: ‘What will this AI model be used for, and can it sufficiently fulfill its purpose given the data it was trained with?’ and ‘What are the risks associated with the use of this AI model? Is the AI model developed so that these risks have been mitigated?’ challenge developers to consider the end-users of their AI tools and their impact.

Developers might wonder: why go through the trouble? What does it look like when developers fail to take these into consideration?

Developers risk designing their AI model to interpret behavior based on mismatched data, leading these models to make conclusions and decisions that are directly misaligned with strategic objectives. For example, an AI model that uses data mostly from Europe and America is likely to wrongly predict consumer behavior for end-users hailing mostly from Asia. 

On the other hand, they also risk their AI models becoming responsible for more consequential, even unethical, decision-making. Health providers across the US were found to be using a biased AI tool that prevented African American patients from getting necessary healthcare. The tool’s main flaw was that it treated lower past healthcare spending, often the result of poorer access to care, as evidence of a lower need for care.
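The mechanism behind this kind of failure is easy to reproduce. In the hypothetical sketch below, two patients have the same underlying need, but one has had less access to care and therefore lower past spending; a model that treats spending as a stand-in for need will score that patient as needing less care.

```python
# Hypothetical patients: identical clinical need, unequal historical access to care.
patients = [
    {"name": "Patient A", "true_need": 0.8, "past_spend": 12000},  # good access to care
    {"name": "Patient B", "true_need": 0.8, "past_spend": 4000},   # access barriers
]

def proxy_risk_score(patient: dict) -> float:
    """Flawed model: uses past spending as a stand-in for health need."""
    return patient["past_spend"] / 15000

for p in patients:
    print(p["name"], "proxy score:", round(proxy_risk_score(p), 2), "true need:", p["true_need"])
# Patient B is scored as needing far less care, despite identical true need.
```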

The bias of AI models results in weak conclusions at best and, at worst, in recommendations that are potentially harmful.

The Challenge for Humans

At the end of the day, all this is only possible if we humans are ready to challenge the ways we view the world.

The bias that AI holds is a reflection of the biases that we humans have had for decades. As existing source data includes human biases, the key to developing unbiased AI models is for us to break down the biases in our own minds first.

Need help in building your AI governance framework and developing AI policies? Reach out for a conversation.