
Navigating AI’s Ethical Challenges to Become an AI Champion

Reports of massive AI blunders and ethical controversies have become commonplace, almost to the point that the question has become when–not if–another AI issue will arise. This has led some to believe that AI is incompatible with values such as sustainability and human-centeredness. However, these issues are not inherent to the technology.

What we are seeing now is the result of how leaders decided to act when AI excitement was at an all-time high. During AI’s peak, little conversation was happening among top management about how to deal with the ethical and moral dilemmas that have arisen alongside AI’s increased popularity–such as its consequences for employment, the environment, and the trust and power we place in it.

We saw organizations respond in many ways. There were leaders who believed in the technology and immediately jumped on board. On the other hand, there were leaders who were wary of AI and the change it would bring, who felt like diving in would cost them their organization and their people. 

There were also many, many leaders who, placed under immense pressure to drive down costs and keep up with competition, rushed their plans for AI adoption and thus failed to consider the true costs of AI–costs that go far beyond the upfront financial investment in AI tools.

Source: The New York Times

The Situation at Hand: Ethics vs AI

For ethical issues, we need to look no further than the industry that built AI. Within the tech industry, many major players such as SAP, Google, and Microsoft have laid off employees to increase investments in AI. Organizations with sustainability goals have also struggled to meet them as they increase their AI usage: Microsoft set its goal of becoming carbon negative back in 2020, yet due to its investments in AI, its emissions had risen by roughly 30% as of 2023. There is also concern about AI models adopting negative stereotypes and bigoted beliefs; despite anti-racism feedback training, research has found that models from OpenAI, Facebook, Google, and others have become more covert in their racism rather than dismissing racist views altogether.

Nor are these issues isolated cases–we have heard first-hand about the worries organizations face. In our conversations with leaders from various industries, we’ve heard sentiments such as “There’s a [big] people aspect of the AI implementation…that I’m not sure has been considered,” and “[In regard to an AI integration project] I’m not sure how much effort has been put into measuring the change that will come for the people and for the organization.”

In a recent survey by Deloitte, only 22% of leaders felt that their organizations were ready to face talent-related concerns brought about by AI–meaning that over three-quarters of the leaders surveyed felt unequipped and uncertain about how to tackle these employment issues.

At Awaken Group, we’ve heard leaders share that they are hesitant to invest in AI because of concerns regarding their employees’ job security, and that the decisions that come with AI integration do not align with their values. This rings especially true in family businesses that have built a strong, close-knit community across their organization. To choose AI feels like choosing technology over the people that have helped build the company from the ground up.

Leaders may feel like the decision is between AI’s cost savings and potential for value versus its threat to employees, the environment, and the trust in the organization. However, we believe that intentional and holistic AI investment is the key to mitigating these risks while allowing organizations to maximize the value that can come with AI. 


The AI Success Matrix

When it comes to AI adoption, successfully balancing business and ethical risks with high potential for reward comes down to two main factors–intention and investment.

Leaders’ intentions for exploring AI vary. Some may do so because they have succumbed to the pressure or the fear of being left behind, and so they intend to use AI simply as a way to catch up on cost-cutting. Others have a vision of how AI can change the way their organization operates, and they intend to embark on a journey of transformation.

Meanwhile, investment covers all the costs of AI adoption. While many organizations feel the brunt of this lies in financial costs, proper investment also includes the time and manpower involved in developing the necessary expertise, infrastructure, and culture, as well as a thorough governance framework and a robust, cross-functional strategy for approaching AI.

Within this matrix are four quadrants representing four archetypes of leaders, their approaches to AI adoption, and the outcomes that follow from their intention and level of investment.

Source: Awaken Group

AI Avoidant. These leaders are fueled by their fear of AI, focusing on the uncertainty of its results and the difficulties it can bring, and thus invest nothing in AI adoption. In AI Avoidant organizations, it’s not that AI is not used at all. Rather, AI is more likely used at an individual level, with employees turning to ChatGPT or other publicly available tools that let them get tasks done with less time and effort. A potential threat here is the lack of control and governance that management has over the use of these tools, and the possibility of data breaches and mishandling of sensitive information.

AI-Pressured. Alternatively, there are leaders whose fear manifests as the pressure of lagging behind, and thus their intention is to move fast. They value speed of adoption above all, wanting to play catch-up with competitors. This leads them to take a tech-only approach, investing a lot of money in top-of-the-line AI tools yet failing to consider AI’s greater role as a strategic imperative for their organization. Ultimately, their adoption efforts focus solely on trying to buy the right tool. Their organizations have yet to fully unlock the value of AI and may have to backtrack as they run into issues with AI accuracy and employees’ concerns about layoffs due to job redundancy.

AI-Adrenalized. In contrast, there are leaders who have a vision for AI and are overflowing with excitement about the innovations and process improvements the technology can bring. However, in their rush to pioneer, they jump headfirst into AI exploration, only to fall flat when they are faced with ethical issues due to the lack of an AI strategy and of policies to ensure safe and secure use. Their organizations have yet to unlock the value they envisioned because of the gaps in their planning.

AI Champions. These are leaders with a strong vision for AI, who understand not only its potential for value but also its ability to truly transform an organization – and just how difficult navigating that transformation can be. These leaders experience success with AI because of the effort they put into championing buy-in across their organizations, developing a proper governance framework, and rounding up the right people to create and execute a holistic AI strategy.

With planning done right, these organizations are able not only to anticipate potential threats but also to put the right structures in place to address ethical concerns as they arise. For these organizations, AI adoption carves a new path forward, unlocking value not just for the company, but for their employees and customers alike.


Movement along the Matrix

Source: Awaken Group

As illustrated in the matrix, the difference between a successful AI adoption and a failed one ultimately boils down to intention and investment in execution – answering the questions: Why do we want to use AI? And how will we get there?

What does this mean for leaders who realize their adoption falls anywhere but in the winning quadrant? The good news is that movement along the matrix is possible, and that identification of where one’s organization lies is the first step. 

Whether their initial AI plans lacked intention or effort, leaders should work to identify the gaps in their adoption, regroup with top management, and be ready to tackle these issues head on.


Intentional Investment Addresses Ethical Concerns

The ethical issues may be daunting, and at times it can seem that, no matter what, they will always plague the use of AI. What does more intentional investment look like in these situations?

When it comes to the fear around AI’s effects on employment, investing in training to upskill and reskill employees is key to minimizing redundancy. Leaders looking for a quick AI implementation typically leave their employees out of the equation. However, as organizations use AI to unlock new opportunities, their people can create further value in new, more meaningful roles.

When it comes to AI’s impact on the environment, investments in clean energy and green AI are ways to pioneer a sustainable approach. While AI can be instrumental in helping solve climate issues, its heavy consumption of resources has the potential to outweigh its positive impact. As its popularity increases, energy and water consumption continue to rise rapidly, and often without realizing it, organizations use more and more of these resources as their AI use grows. Thus, there is a growing need for responsible usage, accountability from AI firms, and renewable energy-powered data centers.

Lastly, when it comes to the biases inherent in today’s AI models, leaders need to invest the time and effort to create robust AI governance plans and decision-making frameworks that keep people in the loop. Without these, AI tools can be faulty, like the unreliable AI drive-thru ordering project that McDonald’s recently ended. In more extreme circumstances, AI tools can spread misinformation, as when Air Canada’s chatbot falsely promised a bereavement discount to a grieving passenger. As we look to maximize AI’s data processing and prediction capabilities, we must also consider the importance of a human touch in our decision-making–because humans hold the unique capacity to contextualize and to care.

As the pressure to overcome AI’s ethical dilemmas continues to rise, the gut reaction might be to cut AI adoption costs and minimize AI use. In reality, it is an increase in time, effort, and investment–directed toward purposeful, cross-functional adoption–that will allow companies to succeed with AI.

If you recognize yourself as an AI Avoidant, AI-Pressured, or AI-Adrenalized leader, and are keen to become an AI Champion, reach out to us and we can help you get started. 


For those interested in exploring the ethical approach to AI further, stay tuned for our next three articles, which will go deeper into AI and employment, AI and the environment, and AI and bias.