How AI Poses New Risks and Opportunities for DEI Initiatives
As AI becomes more prevalent on an organizational level, it’s important to be aware that AI can perpetuate and exacerbate human biases. While the rise of AI introduces such risks, it also poses opportunities for embedding DEI initiatives on a systemic level.
As these technologies take hold across organizations, it’s imperative to consider not only their considerable opportunities and capabilities but also their potential consequences.
As is the case with any new technology, proper stewardship is important. But with AI especially, equitable and unbiased programming and oversight are vital, since human bias can easily be perpetuated by an AI algorithm.
The risks of perpetuating bias are recognized by many in the tech community. Meta, Google, and Microsoft, among others, have identified equitable and fair use of AI technologies as a primary goal. AI media company Lumi also has an interesting relationship with DEI initiatives. Colin Kaepernick, former San Francisco 49ers quarterback and civil rights advocate, founded Lumi. For Kaepernick, Lumi is about storytelling and giving people the tools to tell their own stories.
“On my own story — the impact and implications of other people creating their narratives around it and telling it from their perspective — why should that be the case?” Kaepernick commented to TechCrunch. Lumi promotes equitable storytelling, giving people control over their own narratives and fighting gatekeeping in the AI space.
A Coding Perspective
When it comes to equitable and unbiased AI, it’s important to pay attention to creating an unbiased algorithm. In the past, tech companies have fallen short when it comes to combating bias. A classic example of this is facial recognition, which has been found to be race and gender biased.
Coding a completely unbiased AI algorithm can be a challenge, and biases don’t just come from nowhere. Usually, AI perpetuates existing human or societal biases. And sometimes, making AI more inclusive and representative can go too far in the other direction.
Google recently found itself in hot water when its AI program Gemini produced historically inaccurate but diverse photo representations. When prompted, Gemini often refused to produce pictures of white people, even in historically appropriate contexts. For example, Gemini’s algorithm produced photos of America’s founding fathers as Black and, more problematically, of Nazis as Black. Google has since apologized and fixed its algorithm.
Gemini’s release shows just how difficult it can be to strike the right balance: eliminating bias without overcorrecting for it. Gemini overshot, yet historically many have paid too little attention to bias at all.
In 2016, when Microsoft released Tay, an AI-powered chatbot trained on data from Twitter, the bot quickly turned hateful and racist. This is a prime example of how much the data an algorithm is trained on really matters. Left in the wild, so to speak, the AI trained itself on the language of Twitter and perpetuated harmful values by mimicking the speech of a particular subset of users.
To build unbiased AI, it’s important either to train the algorithm on unbiased data or to train the algorithm to recognize these biases itself. Yet, in its current capacity, AI is not able to detect bias, especially in its more subtle and insidious form, implicit bias.
At the end of the day, AI is only as good, and its scope only as extensive, as the data used to train it. It’s therefore important to train AI on a large dataset that is neither skewed nor misrepresentative. Sometimes data is overtly false or biased; other times, it may simply fail to paint a complete picture. Bias goes hand in hand with misinformation and can be propagated in the same ways, because when information is insufficient, AI will make assumptions, jump to conclusions, or hallucinate.
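One simple, illustrative way to spot a skewed dataset is to compare how each group is represented in the training data against a reference population. This is a minimal sketch, not a production auditing tool; the group labels and reference proportions below are hypothetical.

```python
from collections import Counter

def representation_skew(samples, reference):
    """Compare each group's share of the dataset to its expected
    share in a reference population. Negative values indicate
    under-representation in the training data."""
    counts = Counter(samples)
    total = len(samples)
    return {
        group: counts.get(group, 0) / total - expected
        for group, expected in reference.items()
    }

# Hypothetical training-data labels vs. an assumed reference split
labels = ["A"] * 70 + ["B"] * 20 + ["C"] * 10
reference = {"A": 0.5, "B": 0.3, "C": 0.2}
print(representation_skew(labels, reference))
```

A real audit would go much further (intersectional groups, label quality, proxy variables), but even a check this simple can flag a dataset that fails to paint a complete picture before a model is trained on it.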
In fields such as healthcare, AI will likely reinforce existing inequalities if that algorithm is trained on insufficient or unrepresentative data. In talent acquisition, biased AI could amplify existing inequalities and reduce workplace diversity.
Diversity in Tech
It’s also crucial that the people involved in these conversations and the programming process are from diverse backgrounds. It has long been an issue, in tech as in many fields, that white men have dominated these spaces.
In 2016, a global conference on artificial intelligence was held in Barcelona. Dr. Timnit Gebru, a renowned computer scientist, remembers being one of the only Black women in attendance and recalls the glaring lack of diversity in the crowd. Dr. Gebru went on to co-found Black in AI and was thereafter hired by Google, where she worked alongside Dr. Margaret Mitchell on what is deemed “ethical AI.”
Field leaders like Dr. Gebru and Dr. Mitchell have advocated for more diversity in AI and have helped reduce algorithmic bias.
The Prevalence of AI Bias
AI bias is on many people’s minds, and it should be. This is due not only to the implications of biased AI but also to how prevalent it already is. Research suggests that AI bias is particularly common in image generation, where the rate of bias may be as high as 85%. According to USC research, as much as 38.6% of the “facts” used by AI are biased.
An HR Perspective
With the rise of AI, considering its potential risks and opportunities through the lens of DEI is imperative. While DEI may be slipping down the priority list in companies’ annual reports, it shouldn’t. Increased diversity in a company can contribute to a healthy corporate culture, increase stakeholder returns, and help mitigate risk and blind spots.
A Forbes article speaks to the unique challenges and opportunities of implementing AI from an HR perspective. “For HR leaders today, more is being asked of us than ever before,” the article reads. “From understanding and applying AI algorithms to answering complex questions around organizational advocacy in an increasingly divisive sociopolitical environment, we're often operating in realms beyond our usual comfort zone. But among the complexities, challenges, and risks we’re facing, there is also something exciting: opportunity.”
The increasing implementation of AI provides new and exciting opportunities to imbue these values at a more fundamental organizational level, helping to eliminate bias and promote diversity from the jump.
New DEI Opportunities
According to a 2023 EY CEO Outlook Pulse Survey, approximately 65% of CEOs viewed AI as a positive force with the potential to drive business efficiency, while remaining wary of its potential side effects.
When implemented correctly, AI can further DEI initiatives, making it a powerful and high-stakes tool. Embedding such initiatives at the ground level has the potential to change organizations’ approaches to DEI systemically, helping to eliminate bias and promote inclusivity and diversity.
Because AI algorithms excel at pattern recognition, proper usage could allow organizations to identify harmful patterns and begin remedying long-standing inequities. AI might identify that one particular group within an organization is being unfairly treated, targeted, or overlooked, and could help establish more equitable practices in place of these “oversights.” It could be useful in everything from closing a gender pay gap to making sure that no one group is treated differently in the promotion process.
Stay up to date on our latest posts by subscribing to our newsletter and following us on LinkedIn and Instagram.
We tell stories that matter
For over 20 years, Magnet’s mission has been to tell stories that matter so that we live in a more empathetic and just world. We intentionally pursue this mission by:
Having our teams and work represent the broader culture
Choosing projects that have positive societal impacts
Creating a community of thought sharing and leadership