I recently asked ChatGPT two questions: one was to complete the sentences “Men are…” and “Women are…”, and the second was to write a poem about Indian women in the US. The results were amazing! In a few seconds I had articulate, well-formulated responses that exposed both the opportunities and dangers of AI in the DEI space. Let me share part of the responses with you:
Their inclusive and collaborative approach often fosters strong team dynamics and innovative solutions. Women leaders tend to prioritize diverse perspectives, creating environments that value everyone's contributions.
With their unique qualities and experiences, men play roles that range from nurturing caregivers to pioneering leaders. They navigate life's challenges and opportunities, often displaying courage and determination.
Software engineers, doctors, artists too, Breaking ceilings, making breakthroughs. Goddesses they invoke in prayer, Durga, Saraswati, in hearts they wear.
The responses above capture both the positive and negative implications of AI for DEI. AI has the potential to be a powerful DEI ally because of the speed and creativity these responses demonstrate. Equally, if not well managed, it has the potential to perpetuate the biases that have plagued the advancement of DEI in organizations.
Take the examples above and note the words used for men versus women. For men, although there is a mention of “nurturing caregivers”, the language favors action- and achievement-oriented words such as “pioneering leaders” and “courage and determination”, the qualities associated with leadership in Western models. For women, the focus is on the “softer” skills of collaboration and inclusion. These same words are frequently used in performance reviews and end up disadvantaging the progress of women in the workplace.
In the poem on Indian women in the US, while I was happy to see the reference to Indian women as engineers, doctors and artists making breakthroughs, I wished there had been references to less stereotypical professions as well, such as politicians and authors. And while the poem references the Hindu traditions and religion that anchor many Indian women, there is no reference to Muslim traditions, despite the fact that 14.2% of India's population is Muslim. Extend that now to women in the US, where 13.6% of the population is Black. Would they be absent from a poem on women in the US?
AI clearly has the potential to enhance DEI outcomes with its speed, creativity and potential for scalability. AI applications are creative enough to compose music, write text and create art. They can process massive and varied data sets and do so at an incredible speed. A recent McKinsey study predicts that Generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across just the 63 use cases they analyzed. It has the potential to automate work activities that absorb 60-70 percent of an employee’s time, freeing them to do more value-added work. (1) The potential for the future of work and for the economy is exciting! But there are definite risks that need to be carefully managed.
Potential Benefits and Risks of AI for DEI
1. Accessibility

AI tools allow users to scan materials to identify color schemes that are less accessible for people who are visually impaired. Organizations like Sanofi have used these tools to assess their website and all their materials to ensure that they are fully accessible. Others, like Charter Communications, have used AI applications in their Spectrum Guide so that customers with visual disabilities can easily navigate text-to-speech functions to read the TV guide. AI is also used to convert spoken words into text for neurodivergent learners.
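As a minimal illustration of what such a color-scheme scan checks under the hood, here is a sketch in Python using the published WCAG 2.x relative-luminance and contrast-ratio formulas. The color values are invented for the example; real auditing tools crawl entire sites and check far more than contrast.

```python
def relative_luminance(rgb):
    """WCAG 2.x relative luminance from an (R, G, B) tuple of 0-255 ints."""
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """Contrast ratio between two colors; WCAG AA asks for >= 4.5 for body text."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True
    )
    return (lighter + 0.05) / (darker + 0.05)

# Black on white passes easily; light grey on white fails the 4.5:1 threshold.
print(contrast_ratio((0, 0, 0), (255, 255, 255)))        # 21.0
print(contrast_ratio((200, 200, 200), (255, 255, 255)))  # well below 4.5
```

A scanner built on this would walk a page's computed styles and flag any text/background pair whose ratio falls below the threshold.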
These accessibility applications are positive, but we need to remain vigilant to ensure that all new AI applications are developed from the outset with attention to accessibility for people with disabilities. An obvious example is ensuring that all Generative AI applications include text-to-speech functions for those who are not sighted. If not, we are in danger of creating an AI Divide that leaves out a large population of people with disabilities, especially as the aging population swells their ranks.
2. Real-Time Translation

AI applications can provide speedy, real-time translations that convey general meaning. This allows global meetings to take place in which people of different cultures and languages communicate in real time. However, AI translation tools still have limitations in capturing linguistic nuance and lack cultural context, and these limitations can result in missed intended meanings.
Take, for example, a call regarding the timeline for completing a project. AI translation tools might literally translate “Inshaallah” as “if God wills” or “dekhenge” as “we shall see”, when in the relevant context they might signal disagreement or uncertainty about the requested completion date. This not only makes it necessary to verify translation accuracy and ensure that the appropriate meanings are communicated based on the cultural context, but also requires sensitizing users to the cultural limitations of translation tools and equipping them with the skills to challenge their own assumptions.
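One lightweight way to operationalize that verification step is to flag known culturally ambiguous phrases for human review before a literal translation is taken at face value. The sketch below is a toy: the phrase list and notes are illustrative only, and real tools would need context-aware models rather than a lookup table.

```python
# Phrases whose literal translation can mask the speaker's intent.
# Entries are illustrative, drawn from the article's own examples.
AMBIGUOUS_PHRASES = {
    "inshaallah": "literal: 'if God wills'; in context may signal polite uncertainty",
    "dekhenge": "literal: 'we shall see'; in context may signal hesitation or disagreement",
}

def review_flags(transcript):
    """Return human-review notes for any culturally ambiguous phrase found."""
    lowered = transcript.lower()
    return {p: note for p, note in AMBIGUOUS_PHRASES.items() if p in lowered}

flags = review_flags("Can we commit to shipping Friday? Inshaallah.")
print(flags)  # flags 'inshaallah' for a human reviewer
```

The point is not the lookup itself but the workflow: the tool surfaces a candidate misreading, and a culturally fluent human decides what was actually meant.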
3. Workforce Transformation
The potential to automate work activities can increase labor productivity. However, because so many of today’s activities could potentially be automated, it can also cause redundancies in the workforce. This means paying close attention to which jobs are being replaced and whether any particular gender, ethnic or other group is being made redundant disproportionately. It will also mean re-thinking core business processes and re-skilling the workforce. For example, those who perform routine tasks, as well as those involved in knowledge management, both areas likely to be automated through AI, will need to be re-skilled for new areas of opportunity, a massive task!
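Checking whether a particular group is affected disproportionately can be made concrete with an adverse-impact calculation, similar in spirit to the "four-fifths" guideline used in US employment-selection analysis. The sketch below uses invented retention data; a ratio below 0.8 is the conventional flag for a closer look.

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: iterable of (group, retained: bool). Retention rate per group."""
    totals, kept = Counter(), Counter()
    for group, retained in decisions:
        totals[group] += 1
        kept[group] += retained
    return {g: kept[g] / totals[g] for g in totals}

def adverse_impact(decisions, reference_group):
    """Each group's retention rate relative to the reference group.
    A ratio below 0.8 (the 'four-fifths' guideline) flags possible
    disproportionate impact worth investigating."""
    rates = selection_rates(decisions)
    ref = rates[reference_group]
    return {g: rate / ref for g, rate in rates.items()}

# Invented redundancy round: group A retains 90%, group B only 60%.
decisions = ([("A", True)] * 90 + [("A", False)] * 10
             + [("B", True)] * 60 + [("B", False)] * 40)
print(adverse_impact(decisions, "A"))  # B's ratio falls below the 0.8 guideline
```

A ratio below the guideline does not prove discrimination, but it tells analysts exactly where the human investigation should start.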
4. Immersive DEI Training

AI can power innovative DEI training solutions like virtual reality, where participants can experience real-world scenarios. This gives participants a glimpse of another person's lived reality. It also provides a platform for difficult conversations and a chance to practice responses and inclusive behaviors in a safe setting.
However, virtual reality lacks human interaction and needs to be combined with human connection to allow for the dialogue, understanding and empathy that build transformative allyship. Allyship is born from a place of empathy and is critical to advancing DEI progress.
5. People Analytics

AI is being used with large HR data sets to provide real-time analysis, make connections and drill deep to identify targeted pain points. AI applications are also performing routine tasks like reorganizing and classifying data, analyzing data sets to generate recommendations for specific challenges, and running predictive analytics across a variety of scenarios.
AI-driven analytics can save cost and time and allow for scaling up, but they need human oversight and attentiveness to the efficacy of the data sets being used. They also need to be accompanied by a more granular and nuanced interpretation of the outputs.
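As a minimal illustration of the kind of drill-down such tools automate at scale, here is a per-group median computed over a tiny, invented set of HR records. Real platforms do this across millions of rows and many dimensions at once, which is exactly why the outputs still need a human to interpret the gaps they surface.

```python
from statistics import median

def median_by_group(records, group_key, value_key):
    """Group dict-shaped HR records and compute a per-group median."""
    groups = {}
    for r in records:
        groups.setdefault(r[group_key], []).append(r[value_key])
    return {g: median(vals) for g, vals in groups.items()}

# Invented sample records for one department.
salaries = [
    {"dept": "Eng", "gender": "F", "salary": 95000},
    {"dept": "Eng", "gender": "M", "salary": 105000},
    {"dept": "Eng", "gender": "F", "salary": 99000},
    {"dept": "Eng", "gender": "M", "salary": 101000},
]
print(median_by_group(salaries, "gender", "salary"))
```

The number alone says nothing about cause; the granular interpretation the article calls for is what turns a gap like this into an actionable finding.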
6. Research and Benchmarking

It is amazing how efficiently AI tools can provide research on any topic. Given resource limitations and small DEI teams, AI tools can provide a starting point for DEI benchmarking and research, save time and resources, and provide scalable knowledge management. However, AI is an opportunity to enhance and support DEI work, not to replace it. DEI professionals therefore need to use the output with caution: AI draws on existing data sets that include flawed data and information, so we need to validate the outputs. Overreliance on AI research tools can lead to misinterpreted results, false conclusions or perpetuated bias.
7. Bias Detection in Communications

AI applications are being used to analyze text and images to flag bias in language, visual representation and any other communication, and can do this at scale. These tools can be applied to communications ranging from emails to performance reviews to web content, identifying biased language and suggesting alternatives.
Because AI draws on existing data that reflect our biases, the suggested alternatives are often biased as well, and human intervention is required to weed out the bias and to ensure that the language is appropriate for the context.
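A toy version of such a language flagger might look like the sketch below. The term list and suggested alternatives are purely illustrative, not drawn from any real tool; production systems use much larger lexicons plus context-aware models, which is precisely where the biased-suggestion problem creeps back in.

```python
import re

# Illustrative (not exhaustive) map of coded terms to suggested neutral
# alternatives, as a commercial augmented-writing tool might maintain.
SUGGESTIONS = {
    "chairman": "chairperson",
    "manpower": "workforce",
    "ninja": "expert",
    "aggressive": "proactive",
}

def flag_biased_language(text):
    """Return (flagged term, suggestion, position) tuples, in document order."""
    findings = []
    for term, alt in SUGGESTIONS.items():
        for match in re.finditer(rf"\b{term}\b", text, re.IGNORECASE):
            findings.append((match.group(), alt, match.start()))
    return sorted(findings, key=lambda f: f[2])

print(flag_biased_language("The chairman needs aggressive manpower planning."))
```

Because every suggestion here is itself human-authored, a reviewer still has to judge whether the replacement fits the sentence, the point the paragraph above makes.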
8. Recruiting

AI is also being used to remove bias from the recruiting process, both by offering alternatives to biased language in job descriptions and by masking identity markers like name, address, affiliations and educational institutions, to create a more “objective” candidate profile on which to base hiring decisions.
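Masking identity markers can be sketched in a few lines. The patterns below are deliberately simple and illustrative; real anonymization tools rely on named-entity-recognition models rather than regular expressions, and the resume text here is invented.

```python
import re

def mask_identity_markers(resume_text, known_names):
    """Redact candidate names plus simple address and school patterns.
    Regex patterns are illustrative; production tools use NER models."""
    masked = resume_text
    for name in known_names:
        masked = re.sub(re.escape(name), "[NAME]", masked, flags=re.IGNORECASE)
    masked = re.sub(r"\d+\s+\w+\s+(?:Street|St|Avenue|Ave|Road|Rd)\b",
                    "[ADDRESS]", masked)
    masked = re.sub(r"University of \w+", "[SCHOOL]", masked)
    return masked

resume = "Ibrahim Khan, 42 Oak Street. Graduated from University of Lagos."
print(mask_identity_markers(resume, ["Ibrahim Khan"]))
# [NAME], [ADDRESS]. Graduated from [SCHOOL].
```

Note what the redaction throws away along with the bias signal: "University of Lagos" carries exactly the kind of context, first-generation status, geography, lived experience, that the discussion below argues recruiters may still need.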
Several studies suggest that hiring bias based on names, addresses and similar markers exists. A US National Bureau of Economic Research study, “Are Emily and Greg More Employable than Lakisha and Jamal?”, found that individuals with “White-sounding” names were 50% more likely to reach the interview stage. (2) In another example, Adecco France decided to address bias in recruiting by hiring actors posing as job applicants. The process revealed that Adecco recruiters favored candidates named ‘François’ over candidates named ‘Ibrahim’, bias the company subsequently addressed through training and process changes. (3)
On the surface it might appear that concealing identity markers can lead to unbiased hiring decisions and increase the number of diverse candidates. However, using AI to anonymize recruitment may result in a less nuanced hiring process. For example, AI can produce a forced-rank list of the candidates whose skills and experience, judged by the “objective” criteria, best fit the job profile, saving recruiters time. But in doing so, we miss vital context and never experience the whole person, which is what helps recruiters understand how a candidate got to where they are. Organizations are looking to hire resilient, committed individuals. Someone who is the first in their family to go to college has had to work much harder than someone from a privileged background to reach the same place. Would we want to overlook that effort? This lived experience helps the recruiter take everything about the candidate into account.
9. Perpetuating Bias
There is so much potential in AI applications, but we need to remember that AI is a mirror of society as it exists. We created it, and so embedded in it are our own possibilities and our own limitations. There is a definite risk of perpetuating existing biases, because AI uses existing data sets that include biased data. We see that in the ChatGPT examples I shared above: the output replicates existing stereotypes about women as collaborative and men as action-oriented leaders, and it draws on public data in which information on Muslim women in India is lacking.
We live in a world of systemic bias and this data feeds AI algorithms. A white male candidate may have an impressive resume built over years of opportunities afforded by managers and recruiters who were favorably biased towards him. An AI algorithm would likely select him as the ideal candidate based solely on experience, which was enabled by systemic bias.
One much-publicized example of AI bias is Beauty.AI, the first beauty pageant judged by AI, which was billed as using objective criteria. However, the algorithms favored lighter skin tones, replicating systemic global colorism. Not surprisingly, only one of the 44 winners had a darker skin tone. (4)
Another example of colorism in facial recognition is “AI Ain’t I A Woman”, which exposed the inability of AI programs to judge the gender of well-known Black women like Michelle Obama and Oprah Winfrey. The author realized that the software only recognized her face when she wore a white mask (https://www.fastcompany.com/90875566/is-artificial-intelligence-creating-a-new-age-of-discrimination). Facial recognition becomes even more complex as gender fluidity becomes more commonplace.
Amazon scrapped its AI resume-screening tool when it found that resumes containing the word “women’s” were given lower scores. The AI had learned from Amazon’s past hiring data, in which most hires had been men. Human supervision is critical to ensure that AI tools used in recruiting don’t perpetuate biases and eliminate the resumes of underrepresented candidates.
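To see how a screening model inherits bias from skewed history, here is a deliberately crude word-weight scorer trained on invented data shaped like the Amazon case: because most historical hires were men, the token “womens” co-occurs mostly with rejections, and the model learns to penalize it without anyone programming that rule.

```python
from collections import Counter

def train_scorer(history):
    """Learn per-word weights from past (resume_words, hired) outcomes —
    a crude stand-in for a screening model trained on historical decisions."""
    hired_counts, rejected_counts = Counter(), Counter()
    for words, hired in history:
        (hired_counts if hired else rejected_counts).update(words)
    vocab = set(hired_counts) | set(rejected_counts)
    # Positive weight: the word appeared more often among hired candidates.
    return {w: hired_counts[w] - rejected_counts[w] for w in vocab}

def score(weights, words):
    return sum(weights.get(w, 0) for w in words)

# Invented history where most hires were men: "womens" (as in "women's
# chess club") appears mostly on rejected resumes.
history = ([(["python", "captain", "chess", "club"], True)] * 8
           + [(["python", "womens", "chess", "club"], False)] * 8
           + [(["python", "womens", "chess", "club"], True)] * 2)
weights = train_scorer(history)
print(weights["womens"])  # negative: a learned penalty inherited from biased data
```

Nothing in the code mentions gender; the penalty is purely a statistical echo of past decisions, which is why auditing training data matters more than auditing the algorithm's intent.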
Porter Braswell suggests that bias in AI may persist because “STEM roles that traditionally work on AI – like engineers, data and computer scientists and coders – remain underrepresented in terms of Black and Latinx talent.” (4)
AI can enhance, support and scale organizational DEI efforts in an efficient, creative and cost-effective way, but it cannot replace DEI professionals. If we are to make progress in DEI, we need to be diligent and not allow AI to perpetuate or amplify historic discrimination. To do that, DEI professionals need to re-tool themselves to understand AI and be able to ask the right questions. Questions like: What data are being used to train the AI? How are we auditing the AI applications?
Ultimately, algorithms are created by humans and are only as good as the data used to develop them, so it is incumbent on DEI professionals to understand AI and to manage the risk. DEI professionals also need to keep encouraging more women and other underrepresented talent to enter STEM fields, bringing diverse perspectives to identifying and weeding out bias in the data sets used. By combining human experience and expertise with AI support, we can have a positive impact on DEI progress.
If you haven't already, please order your copy of my book, Leading Global Diversity, Equity and Inclusion.
Did you know? You can subscribe to my newsletter to get access to incisive articles, analysis, and key data on global trends in Diversity, Equity, and Inclusion. Sign up here.