2018 DEI Trends: Leveraging AI to Overcome Unconscious Bias

Last week I published the first of six posts exploring some of the 2018 trends for diversity, equity, & inclusion (DEI) professionals. This week’s post examines some of the ways we can leverage artificial intelligence (AI) to reduce unconscious biases in the workplace. Much of the content featured in this post was informed by a recent webinar I co-facilitated for the International Association for Human Resource Information Management (IHRIM). IHRIM requested this presentation given the increased use of AI in human resources systems.

As AI continues to evolve, more human resources professionals are beginning to use this technology for a variety of functions. These range from establishing hiring priorities and reviewing hiring trends to expediting resume screening, standardizing the hiring and onboarding process, and assessing high-potential talent. AI also helps HR professionals improve employee retention, standardize employee assessments, and synthesize performance review and exit interview data. Clearly, this technology has the potential to automate some of the more sophisticated decision-making in HR operations.

Despite AI’s enormous potential, it is not a panacea when it comes to eliminating unconscious bias. In its current form, AI is simply an extension of our existing culture, which is riddled with biases and stereotypes. This means that as we program AI, and as AI learns from us through our words, data sets, and programming, we run the risk of having machine learning perpetuate our culture’s biases. For example, Google’s translation software converts gender-neutral pronouns from several languages into male pronouns (he, him, his) when talking about medical doctors, and into female pronouns (she, her, hers) when talking about nurses, perpetuating gender-based stereotypes.


So how do we prevent AI from perpetuating these biases? First, we must get honest about where biases show up in our workplaces. For example, if our organization has struggled with biased hiring practices against women, we must first name this challenge and then commit to examining the language of our job postings, who these postings attract, and the patterns in our hiring decisions. AI can help us screen job posting language for gender-biased words and alert hiring managers to decision patterns that reveal hidden biases against women candidates. When we are aware of where our workplace biases exist, we can leverage AI to monitor for these biases and alert us when they occur.
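As a thought experiment, here is a minimal sketch, in Python, of the kind of decision-pattern check such a system might run. The record format, the sample data, and the use of the four-fifths rule of thumb are my own illustrative assumptions, not a description of any particular vendor’s tool.

```python
from collections import Counter

# Hypothetical hiring records exported from an applicant tracking system:
# each entry is (self-reported gender, whether the candidate was hired).
records = [
    ("woman", False), ("man", True), ("woman", False), ("man", True),
    ("woman", True), ("man", True), ("woman", False), ("man", False),
]

def selection_rates(records):
    """Share of candidates hired within each gender group."""
    applied, hired = Counter(), Counter()
    for gender, was_hired in records:
        applied[gender] += 1
        if was_hired:
            hired[gender] += 1
    return {group: hired[group] / applied[group] for group in applied}

def flag_disparities(rates, threshold=0.8):
    """Flag groups whose selection rate falls below 80% of the highest
    group's rate (the common four-fifths rule of thumb for adverse impact)."""
    top = max(rates.values())
    return [group for group, rate in rates.items() if rate < threshold * top]

rates = selection_rates(records)
print(rates)                    # {'woman': 0.25, 'man': 0.75}
print(flag_disparities(rates))  # ['woman'] -> alert the hiring team to review
```

A real system would, of course, work from far larger data sets and account for sample size before raising an alert, but the underlying idea is this simple: measure the pattern, compare it across groups, and surface the disparity to a human being.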

To help us monitor gender biases in the workplace, we will need to program AI in a particular manner. This includes training algorithms to recognize bias in our data sets. For example, we can program AI to identify old job postings that use gendered terminology such as “outspoken” or “aggressively pursuing opportunities,” which studies show disproportionately attract male candidates and dissuade women from applying. Similarly, words like “caring” and “flexible” do the opposite. Over time, as the amount of available de-biased data grows, AI will have enormous potential to help HR operations reduce decision-making bias; for now, DEI professionals must make the extra effort to help program AI to recognize these biases.
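To show what recognizing gendered terminology might look like in practice, here is a minimal sketch in Python. The word lists are illustrative only, seeded with the examples from this post; a production tool would draw on much larger, research-backed vocabularies of gender-coded language.

```python
import re

# Illustrative word lists only, seeded with the examples above; real tools
# use far larger vocabularies drawn from published research on gender-coded
# language in job advertisements.
MASCULINE_CODED = {"outspoken", "aggressive", "aggressively", "competitive", "dominant"}
FEMININE_CODED = {"caring", "flexible", "supportive", "collaborative", "nurturing"}

def audit_posting(text):
    """Return the gender-coded words found in a job posting."""
    words = set(re.findall(r"[a-z]+", text.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

posting = """We want an outspoken self-starter who thrives in a competitive
environment and aggressively pursues new opportunities."""

print(audit_posting(posting))
# {'masculine_coded': ['aggressively', 'competitive', 'outspoken'], 'feminine_coded': []}
```

Flagging a word is only the first step; the value comes when the alert prompts a hiring manager to rewrite the posting in more neutral language before it goes live.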

One of the most important trends in utilizing AI from a DEI perspective is to remember that accountability and transparency must be at the heart of the discussion. Since we are the ones who design AI, we must center AI standards of excellence on the values of fairness, legality, and transparency. Tech titans including Facebook, Google, Microsoft, and Amazon already embrace the value of transparency by forming the Partnership on AI. This partnership aims to study and formulate best practices for AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

Over the next year, expect DEI practitioners to continue this emerging dialogue around how best to leverage AI to overcome unconscious biases. The good news: many of these practitioners already work in close collaboration with AI programmers to infuse the interests, perspectives, and lived experiences of the stakeholders, both within and beyond the workplace, who are likely to be affected by AI decision-making. If you are interested in making sure that the AI programming in your HR management systems is free from biased data sets, please email RPC today and let’s schedule a time to talk.

Rhodes Perry

Rhodes Perry, MPA is an award-winning social entrepreneur, best-selling author, and keynote speaker. He helps leaders build belonging at work to achieve industry breakthroughs. His firm offers transformative leadership development, change management, and capacity-building solutions for senior executives focused on advancing their organizations’ diversity, equity, and inclusion (DEI) commitments. Nationally recognized as an LGBTQ+ thought leader, he has two decades of government and nonprofit experience, having worked at the White House, PFLAG National, and the City of New York. Media outlets like Forbes, The Wall Street Journal, and the Associated Press have featured his powerful work as a DEI influencer.

http://www.rhodesperry.com