2018 DEI Trends: Leveraging AI to Overcome Unconscious Bias

Last week I published the first of six posts exploring some of the 2018 trends for diversity, equity, & inclusion (DEI) professionals. This week’s post examines some of the ways we can leverage artificial intelligence (AI) to reduce unconscious biases in the workplace. Much of the content featured in this post was informed by a recent webinar I co-facilitated for the International Association for Human Resource Information Management (IHRIM). IHRIM requested this presentation given the increased use of AI in human resources systems.

As AI continues to evolve, more human resources professionals are beginning to use this technology for a variety of functions. These functions range from establishing hiring priorities and reviewing hiring trends to expediting resume screening, standardizing the hiring and onboarding process, and assessing high-potential talent. AI also helps HR professionals improve employee retention, standardize employee assessments, and synthesize performance review and exit interview data. Clearly, this technology has the potential to automate some of the more sophisticated decision-making in HR operations.

Despite AI’s enormous potential, it is not a panacea when it comes to eliminating unconscious bias. In its current form, AI is simply an extension of our existing culture, which is riddled with biases and stereotypes. This means that as we program AI, and as AI learns from us through our words, data sets, and programming, we run the risk of having machine learning perpetuate our culture’s biases. For example, Google’s translation software converts gender-inclusive pronouns from several languages into male pronouns (he, him, his) when talking about medical doctors, and female pronouns (she, her, hers) when talking about nurses, perpetuating gender-based stereotypes.

So how do we prevent AI from perpetuating these biases? First, we must get honest about where biases show up in our workplaces. For example, if our organization has struggled with biased hiring practices against women, we must first name this challenge, then commit to examining the language of our job postings and who these postings attract, and recognize patterns in our hiring decisions. AI can help us flag gender-biased words in job posting language and alert hiring managers to hiring-decision patterns that reveal hidden biases against candidates who are women. When we are aware of where our workplace biases exist, we can leverage AI to monitor for these biases and alert us when they occur.

To help us monitor gender biases in the workplace, we will need to program AI in a particular manner. This includes training algorithms to recognize bias in our data sets. For example, we can program AI to identify old job postings that use gendered terminology such as “outspoken” or “aggressively pursuing opportunities,” which studies show disproportionately attract male candidates and dissuade women from applying. Similarly, words like “caring” and “flexible” do the opposite. Over time, as the amount of available de-biased data grows, AI will have enormous potential to help HR operations reduce decision-making bias. For now, though, DEI professionals must take the extra effort to help program AI to recognize these biases.
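At its simplest, this kind of screen can start as a keyword match against lists of masculine- and feminine-coded terms. The sketch below illustrates the idea only; the word lists and function name are hypothetical, and a production tool would draw on research-validated lexicons rather than a handful of hand-picked words.

```python
import re

# Illustrative word lists only -- not a validated lexicon.
# Real tools use research-backed sets of masculine- and feminine-coded language.
MASCULINE_CODED = {"outspoken", "aggressive", "aggressively", "competitive", "dominant"}
FEMININE_CODED = {"caring", "flexible", "supportive", "collaborative"}

def flag_gendered_language(posting_text):
    """Return the masculine- and feminine-coded words found in a job posting."""
    words = set(re.findall(r"[a-z]+", posting_text.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

posting = "We seek an outspoken, competitive self-starter aggressively pursuing opportunities."
print(flag_gendered_language(posting))
# {'masculine': ['aggressively', 'competitive', 'outspoken'], 'feminine': []}
```

A report like this can then be surfaced to hiring managers before a posting goes live, turning a hidden pattern into a visible, correctable one.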

One of the most important trends in utilizing AI from a DEI perspective is remembering that accountability and transparency must be at the heart of the discussion. Since we are the ones who design AI, we must center AI standards of excellence around the values of fairness, legality, and transparency. Tech titans including Facebook, Google, Microsoft, and Amazon have already embraced the value of transparency by forming the Partnership on AI. This partnership aims to study and formulate best practices on AI technologies, to advance the public’s understanding of AI, and to serve as an open platform for discussion and engagement about AI and its influences on people and society.

Over the next year, expect DEI practitioners to continue this emerging dialogue around how best to leverage AI to overcome unconscious biases. The good news: many of these practitioners already work in close collaboration with AI programmers to infuse the interests, perspectives, and lived experiences of stakeholders, both within and beyond the workplace, who are likely to be affected by AI decision-making. If you are interested in understanding what you can do to make sure the AI in your HR management systems is free from biased data sets, please email RPC today and let’s schedule a time to talk.

Rhodes Perry

Rhodes Perry is a nationally recognized expert on LGBTQ and social justice public policy matters, with two decades of leadership experience innovating strategy management, policy and program solutions for corporations, government agencies, and non-profit organizations. At his core, Rhodes is an entrepreneur, where he most recently established Rhodes Perry Consulting, LLC, a national diversity and inclusion consulting firm that uses an intersectional approach to collaborate with leaders on creating solutions in the practice areas of strategy management, issue advocacy, and stakeholder engagement. Previously, Rhodes founded the Office of LGBTQ Policy & Practice at the New York City Administration for Children’s Services, and prior to this assignment he served as the founding Director of Policy at PFLAG National where he led the policy strategy and advocacy efforts for the organization’s 350 chapters. He cut his teeth serving as a Program Examiner at the White House Office of Management & Budget, where he improved upon federal benefit programs designed to provide assistance to low-income communities. Rhodes earned a Bachelor of Arts in Economics and Gender Studies from the University of Notre Dame, and obtained a Master of Public Administration from New York University.