If you’ve been through a recruitment process in the last few years, you’ve probably encountered an AI-powered recruitment system. The good news is that these systems help recruiters efficiently process vast numbers of resumes and can reduce the impact of human bias in hiring. The bad news is that the systems themselves can be biased—because they perpetuate historical biases, or because they are built on flawed science.
Last week, HireVue, one of the major vendors of algorithmic assessment tools, killed off its most controversial feature: facial expression analysis. Until now, job seekers screened by HireVue had their video interviews analyzed by an AI that assigned traits and made predictions about likely performance, such as “dependability,” “emotional intelligence” and “cognitive ability.” Companies such as GE, Unilever and Hilton use its technology.
Lisa Feldman Barrett, a professor at Northeastern University, studies emotional analysis systems. According to Barrett:
“It is a bad idea to make psychological inferences, and therefore determine people’s outcomes, based on facial data alone.”
HireVue may have dropped facial expression analysis, but it still uses automated speech analysis, a science that is itself an active area of research. While correlations have been observed between speech features and personality traits such as sociability and aggression, the leap from those features to predicting job performance remains embedded in proprietary software and private systems, and is therefore difficult to audit. The need to vet audio-based screening systems is just as urgent.
So what next?
Watchdogs and regulators are actively scrutinizing this technology. An Illinois law requires candidates’ consent before video can be used. Maryland has banned facial analysis, and New York City is currently considering regulation of hiring software that would require annual audits.
These systems are powerful and have a big impact on one of the most important processes that people go through: getting a job. While regulation may be important, what will really matter is the frontline, real-world implementation of these systems. Reducing the risk of historical bias and automation bias and achieving the promise of this software requires users who understand how to manage their new “machine recruiter.”
We have published a free toolkit to help people adopt these systems fairly, covering data selection, vendor screening, and ongoing management and audit, here.