Should You Use AI to Assess Candidates?

Authors: Tomas Chamorro-Premuzic and Reece Akhtar

Few things seem creepier than algorithms mining our voices, résumés, or photos to determine whether we should be considered for a job, and yet this is now the reality for most job seekers. What’s more, it may not be as creepy as you think.

For starters, all organizations struggle with talent identification, which is why many complain that they are unable to find the right person for key positions and why most people end up in jobs that are far from inspiring. Consider that even in the biggest economy in the world, where talent management practices are far more science-driven and sophisticated than anywhere else, the labor market is quite inefficient.

Today in the United States, there are around six million job seekers for eight million job openings. Even within the global knowledge economy, made up of the most qualified and skilled workers (roughly the one billion people on LinkedIn), job satisfaction is the exception rather than the norm: An estimated 70% of these individuals are open to other, hopefully more meaningful or interesting, jobs or careers.

Elsewhere, recruitment and hiring practices are considerably more backward: Hiring managers overemphasize hard skills at the expense of the more important soft skills, or rely on intuitive yet biased methods, such as the unstructured job interview, to decide who gets the job. All the while, predictive assessments and data-driven tools are largely underutilized, and prejudice, bias, and discrimination remain widespread.

In short, if we want to make talent identification more effective—and more meritocratic—we must keep looking beyond existing methods, particularly now that technological innovations such as machine learning and generative artificial intelligence make it possible to predict, understand, and match people at scale.

One of the major problems with the way we currently interview job candidates is that the process is largely unstructured, leaving the questioning to the whims and fancies of the interviewer. It shouldn’t take much convincing to see how this is not only inefficient, but also leads to biased decision-making as interviewers express and seek to confirm their own preferences.

Video or digital interviews can remove these limitations almost entirely. By using generative AI to create a highly structured and standardized interview experience, organizations can present every candidate with the same set of questions and the same opportunity to express their talent, which ultimately improves the interview's predictive utility.

But while digital interviews provide a fairer interview experience for candidates and allow organizations to access more diverse talent, when it comes to reviewing these interviews, we run into the same problem: biased humans are left to make the hiring decisions. But what if AI and machine-learning algorithms were tasked with mining the data from these videos to identify reliable connections between what people do and say during interviews and their personality, ability, or job performance? In the case of digital interviews, AI algorithms can mine a candidate’s facial expressions and body language, along with what they say and how they say it.

Mining all this data can reveal a lot about the candidate’s talent and can indicate how they might perform on the job. Scientific research in this area has been steadily growing, revealing interesting and promising findings. For instance, researchers have trained algorithms that mine various characteristics of an individual’s voice (e.g., vocal pitch, loudness, and intensity); body movement (e.g., hand gestures, posture, etc.); or facial expressions (e.g., happiness, surprise, anger) to accurately predict their personality profile, which we know is one of the leading predictors of job performance.
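To make the idea concrete, here is a minimal sketch of how such an algorithm might work under the hood: behavioral features extracted from an interview video are combined by a learned model into a trait score. All feature names, values, and weights below are hypothetical illustrations, not taken from any published model or vendor system, and a real system would use a far richer model than this simple linear one.

```python
# Illustrative sketch only: feature names and weights are hypothetical,
# not drawn from any real assessment product.

def score_trait(features, weights, bias=0.0):
    """Compute a linear score for one personality trait from behavioral features."""
    return bias + sum(weights[name] * value for name, value in features.items())

# Hypothetical features extracted from an interview video,
# assumed to be normalized to a 0-1 range by an upstream pipeline.
candidate = {
    "vocal_pitch_variance": 0.62,
    "speech_rate": 0.71,
    "smile_frequency": 0.55,
    "gesture_rate": 0.48,
}

# Hypothetical learned weights for a single trait (e.g., extraversion).
extraversion_weights = {
    "vocal_pitch_variance": 0.2,
    "speech_rate": 0.3,
    "smile_frequency": 0.35,
    "gesture_rate": 0.15,
}

score = score_trait(candidate, extraversion_weights)
```

In practice the weights would be learned from labeled training data (e.g., validated personality questionnaires), which is also where bias can creep in if the training sample is unrepresentative.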

Going further, researchers have mined similar signals to predict behaviors and qualities that are critical for performance: communication skills, persuasiveness, stress tolerance, and leadership. In one striking demonstration, a team of researchers used these techniques to quantify the emotionality of CEOs speaking on conference calls and accurately predicted their firms’ future financial performance.

AI has the potential to significantly improve the way we identify talent because it can reduce the cost of making accurate predictions about a candidate’s potential while at the same time removing the bias and heuristics that so often cloud human judgment. The fact that AI algorithms can detect and measure latent or seemingly intangible human qualities may lead some to be skeptical of the findings discussed here, but it is worth noting that there are plenty of scientific studies that demonstrate that humans can accurately identify personality and intellect from just thin slices of verbal and nonverbal behavior. AI algorithms simply leverage the same cues. The difference between humans and AI is that the latter can scale and can be automated. What’s more, AI does not have an ego that needs to be managed.

Currently, many organizations that use digital interviews do not leverage these types of powerful AI analytics, as their recruiters are often unwilling to accept the algorithm’s recommendations and continue to rely on their own naive judgment. Sadly, this behavior harms both the candidate and the organization. The HR departments that recognize that science and data, not intuition or instinct, should be the basis for decisions will attract and retain the best talent.

Of course, we do not advocate that all hiring decisions be made by an AI system; there must always be human oversight. Rather, we believe that human decisions can be significantly improved when accurate and valid data inform and shape our judgments. For example, AI-enabled digital interviews can be used to analyze candidates’ soft skills, technical competence, facial expressions, vocal tones, and body language. These insights can then be integrated into a comprehensive candidate profile, offering hiring managers a detailed and objective evaluation. This approach not only enhances the predictive validity of the hiring process but also reduces bias, leading to more equitable and effective hiring decisions. It is also crucial to design these AI tools with user experience in mind, to ensure a fair and transparent process that respects and engages candidates.
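One way to keep a human in the loop while still benefiting from the algorithm is to blend AI-derived scores with a recruiter's structured ratings and flag large disagreements for review. The sketch below assumes both sources score the same dimensions on a 0-1 scale; the dimension names, weights, and the 0.3 disagreement threshold are all illustrative assumptions, not a real vendor schema.

```python
# Hedged sketch: blending AI-derived scores with a recruiter's structured
# ratings into one candidate profile. Field names, weights, and the review
# threshold are illustrative assumptions.

def build_profile(ai_scores, human_ratings, ai_weight=0.5):
    """Blend AI and human scores per dimension; flag big disagreements for review."""
    profile = {}
    for dim in ai_scores:
        blended = ai_weight * ai_scores[dim] + (1 - ai_weight) * human_ratings[dim]
        # Large AI/human gaps are routed to a human reviewer rather than
        # silently averaged away.
        needs_review = abs(ai_scores[dim] - human_ratings[dim]) > 0.3
        profile[dim] = {"score": round(blended, 2), "needs_review": needs_review}
    return profile

ai = {"communication": 0.8, "stress_tolerance": 0.4}
human = {"communication": 0.7, "stress_tolerance": 0.9}
profile = build_profile(ai, human)
```

The point of the `needs_review` flag is that the algorithm never silently overrides the human (or vice versa): conflicts become a prompt for closer human scrutiny.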

Of course, it’s essential to consider the legal and ethical implications of using these innovative tools, just as we do with traditional assessment methods. AI systems can learn harmful biases of their own, depending on the data they are trained on, among other factors. New regulations from New York, California, and the European Union are pressuring companies to pay attention to how these systems are trained and to audit them regularly for potential bias. And there is now a clear gap between what we can know about people and what we should know about them, with the technical possibilities outrunning both legal and ethical boundaries.

Yet it is still possible to deploy innovations like the ones we describe here while operating within the constraints of good codes of conduct. Candidates can be fully briefed and debriefed about the technologies being used to evaluate them, and should be invited to actively opt in. Organizations should safeguard all sensitive data, and the entire process should be transparent. In fact, it is even possible (and advisable) for candidates to own their data and results, which they may voluntarily decide to share with selected recruiters and employers—or not. While this scenario may seem more utopian than the emerging technologies we described, we urge recruiters and employers to consider it. After all, there is no tension between understanding job candidates well and helping them understand themselves better. Organizations—and individuals—will benefit enormously when new technologies boost their ability to place the right person in the right job.

Three Strategies to Match the Right Tools to the Right Processes

Making great hiring decisions is the result of matching the right tools with the right processes. Practice these three strategies to increase your chances of finding and recruiting the very best talent:

  1. Adopt structured and standardized practices: To ensure a fair and consistent hiring process, organizations should adopt structured and standardized evaluation methods. By using a set of predetermined questions and criteria for all candidates, and implementing hiring scorecards, recruiters and hiring managers can objectively assess responses, minimizing the influence of personal biases. Structured interviews not only enhance the reliability of the hiring process but also provide a clear framework for comparing candidates on an equal footing.

  2. Leverage predictive assessments and analytics: Incorporating scientifically validated assessments and AI-driven tools can significantly enhance the accuracy of hiring decisions. Predictive assessments, such as cognitive ability tests and personality assessments, offer valuable insights into a candidate’s potential beyond their résumé. AI and data analytics can further refine this process by analyzing video interviews for non-verbal cues, providing objective insights into a candidate’s soft skills. These technologies help in identifying patterns and predicting success, thereby making the hiring process more data-driven and less reliant on subjective judgment.

  3. Focus on continuous training to mitigate bias: Continuous training is crucial to keep hiring managers updated on the latest recruitment technologies and best practices. Regular workshops, webinars, and industry conferences can provide the necessary skills to use new tools effectively. Moreover, it’s essential to provide bias training and conduct regular audits of hiring practices. This ensures that both human and AI decision-makers are free from prejudices, promoting a more inclusive and equitable hiring process. Ensuring that AI tools are trained on diverse datasets and regularly audited for potential biases is also critical.
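The first strategy above, a structured scorecard, can be sketched in a few lines: every candidate answers the same fixed questions, each interviewer rates each answer on a shared 1-5 rubric, and candidates are compared by their averaged scores. The question text, candidate names, and ratings below are illustrative only.

```python
# Minimal structured-interview scorecard sketch. Assumes every candidate
# gets the same questions and each answer is rated 1-5 by each interviewer.
# All question text and ratings are illustrative.

QUESTIONS = [
    "Describe a time you resolved a conflict on your team.",
    "Walk us through how you prioritize competing deadlines.",
]

def scorecard_average(ratings):
    """Average each candidate's per-question ratings across all interviewers."""
    return {
        candidate: round(
            sum(sum(scores) for scores in per_question)
            / sum(len(scores) for scores in per_question),
            2,
        )
        for candidate, per_question in ratings.items()
    }

# ratings[candidate][question_index] = list of interviewer scores (1-5)
ratings = {
    "candidate_a": [[4, 5], [3, 4]],
    "candidate_b": [[5, 4], [4, 4]],
}
averages = scorecard_average(ratings)
```

Because every score traces back to the same question and rubric, candidates are compared on an equal footing, and an auditor can later check the ratings for systematic bias.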


Tomas Chamorro-Premuzic is the chief innovation officer at ManpowerGroup, a professor of business psychology at University College London and at Columbia University, cofounder of deepersignals.com, and an associate at Harvard’s Entrepreneurial Finance Lab. He is the author of Why Do So Many Incompetent Men Become Leaders? (and How to Fix It), upon which his TEDx Talk was based. His latest book is I, Human: AI, Automation, and the Quest to Reclaim What Makes Us Unique. Find him at www.drtomas.com.

Reece Akhtar is CEO and cofounder of Deeper Signals. He is an organizational psychologist, data scientist, and visiting lecturer at NYU. He is the author of The Future of Recruitment: Using the New Science of Talent Analytics to Get Your Hiring Right.
