Hiring somebody is rarely a single decision; it is the sum of many smaller decisions made along the way to a final one. As recruitment has matured as an industry, so has our understanding of the flaws in human decision making, including the bias and subjectivity we bring when screening and interviewing candidates. These are essentially human traits that even the most well-intentioned of us cannot escape.
This does not mean we have to eliminate humans from hiring decisions to make them fairer – that would be problematic too – but rather that we should use technology at strategic moments in hiring to improve our decision making. Our tendency to be biased is often related to the pressure we are under to make faster decisions. Again, this is human. When looking at thousands of CVs, for example, our brains create shortcuts to process information that, quite frankly, we are unable to absorb. So we start scanning unconsciously, guided by our own biases: picking out schools that appeal to us, experiences that sound similar, names that feel familiar and people who ‘seem’ like others we know.
Predictive tools that parse and score CVs to help hiring managers assess candidates are unfortunately no help here, because they learn from us, favouring the same characteristics in CV data that we do. Using CV data ultimately replicates institutional and historical biases, amplifying disadvantages lurking in data points such as which university someone attended, their gender, their age or even the recreational clubs they belong to. A well-publicised example was Amazon’s attempt to build a recruiting engine by observing patterns in résumés submitted to the company over a 10-year period. Most came from men, a reflection of male dominance across the tech industry. The result: the input data taught the machine learning model to penalise women.
The better approach is to use objective data and bias-mitigating technology at the right moments in a recruiting process. It is a way of letting algorithms do the hard work of delving into the details humans miss when making decisions under time pressure with biased mental shortcuts. This way we can achieve better accuracy than humans deciding alone, particularly in early, top-of-funnel screening, and with far greater efficiency given the speed of algorithms. We still need to test these hiring algorithms constantly for bias, but by applying them at the right moments we can help hiring managers make better – more human – decisions.
“When making decisions, think of options as if they were candidates. Break them up into dimensions and evaluate each dimension separately. Then – Delay forming an intuition too quickly. Instead, focus on the separate points, and when you have the full profile, then you can develop an intuition.”
Daniel Kahneman, Psychologist & Nobel Laureate
How do we help humans make better hiring decisions at Predictive Hire?
We use objective data
We do not assess someone’s suitability for a job using CV data, but rather from the answers they give to five open-ended questions via a text chat that is ‘blind’, i.e. no identifying information is shown to the hiring manager. In this model, everyone gets an interview. Using advanced Natural Language Processing (NLP), we can determine a lot about someone from their text answers. Where a standard Myers-Briggs assessment identifies 16 personality types, based on essentially answering repeated questions, this new way of looking at language can account for more than 400 personality types, and counting. No human brain could distinguish these differences between people. This means we can truly identify job fit for all the candidates we screen – without bias – based on the skills hiring managers have identified as necessary in their ideal candidates. These skills and abilities cannot be uncovered in any other way.
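To make the idea of learning from text concrete, here is a minimal, purely illustrative Python sketch of turning a free-text answer into numeric features. Predictive Hire’s actual NLP models are not public; the word lists and feature names below are invented for the example.

```python
# Hypothetical sketch: convert a free-text interview answer into a small
# feature vector. Real language-based assessment models are far richer.
import re

def text_features(answer: str) -> dict:
    """Extract simple lexical features from a candidate's answer."""
    words = re.findall(r"[a-zA-Z']+", answer.lower())
    n = len(words) or 1
    first_person = {"i", "me", "my", "mine", "we", "our"}   # illustrative list
    positive = {"enjoy", "love", "great", "happy", "proud"}  # illustrative list
    return {
        "word_count": len(words),
        "avg_word_length": sum(len(w) for w in words) / n,
        "first_person_ratio": sum(w in first_person for w in words) / n,
        "positive_ratio": sum(w in positive for w in words) / n,
    }

feats = text_features(
    "I love solving problems with my team and I am proud of our results."
)
```

The point is only that language, unlike a CV, yields many independent signals per candidate; a real model would use far richer linguistic features than these toy counts.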
👉 See our product in action here.
We constantly test for bias
Being aware that bias can exist in any data is not enough; you need to test your algorithms constantly for emerging patterns that mimic human bias. Using a number of tests, we continually examine our results to make sure we are not amplifying bias in any way. Our results have shown that it is possible to mitigate bias using algorithms for better hiring outcomes. A recent piece of research on the hiring of Aboriginal and Torres Strait Islander peoples, the Indigenous peoples of Australia, showed that we can elevate marginalised groups. Other research we have done has also shown that we create a fair outcome for people who have English as a second language. 👉 See our approach to AI here.
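One widely used bias check of this kind is the “four-fifths rule” from US employment guidelines: flag any group whose selection rate falls below 80% of the highest group’s rate. The sketch below, with invented group names and counts, shows how such a check might be run; it is not Predictive Hire’s actual test suite.

```python
# Illustrative adverse-impact check (the EEOC "four-fifths rule").
# Group labels and counts are made up for the example.
def adverse_impact_ratio(selected: dict, applied: dict) -> dict:
    """Each group's selection rate divided by the best-performing group's."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: r / top for g, r in rates.items()}

ratios = adverse_impact_ratio(
    selected={"group_a": 40, "group_b": 27},
    applied={"group_a": 100, "group_b": 90},
)
# A ratio below 0.8 for any group is a conventional red flag.
flagged = [g for g, r in ratios.items() if r < 0.8]
```

Running a test like this after every model update is one simple way to catch emerging bias before it reaches candidates.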
We help you calibrate team hiring decisions
Final hiring decisions ultimately do fall back on humans, but this is also where technology can guide and calibrate the scores hiring managers give when interviewing candidates. Decisions backed by data minimise the risk of bias, making hiring conversations more robust and less subjective. Using live, standardised scoring, the impression a candidate makes on a hiring manager is ranked against other assessors as the interview is being conducted. It is not about replacing human decision makers, but elevating their ability to make smarter, more transparent decisions – decisions we cannot make without the help of technology. 👉 See how we can help humans interview.
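One standard way to calibrate scores across assessors is to standardise each assessor’s ratings against their own history, so a habitually harsh marker and a habitually generous one become comparable. The sketch below uses invented data and a simple z-score; it illustrates the idea, not the product’s actual scoring method.

```python
# Hypothetical calibration sketch: z-score each assessor's raw ratings
# so different marking habits become comparable. Data is invented.
from statistics import mean, pstdev

def calibrate(scores_by_assessor: dict) -> dict:
    """Standardise each assessor's scores against their own mean and spread."""
    out = {}
    for assessor, scores in scores_by_assessor.items():
        mu, sigma = mean(scores), pstdev(scores) or 1.0  # guard against zero spread
        out[assessor] = [(s - mu) / sigma for s in scores]
    return out

calibrated = calibrate({
    "easy_marker": [8, 9, 10],  # habitually high raw scores
    "hard_marker": [3, 4, 5],   # habitually low raw scores
})
```

After calibration, both assessors’ middle candidates score 0 and their best candidates score the same, even though the raw numbers differed by five points.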
Continuous learning via feedback
Human decision making does not scale. The more people you add to scale decisions, the more inconsistencies and biases you add to the process. Moreover, humans are limited in their capacity to learn from objective feedback data, such as which profiles of people work well in a given environment. This is where data-driven approaches like machine learning are far superior. Machine learning models can learn continuously, from large amounts of feedback data, which candidate profiles are more likely to succeed than others. This ability to retain knowledge, and to explain how a decision was reached, helps organisations truly learn from their bad hires and keep nudging hiring outcomes towards growth. Working together, recruiters and hiring managers can benefit from the learnings of AI, letting it challenge their views and help them make the right hiring decisions.
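The feedback loop described above can be sketched as a toy online learner: each hire outcome nudges the weights a model places on candidate features. Everything here – the features, data and learning rate – is invented for illustration, and real systems are far more involved.

```python
# Toy online logistic-regression update from hire-outcome feedback.
# outcome: 1 = hire worked out, 0 = hire did not. All data is invented.
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def update(weights, features, outcome, lr=0.1):
    """One gradient step on a single piece of feedback."""
    pred = sigmoid(sum(w * x for w, x in zip(weights, features)))
    return [w + lr * (outcome - pred) * x for w, x in zip(weights, features)]

weights = [0.0, 0.0]
# Feedback stream: (candidate feature vector, did the hire succeed?)
feedback = [([1.0, 0.2], 1), ([0.1, 1.0], 0), ([0.9, 0.3], 1)] * 200
for x, y in feedback:
    weights = update(weights, x, y)
```

Because the first feature co-occurs with successful hires and the second with unsuccessful ones, the learned weights drift positive and negative respectively; unlike a human reviewer, the model absorbs every piece of feedback consistently.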
An interaction that is familiar
Text chat is how we truly communicate asynchronously, i.e. on our own time – we all do it every day with friends and family. It needs no acting; it is blind to how you look and sound. We all know how to chat. Candidates feel comfortable using chat because it is a familiar setting, unlike a neuroscience game, a one-way video recording or a psychometric test, which are unfamiliar or artificial experiences many don’t enjoy because they are made to behave in ways they usually don’t. This high engagement, which we capture via post-interview feedback, is also a driving factor in capturing authentic data, as candidates reflect and express themselves in their own way.