I work with a team building a product driven by AI that is used to inform decisions about people. This means I am often approached, on social media or in person, by people who have a point of view about that, often with fear or frustration about being picked (or rejected) by a machine.
This week I received an email from a commerce/law graduate who had recently applied for a role at one of the big ‘accounting’ professional services firms. This student, let’s call him Dan, had to complete an online game in order to qualify for the next step, which was a video interview.
To give himself the maximum chance of doing well in the game, Dan created a dummy profile, ‘Jason’, to see what the experience was like and get an inside read on the questions, so that when he did it for real he would really nail it. On this first, trial run he fudged the test and left most answers blank. When Dan did it for real, he was conscientious, of course: he wrote thoughtful answers and tried to pick the right behaviour in the balloon-popping game!
Jason, who scored 44%, received a video interview. Jason does not exist.
Dan, who scored 75%, did not progress to the next round.
The machine picked the wrong guy
Every business like ours that works in this space recognises that this is a new technology, still very much in the early stages of development. Like humans, machines make mistakes. In our business, we call them false positives (people recommended who just aren’t right) and false negatives (people missed by the machine who could be right for the role).
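The two error types can be made concrete with a small sketch. The candidate tuples below are illustrative assumptions that mirror Dan’s story, not output from any real screening system:

```python
def screening_errors(candidates):
    """candidates: list of (name, recommended_by_machine, actually_suitable)."""
    # Recommended by the machine but not actually right for the role
    false_positives = [n for n, rec, ok in candidates if rec and not ok]
    # Missed by the machine but could be right for the role
    false_negatives = [n for n, rec, ok in candidates if not rec and ok]
    return false_positives, false_negatives

# Dan's story as data (the suitability labels are assumptions):
candidates = [
    ("Jason", True, False),  # dummy profile advanced: a false positive
    ("Dan", False, True),    # conscientious applicant rejected: a false negative
]
fp, fn = screening_errors(candidates)
print(fp, fn)  # ['Jason'] ['Dan']
```

In Dan’s case, the same screening run produced both kinds of error at once, which is what makes the story so uncomfortable.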
Dan’s questions are legitimate…
When you are rejected by humans, you either hear nothing or you get an explanation like ‘you aren’t a good culture fit’. Machines may give you a score.
For me, what this reveals is that it’s critical for any business using AI and ML for candidate selection to have empathy for the person experiencing it: in this case, empathy for the candidate.
Machines can make better selection decisions about people because they have access to a larger, more comprehensive set of data, can process that data faster, and, if built with the right objective data, can be far less biased than humans.
When used in recruitment, they need to work for both parties — the organisation and the candidate. Building trust in these technologies is critical in our space. It can’t all be about the organisation getting their efficiency gains.
This means:
Recruitment should rise above being a mere process, and AI in recruitment should enable that if it’s to be trusted by candidates.