When a recruiter first screens a resume, they spend roughly six seconds on it. So what are they looking for?
A job role can attract thousands of applicants. When recruiters initially screen applicants, they look for ‘shorthand’ clues that confirm a pre-existing judgement about what ‘predicts’ success, like a particular degree or score. Without realising it, this approach undermines their ability to hire the best candidates.
There is a better way.
FirstInterview is a true blind assessment. No demographic details are collected from candidates or used to influence their ranking. Only candidates' answers to relevant interview questions are analysed by our scientifically validated algorithm to assess their fit for the role.
People are so much more than their CV, yet favouring a name, gender or institution over the individual is common practice.
We cannot build inclusive industries if we don't take steps to remove unconscious bias from our hiring decisions. Being aware of our bias is one thing; removing it is another entirely.
It starts with a conversation. And a fair go.
Using FirstInterview means everyone gets the chance to do an interview and an opportunity to tell their story.
Algorithms and AI learn from the data we feed them.
If the data it learns from is taken from a CV, it’s only going to amplify our existing biases. Only clean data, like the answers to specific job-related questions, can give us a true bias-free outcome.
We continuously test the data that trains the machine so that if even the slightest bias is found, it can be corrected.
Here's the science >
"Is there adverse
impact in our
Proportional parity tests assess whether recommendations made by the algorithm align with the proportions of applicants across groups of interest, such as gender and ethnicity in the applicant pool. We use the 4/5th rule (accepted by the EEOC) and Chi-Square test to test for proportional parity.
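As a rough illustration (not our production code), the sketch below shows how such a proportional parity check could be run in Python on hypothetical recommendation counts, using a 4/5ths (adverse impact) ratio and scipy's Chi-Square test of independence.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts (illustrative only).
# Rows = groups of interest, columns = [recommended, not recommended].
counts = np.array([
    [120, 280],   # group A
    [100, 300],   # group B
])

# Selection rate per group and the 4/5ths (adverse impact) ratio.
selection_rates = counts[:, 0] / counts.sum(axis=1)
impact_ratio = selection_rates.min() / selection_rates.max()
passes_four_fifths = impact_ratio >= 0.8

# Chi-Square test of independence between group and recommendation.
chi2, p_value, dof, expected = chi2_contingency(counts)

print(f"selection rates: {selection_rates}")
print(f"4/5ths ratio: {impact_ratio:.2f} (pass: {passes_four_fifths})")
print(f"chi-square p-value: {p_value:.3f}")
```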
"Are the assessment score distributions similar across groups of interest?"
Score distribution tests assess whether average scores across two or more groups (e.g. gender, ethnicity) display significant differences. Given candidate recommendations are made using assessment scores, this is a stricter form of testing for score parity across groups than the proportional parity tests.
We use a two-sample t-test, effect sizes and one-way ANOVA test to compare average scores across groups. If significant differences are discovered in the model training stage, those models are not deployed for live use.
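Here is a minimal sketch of how these score-distribution checks could look in Python with scipy, assuming illustrative score samples for three groups; it computes a two-sample t-test, a Cohen's d effect size, and a one-way ANOVA side by side.

```python
import numpy as np
from scipy.stats import ttest_ind, f_oneway

rng = np.random.default_rng(0)

# Hypothetical assessment scores for three groups (illustrative only).
scores_a = rng.normal(70, 10, 400)
scores_b = rng.normal(70, 10, 350)
scores_c = rng.normal(70, 10, 300)

# Two-sample t-test between one pair of groups.
t_stat, t_p = ttest_ind(scores_a, scores_b, equal_var=False)

# Cohen's d as an effect-size measure for the same pair.
pooled_sd = np.sqrt((scores_a.var(ddof=1) + scores_b.var(ddof=1)) / 2)
cohens_d = (scores_a.mean() - scores_b.mean()) / pooled_sd

# One-way ANOVA across all groups at once.
f_stat, anova_p = f_oneway(scores_a, scores_b, scores_c)

print(f"t-test p={t_p:.3f}, Cohen's d={cohens_d:.2f}, ANOVA p={anova_p:.3f}")
```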
"Is the assessment making the same rate of errors across groups of interest?"
We apply stricter fairness checks, particularly at the model training stage, on the error rates across groups of interest. For example, a candidate who is falsely flagged as “NO” is at a greater disadvantage than a candidate who is falsely flagged as “YES”. The false omission rate and false negative rate therefore become the important error rates to keep equal across groups.
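The sketch below shows, on hypothetical labels and predictions, how false omission rate and false negative rate can be computed per group with NumPy; it illustrates the metrics described above and is not our production pipeline.

```python
import numpy as np

def error_rates(y_true, y_pred):
    """Return (false omission rate, false negative rate) for binary labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    fn = np.sum((y_true == 1) & (y_pred == 0))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    tp = np.sum((y_true == 1) & (y_pred == 1))
    # FOR = FN / (FN + TN): of all candidates flagged "NO",
    # how many actually deserved a "YES".
    false_omission_rate = fn / (fn + tn) if (fn + tn) else 0.0
    # FNR = FN / (FN + TP): of all genuinely suitable candidates,
    # how many were wrongly flagged "NO".
    false_negative_rate = fn / (fn + tp) if (fn + tp) else 0.0
    return false_omission_rate, false_negative_rate

# Hypothetical labels and predictions per group (illustrative only).
groups = {
    "group_a": ([1, 1, 0, 0, 1, 0, 1, 0], [1, 0, 0, 0, 1, 0, 1, 1]),
    "group_b": ([1, 0, 1, 0, 0, 1, 1, 0], [0, 0, 1, 0, 0, 1, 1, 0]),
}
for name, (y_true, y_pred) in groups.items():
    for_rate, fnr = error_rates(y_true, y_pred)
    print(f"{name}: FOR={for_rate:.2f}, FNR={fnr:.2f}")
```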
For these extended fairness tests, we follow the guidelines set by IBM's AI Fairness 360 Open Source Toolkit and the Aequitas project at the Center for Data Science and Public Policy at the University of Chicago.
It’s hard to detect gaming in multiple-choice personality assessments, and plagiarism can’t be detected at all.
AI is a super detector
Anomalies in answers are detected and flagged, including signs of gaming and plagiarism.
Here are our latest articles on reducing bias, removing discrimination and building an inclusive workforce through better hiring.
What every business needs to know about unconscious bias in hiring
Action and data
Let's find a time to chat and make a difference.
Get our insights newsletter to stay in the loop on how we are evolving PredictiveHire.