

PredictiveHire welcomes the EEOC Initiative on Artificial Intelligence and Algorithmic Fairness

By Team PredictiveHire


Recently, the U.S. Equal Employment Opportunity Commission (EEOC) announced it was launching an initiative to ensure that artificial intelligence (AI) and other emerging tools used in hiring and other employment decisions comply with the federal civil rights laws that the agency enforces.

This is a strong step in the right direction. As many of you are aware, PredictiveHire has long advocated for holding vendors in the market accountable for the responsible development and application of AI hiring tools.

Removing bias is not something any solution can simply provide, not even the many that claim they do. It is a complex problem, and we pioneered this capability by focusing on it for the last three years, working with progressive customers who are focused on the same goal. We have also released a framework to help hiring managers navigate the claims companies make about removing bias: the FAIR Framework (Fairness in Recruiting).

PredictiveHire reduces bias by fulfilling the following requirements:

  1. The PredictiveHire assessment is based on a structured interview, is fully blind, and uses direct, unprocessed text chat from the candidate. This is in contrast to video or audio assessments, where voice-to-text transcription error rates of up to 20% introduce further errors and biases into the input. Video and voice are also known to induce bias through seeing the candidate and hearing accents.
  2. The training data used to assess candidates is not based on any third-party data or historical customer hiring data, so it carries no risk of latent demographic signals that could amplify bias. This purity of data gives every candidate the fairest chance of being considered for the job.
  3. Our innovation in algorithmic bias mitigation, recognised at the global AI conference CogX earlier this year, means that fairness is now baked directly into model optimization at training time.
  4. Our rule-based candidate recommendation models are built on a combination of machine learning and optimization algorithms, striking a fine balance between fairness, validity and an expert-defined “ideal candidate profile”. This is in stark contrast to the mainstream approach in many candidate screening systems, which employ machine-learning-only models trained on past hiring and performance data, with bias testing as an afterthought. In our approach, being unbiased is a constraint that the model has to satisfy while finding the optimal model aligned to expert judgement and/or past hiring and performance outcomes. In other words, the algorithm is “fairness bound” in its exploration to find the most predictive model.
  5. We use two tests as constraints, going beyond the EEOC guidance for adverse-impact testing: the 4/5ths rule as well as effect size (a sketch of both tests follows this list).
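To make these two tests concrete, here is a minimal Python sketch of how the 4/5ths rule and an effect-size check (Cohen’s d) can be computed and treated as hard constraints during model selection. The group labels, counts, scores and thresholds are illustrative assumptions, not PredictiveHire’s data or implementation; 0.8 is the conventional 4/5ths cutoff, and 0.2 is a commonly used “small effect” cutoff for Cohen’s d.

```python
import numpy as np

def four_fifths_ratios(selected: dict, applied: dict) -> dict:
    """Adverse-impact ratio per group: each group's selection rate divided by the
    highest group's selection rate. Ratios below 0.8 fail the 4/5ths rule."""
    rates = {g: selected[g] / applied[g] for g in applied}
    top = max(rates.values())
    return {g: rate / top for g, rate in rates.items()}

def cohens_d(scores_a, scores_b) -> float:
    """Effect size between two groups' assessment scores, using the pooled standard deviation."""
    a, b = np.asarray(scores_a, dtype=float), np.asarray(scores_b, dtype=float)
    pooled_sd = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    return (a.mean() - b.mean()) / pooled_sd

def passes_fairness_constraints(ratios: dict, d: float,
                                min_ratio: float = 0.8, max_abs_d: float = 0.2) -> bool:
    """Treat both tests as constraints: a candidate model failing either is discarded,
    however predictive it is."""
    return min(ratios.values()) >= min_ratio and abs(d) <= max_abs_d

# Hypothetical numbers for illustration only.
applied  = {"group_a": 400, "group_b": 300}
selected = {"group_a": 120, "group_b": 80}
ratios = four_fifths_ratios(selected, applied)
d = cohens_d(np.random.normal(0.0, 1.0, 300), np.random.normal(0.1, 1.0, 300))
print(ratios, round(d, 3), passes_fairness_constraints(ratios, d))
```

Used this way, the tests act as a filter inside the model search rather than a report run after the fact, which is the distinction drawn in point 4 above.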

In contrast, video AI tools have been legally challenged on the basis that they fail to comply with baseline standards for AI decision-making, such as the OECD AI Principles and the Universal Guidelines for AI, or that they perpetuate societal biases and could end up penalising non-native speakers, visibly nervous interviewees or anyone else who doesn’t fit the model for look and speech. For example, current video AI is more effective at capturing facial expressions in white males than in other groups.

Transparency 

Transparency has been a principle and a value for us since we first started to build technology designed to fix the biased and broken hiring practices of today. 

Every customer can view their model card, which transparently shows the features that feed into their model and are tested for bias, the protected groups the model is tested on, the norm data going into the model, and the bias testing results.
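As an illustration only, the kind of information a model card carries could be laid out as a simple structured record; the field names and values below are hypothetical and do not reflect PredictiveHire’s actual schema.

```python
# Hypothetical model-card record for illustration; the schema and values are
# assumptions, not PredictiveHire's actual model card.
model_card = {
    "model": "retail_sales_associate_v3",
    "input_features_tested_for_bias": ["text_structure", "vocabulary", "answer_length"],
    "protected_groups_tested": ["gender", "age_band", "english_as_additional_language"],
    "norm_data": {"n_candidates": 12000, "source": "role-family norm group"},
    "bias_testing_results": {
        "min_four_fifths_ratio": 0.91,   # lowest group-vs-top ratio; 0.8 or above passes
        "max_abs_effect_size": 0.07,     # largest |Cohen's d| across group pairs
    },
}
```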

Additionally, the customer has access to insights on bias through the funnel (i.e. applied, recommended and hired) via the PredictiveHire DiscoverInsights dashboard. We are proud to be leading the market in transparency, including sharing our own standard for the ethical use of AI in the FAIR™ Framework, released publicly in 2020, which shows our adherence to the principles of being unbiased, transparent, explainable and valid.
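As a sketch of the kind of funnel insight such a dashboard could surface, the snippet below computes stage-to-stage conversion rates per group (applied to recommended, recommended to hired) and compares each group against the highest-converting group at that stage. The group names and counts are hypothetical, not DiscoverInsights data.

```python
# Hypothetical funnel counts per group; illustration only.
funnel = {
    "group_a": {"applied": 1000, "recommended": 320, "hired": 60},
    "group_b": {"applied":  800, "recommended": 250, "hired": 45},
}

def stage_rates(counts: dict) -> dict:
    """Stage-to-stage conversion rates for one group."""
    return {
        "recommend_rate": counts["recommended"] / counts["applied"],
        "hire_rate": counts["hired"] / counts["recommended"],
    }

rates = {group: stage_rates(c) for group, c in funnel.items()}
for stage in ("recommend_rate", "hire_rate"):
    top = max(r[stage] for r in rates.values())
    ratios = {group: r[stage] / top for group, r in rates.items()}
    print(stage, ratios)  # ratios well below 1.0 point to the stage worth investigating
```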

We support the principles articulated in the EEOC Guidelines and will continue to be transparent with the market and our Customers on our science, our bias mitigation regime and our results.

See our press release welcoming the EEOC announcement here.
