It should not be surprising, then, that language is also the basis of most traditional forms of personality testing.
The lexical hypothesis is a thesis that originated in early personality psychology and was subsequently absorbed into many later efforts in that subfield. Despite some variation in its definition and application, the hypothesis is generally defined by two postulates.
The lexical hypothesis is a major foundation of the Big Five personality traits, the HEXACO model of personality structure, and the 16PF Questionnaire, and it has been used to study the structure of personality traits across many cultural and linguistic settings.
Noam Chomsky summed up the power of language nicely:
“Language is a mirror of mind in a deep and significant sense. It is a product of human intelligence … By studying the properties of natural languages, their structure, organization, and use, we may hope to learn something about human nature; something significant …”
While chatbots can be programmed to answer basic questions in real time, so that your people don’t have to, those answers are canned responses delivered through text. Chatbots lack the smarts to truly discover what your text responses say about you. The engagement between the chatbot and the individual is purely transactional.
Conversational AI is more about a relationship built through understanding, using natural language to make human-to-machine conversations more like human-to-human ones. Conversational AI offers a more sophisticated and more personalized solution to engage candidates through multiple forms of communication. Conversational AI gets smarter through use and connects people in a more meaningful way.
Put simply, conversational AI is intelligent, hyper-personalised AI, and in the case of ‘Phai’ (PredictiveHire Ai), it is underpinned by provable and explainable science. We have already published the peer-reviewed scientific research that underpins our personality science.
The scientific paper may not make it to your reading table (you can download it here: “Predicting job-hopping likelihood using answers to open-ended interview questions”), but the business implications cannot be ignored.
According to one report, voluntary turnover is estimated to cost U.S. companies more than $600 billion a year, with one in four employees projected to quit and take a different job. If your turnover is even a few basis points above your industry average, then using conversational AI as your screening tool will save your business money.
Our research used the free-text responses of 45,899 candidates who had used PredictiveHire’s conversational AI. Candidates had originally been asked five to seven open-ended questions on past experience and situational judgment. They also responded to self-rating questions based on the job-hopping motive scale, a validated set of rating questions that measures one’s job-hopping motive.
We found a statistically significant positive correlation between the job-hopping likelihood score inferred from the answer text and the self-rated job-hopping motive scale measure. The language-inferred score also correlated with other attributes, such as the personality trait “openness to experience”.
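For readers curious about what a correlation check of this kind looks like in practice, here is a minimal, illustrative sketch. It is not the modelling pipeline from the paper: it assumes you already have a hypothetical language-inferred job-hopping score and a self-rated motive scale score for each candidate, and simply computes their Pearson correlation and significance.

```python
# Illustrative sketch only: the per-candidate scores below are simulated,
# and this is not the method used in the published study.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

# Hypothetical data, one value per candidate:
# self_rated      - score on the self-rated job-hopping motive scale (e.g. 1-5)
# inferred_score  - job-hopping likelihood inferred from free-text answers
self_rated = rng.normal(loc=3.0, scale=1.0, size=1000)
inferred_score = 0.3 * self_rated + rng.normal(scale=1.0, size=1000)

# Pearson correlation between the two measures, with its p-value
r, p_value = pearsonr(inferred_score, self_rated)
print(f"Pearson r = {r:.3f}, p = {p_value:.2e}")
```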
AI is the bridge between HR and the business. It is this kind of quantifiable business ROI that distinguishes AI models from traditional testing.
To keep up to date on all things “Hiring with Ai”, subscribe to our blog! 😀
You can try out PredictiveHire’s FirstInterview right now, or leave us your details here to get a personalised demo.
Get our insights newsletter to stay in the loop on how we are evolving PredictiveHire