An Introduction to AI and its use in Talent Measurement

Video assessment and competency-based AI scoring

Companies are embracing video interviewing software paired with pre-validated AI scoring for competency-based assessment. This combined approach makes it possible to evaluate candidates’ soft skills in an unstructured question-and-answer format while providing supporting data for decision making.

The use of video interviewing combined with AI evaluation naturally raises concerns about bias arising from the visual elements of video. What makes videoBIO’s approach to machine training and AI different is the elimination of all visual data from both the training dataset and the data collected. videoBIO generates its AI assessment from an audio-only feed extracted from the video, producing a text transcript of the interview response. No facial or expression analysis is used, ensuring that only the language in the response transcript is assessed by the machine.

To further reduce the risk of bias, no personally identifiable information, including name, gender, email, location, race or other demographic data, is collected or assessed.

This is achieved through the method used to assess competencies. Competency scoring focuses exclusively on data from situational questions related to the competencies you are seeking in an ideal candidate, such as communication, performance, attitude, open-mindedness, diverse thinking, emotional intelligence, decisiveness, and time and self-management. Together, these define working style, personality and other soft skills. videoBIO offers selectable competency assessments that are pre-trained on specific competencies.

The data is collected from responses to the interview questions, along with gamified multiple-choice questions that candidates answer during the assessment.

By combining video interviews with predictive artificial intelligence (AI), you can capture insight into a candidate’s working style, fit for the position and how they interact with others, while benefiting from the efficiency of supporting analytics in your talent decision-making process.

The score generated by the algorithm can be reviewed by recruiters or used in an automated decision-making flow, allowing you to quickly identify and prioritize the highest-quality, best-fit candidates.

Assessing AI, its validity and performance

videoBIO pre-validates its assessment test results allowing companies to select and use our competency assessments with confidence. We offer a range of competency tests including Diversity, Trust and Self Standards, Work Style, Performance, Customer Service, Personality and others.

We are committed to the validity of the results generated from our algorithm. We validate our machine prediction results in the following ways:

1. Each candidate completes both structured personality index (PI) selector tests and unstructured machine learning (ML) tests for each competency being measured. The candidate receives a score for each competency from both the structured and unstructured tests.
2. The scores from each test are then cross-referenced to measure the distance (difference) between them and thereby calculate their correlation (relatedness).
3. The validity of machine-based scores is measured by the correlation between AI scores and traditional test scores.
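The cross-referencing step above can be sketched as a Pearson correlation between the two sets of scores. This is an illustrative sketch only; the candidate scores and function name below are hypothetical, not videoBIO data:

```python
def pearson_correlation(xs, ys):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

# Hypothetical competency scores for five candidates:
structured_pi = [72, 85, 60, 90, 78]    # structured personality index (PI) test
unstructured_ml = [70, 88, 58, 93, 75]  # unstructured machine-scored (ML) test

r = pearson_correlation(structured_pi, unstructured_ml)
print(f"correlation: {r:.3f}")  # a value near 1.0 indicates strong agreement
```

A correlation close to 1.0 means the machine-generated scores track the structured test scores closely, which is the evidence of validity described above.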

Metrics are used to show the performance of the machine algorithm and training.

The performance of the machine learning model is analyzed using three metrics: the F1 Score, Precision and Recall, which measure the accuracy of the machine’s predictions on a scale from 0 (worst) to 1.0 (best).
The F1 Score is the harmonic mean of Precision and Recall. Importantly, the harmonic mean penalizes large imbalances between Precision and Recall (unlike the arithmetic mean, which can mask a weak score behind a strong one) and is therefore the more useful measurement.
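As a concrete sketch, Precision, Recall and the F1 Score can be computed from prediction counts as follows. The counts here are made up for illustration and are not videoBIO results:

```python
def f1_score(true_positives, false_positives, false_negatives):
    """Return (F1, precision, recall); F1 is the harmonic mean of the other two."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    f1 = 2 * precision * recall / (precision + recall)
    return f1, precision, recall

# Hypothetical counts from a competency-label prediction run:
f1, precision, recall = f1_score(true_positives=60, false_positives=10, false_negatives=30)
print(f"precision={precision:.2f} recall={recall:.2f} F1={f1:.2f}")
# prints: precision=0.86 recall=0.67 F1=0.75
```

Note how the F1 of 0.75 sits below the arithmetic mean of the two values (about 0.76 here): the harmonic mean is pulled toward the weaker of Precision and Recall, so a model cannot earn a high F1 by excelling at one while failing at the other.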
 
Measurement metrics and indicators of a highly accurate machine algorithm 

Industry-standard benchmarks for machine performance in comparable environments, using unstructured, phrase-based machine training with NLU, show F1 scores in the range of 0.35-0.65. videoBIO’s F1 score for its pre-trained competency assessments is 0.75, well above the industry average.

We are able to achieve this through the quality and specificity of the domain data collected and assessed in our questionnaires and corpus dataset. These data collection measures include:

1. The way questions are asked 
2. The type of questions asked
3. The focus of the assessment subject matter
4. The length of the response required

To learn more about videoBIO’s pre-trained AI assessments, or about custom-trained AI environments for companies interested in training on their own interview data, please contact us for a demonstration.