employees in your target role? If they don’t know
your employee outcomes, how can they predict yours?
They can't. Most job roles have multiple
KPIs that describe performance—do they predict each
of these separately?
5. Does the Solution Base Predictions on Outcome
Data or a Job Fit, Job Match or Job Blueprint Survey?
Data Science predicts what you ask it to predict. If you
want lower attrition or higher KPIs, the models must be
trained and validated with those data alone. The process
looks for fact-based patterns to drive your business.
Surprisingly, many solutions don't use this approach
and instead fall back on managerial bias.
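The principle above can be shown in a small sketch: fit a simple rule using only historical outcome data, then validate it on employees the rule never saw. All records, the score feature, and the threshold rule below are hypothetical illustrations, not any vendor's actual method.

```python
# Historical employees: an assessment score and the outcome we care about
# (1 = stayed past 12 months, 0 = left early). Illustrative data only.
history = [
    (72, 1), (65, 1), (80, 1), (55, 0), (40, 0),
    (68, 1), (35, 0), (77, 1), (50, 0), (60, 1),
]

train, test = history[:7], history[7:]

def fit_threshold(rows):
    """Pick the score cutoff that best separates stayers from leavers."""
    best_cut, best_acc = None, -1.0
    for cut, _ in rows:
        acc = sum((score >= cut) == bool(stayed) for score, stayed in rows) / len(rows)
        if acc > best_acc:
            best_cut, best_acc = cut, acc
    return best_cut

cutoff = fit_threshold(train)

# Validate on held-out employees the rule never saw -- the outcome data,
# not a manager's opinion, decides whether the model is any good.
holdout_acc = sum((s >= cutoff) == bool(y) for s, y in test) / len(test)
print(cutoff, holdout_acc)
```

The point is the workflow, not the toy rule: the model is both trained and judged against real outcomes.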
6. Does the Solution Use Machine Learning to
Recalibrate Your Predictive Models? How Often?
Business needs, role descriptions, and culture change
over time, and so do local labor conditions. For example,
Service Representatives may be incentivized to cross-sell related products, or new regulations may introduce
new compliance steps.
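One way a solution might detect that this kind of change has broken a model is a simple drift check, sketched below. The baseline, tolerance, and data are illustrative assumptions, not industry standards:

```python
# If accuracy on recent hires drifts below the validated baseline,
# flag the model for recalibration. Numbers are illustrative.
BASELINE_ACCURACY = 0.80   # accuracy measured at the last validation
DRIFT_TOLERANCE = 0.05     # decay we tolerate before retraining

def needs_recalibration(recent_predictions, recent_outcomes):
    hits = sum(p == o for p, o in zip(recent_predictions, recent_outcomes))
    accuracy = hits / len(recent_outcomes)
    return accuracy < BASELINE_ACCURACY - DRIFT_TOLERANCE, accuracy

# A quarter where the role changed (e.g., reps now cross-sell):
# the old model degrades and the check fires.
retrain, acc = needs_recalibration(
    [1, 1, 0, 1, 0, 1, 1, 0, 1, 0],
    [1, 0, 0, 0, 1, 1, 1, 0, 0, 0],
)
print(retrain, acc)
```

Whatever the mechanism, the vendor should be able to tell you how often a check like this runs.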
7. The New Validation Question: Criterion Validation?
HR has been taught to ask if the assessment is
validated. The first level of validation checks whether
the assessment measures are self-consistent. Continue
to ask this question. But ultimately you care about
whether the assessment's predictions accurately
correspond to improved business outcomes.
That is, are the predictions actually working?
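Criterion validation boils down to a simple measurement: how strongly do assessment scores correlate with a real business outcome for the same hires? A minimal sketch, with hypothetical scores and a hypothetical sales KPI:

```python
import math

def pearson(xs, ys):
    """Pearson correlation between two equal-length number lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

assessment_scores = [55, 60, 65, 70, 75, 80]   # illustrative
monthly_sales_kpi = [14, 18, 17, 22, 24, 27]   # illustrative outcome

r = pearson(assessment_scores, monthly_sales_kpi)
print(round(r, 3))
```

Internal consistency alone can never produce this number; only outcome data can.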
8. Can You Easily Access / Download Your Company’s
Talent Assessment Data? Talent assessment data is
a critical dataset for your company. If your Talent
Assessment vendor makes it difficult or impossible
to access your talent assessment data—this is a good
indication they are using pre-predictive technology and
that they don’t appreciate that this data is your asset.
9. How Easy is it to Deploy the Solution into the Talent
Acquisition Process and Use the Predictions? How
much training is required? Do your talent acquisition
professionals need to read long text reports, or get out
a calculator to use the predictions? The complexity
of a prediction should be kept out of the way of daily
operations. If your team still needs to “think” about what
the answer is, it is probably not a predictive solution.
10. Is there a Different Assessment for Every
Role? Or One Assessment with Multiple Predictive
Models? A separate assessment per role makes it
impossible to compare one candidate's predicted
performance across multiple roles. This may also be a signal that you
are working with an older, legacy (less predictive)
talent assessment supplier.
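The "one assessment, multiple models" design can be sketched as a single response vector scored against a separate weight vector per role. The roles, weights, and scoring function here are illustrative assumptions:

```python
# One model per role, each a weight vector over the same assessment
# dimensions. All weights and responses are illustrative.
role_models = {
    "sales_rep":   [0.6, 0.1, 0.3],
    "support_rep": [0.2, 0.5, 0.3],
    "team_lead":   [0.3, 0.3, 0.4],
}

candidate_responses = [0.8, 0.4, 0.9]  # one assessment, taken once

def score(weights, responses):
    return sum(w * r for w, r in zip(weights, responses))

# The same candidate data is scored against every open role.
fits = {role: round(score(w, candidate_responses), 2)
        for role, w in role_models.items()}
print(fits)
```

Because the candidate answers once, every role's model can read the same data, which is exactly what per-role assessments prevent.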
11. Is There an Answer Key for their Solution on
the Web? For many assessments, there are answer
keys and guides on how to fool or pass the test.
A data science-driven model would be custom to
your role in your company and continuously
evolving, making it very difficult for answer keys
and spoofing guides to keep up.
12. Does the Company Itself Specifically Tell You Not
to Use their Solution for Hiring / Talent Acquisition?
Some assessments, notably the Myers-Briggs Type Indicator (MBTI),
specifically implore users to not use the tool for
talent acquisition: “It is not ethical to use the MBTI
instrument for hiring or for deciding job assignments.”
13. Ask to see their company policy on employee
predictive modeling, discrimination, disparate impact,
and fairness. It is important that a predictive solution
has thought through the specific outcomes of their
models and how they fit into creating fair opportunity
for all applicants. In particular, it is vital for the solution
to satisfy government requirements for hiring.
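One concrete, widely cited check in this area is the "four-fifths rule" from the US Uniform Guidelines on Employee Selection Procedures: a group whose selection rate falls below 80% of the highest group's rate may indicate adverse impact. A minimal sketch with illustrative counts:

```python
# Four-fifths (80%) rule check. Applicant and hire counts are illustrative.
applicants = {"group_a": 100, "group_b": 80}
hired      = {"group_a": 40,  "group_b": 20}

rates = {g: hired[g] / applicants[g] for g in applicants}
top = max(rates.values())

# Each group's selection rate relative to the most-selected group.
impact_ratios = {g: rates[g] / top for g in rates}
flagged = [g for g, ratio in impact_ratios.items() if ratio < 0.8]
print(impact_ratios, flagged)
```

A vendor with a real fairness policy should be able to show you monitoring at least this concrete for their models' outcomes.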
14. Do Your Own (Internal) Data Scientists Approve of
this Predictive Solution? We recommend asking one
of your own data scientists (from HR, marketing, or
another area inside your own company) to accompany
you in your evaluation. They can tell a rigorous
approach from marketing fluff.
15. How Does the Predictive Solution Regularly Prove
to You that the Models Are Working? Ideally the
company you select will be able to show you two to
four times a year how your predictions are working.
Only use a predictive model during talent acquisition if
the predictions are accurate; if they're not, stop using
the models. You need this feedback.
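Such a periodic proof can be as simple as comparing last quarter's predictions with actual outcomes and issuing a verdict. The accuracy threshold below is an illustrative policy, not a standard:

```python
# Quarterly check: has the model earned continued use? Illustrative policy.
MIN_ACCEPTABLE_ACCURACY = 0.70

def quarterly_report(predicted, actual):
    correct = sum(p == a for p, a in zip(predicted, actual))
    accuracy = correct / len(actual)
    verdict = ("keep using" if accuracy >= MIN_ACCEPTABLE_ACCURACY
               else "stop and retrain")
    return accuracy, verdict

# Last quarter's predictions vs. what actually happened (illustrative).
acc, verdict = quarterly_report([1, 0, 1, 1, 0, 1, 0, 1],
                                [1, 0, 1, 0, 0, 1, 0, 1])
print(acc, verdict)
```

Two to four reports like this per year is the cadence the question above asks the vendor to commit to.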
Greta Roberts is CEO of Talent Analytics.