A robot assessing your cover letter: is that the future of human resources? It would seem so, judging by the rapid adoption of artificial intelligence at HR departments and recruitment firms. The idea sounds smart: instead of a possibly biased human being, you get an objectively calculating robot that picks out the best resumes. But is that legally permissible?
There are many software systems that can be used to screen applicants. Spotting interesting resumes is a popular application, but software that watches video calls to judge whether someone reacts nervously, evasively or deceptively is also appearing more and more.
What all this software has in common is that it works with statistics: from a mountain of data about previous applicants, common and distinguishing characteristics are identified, and new applications are then sorted into the right bucket on that basis.
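To make that concrete: such a screening system is, at its core, a statistical classifier trained on labeled historical applications. The sketch below is a deliberately minimal illustration with invented toy data, assuming scikit-learn as the library; it is not how any particular vendor's product works.

```python
# Minimal, purely illustrative sketch of statistical resume screening.
# Assumptions: scikit-learn is available; the toy data below is invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Historical applications with their outcomes (1 = invited, 0 = rejected).
# A real system would use thousands of resumes; this is toy data.
resumes = [
    "10 years Java, led a team of five engineers",
    "recent graduate, internship in marketing",
    "senior developer, Python and cloud architecture",
    "career switcher, basic spreadsheet skills",
]
outcomes = [1, 0, 1, 0]

# Learn which textual characteristics separate the two groups...
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(resumes, outcomes)

# ...and sort a new application into a bucket based on those statistics.
new_resume = ["eight years Python, managed a small dev team"]
print(model.predict_proba(new_resume))  # e.g. [[0.4, 0.6]] -> leaning "invite"
```

Note what the sketch makes visible: the model learns only from historical outcomes, so any pattern present in those outcomes, desirable or not, ends up encoded in the statistics.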
Objectivity
Objectivity is an important advantage for many companies. This way you know that your HR staff are not unwittingly rejecting women or applying other discriminatory criteria in their selection. But is such software really that objective? After all, the data that feeds such a system comes from historical practice, and discriminatory criteria may well be lurking in it.
A well-known example comes from Amazon: its HR robot hired only men, because the company had worked only with men for the first fifteen years of its existence.
New legislation
The fear of such ‘algorithmic exclusion’ is why the General Data Protection Regulation (GDPR) explicitly prohibits using AI to select applicants. Or rather: prohibits selecting applicants solely on the basis of AI.
Because as a tool for the recruiter, who then makes his or her own judgment, it is allowed. In practice, of course, it is difficult to prove whether the recruiter is simply going along with the AI.
However, new and stricter legislation is on its way from Europe: the AI Act. That science-fiction name is somewhat misleading, because the law covers all forms of algorithmic assessment or automated decision-making about people. A standard decision tree, or a program that walks through fixed steps and choices, is therefore covered as well.
The key factor is the impact that such a system can have on people. If that impact is negligible, the law allows its use without further rules. If the risks are unacceptable, its use is simply prohibited.
Clarity
HR systems sit in the middle: high-risk AI, but not unacceptably so. Strict requirements apply to this category, such as clear design documentation with risk management and built-in explanation modules, so that people understand why the system reached its conclusion. Accuracy must also be demonstrated in writing. Oh, and both the supplier and the user are liable for errors in all this.
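What such a ‘built-in explanation module’ might look like in its simplest form is sketched below, continuing the hypothetical classifier from the earlier sketch; real AI Act-grade explainability would of course go much further than three weighted words.

```python
# Sketch of a minimal "explanation module" for the toy classifier above.
# Assumptions: `model` is the TfidfVectorizer + LogisticRegression pipeline
# fitted in the earlier sketch, with scikit-learn >= 1.0; real
# explainability tooling is far richer than this.
def explain(model, resume_text, top_n=3):
    vectorizer = model.named_steps["tfidfvectorizer"]
    classifier = model.named_steps["logisticregression"]
    weights = vectorizer.transform([resume_text]).toarray()[0]
    # Each word's contribution = its tf-idf weight times the learned coefficient.
    contributions = weights * classifier.coef_[0]
    words = vectorizer.get_feature_names_out()
    ranked = sorted(zip(words, contributions), key=lambda wc: abs(wc[1]), reverse=True)
    for word, score in ranked[:top_n]:
        verdict = "counted for" if score > 0 else "counted against"
        print(f"'{word}' {verdict} this application ({score:+.2f})")

explain(model, "eight years Python, managed a small dev team")
```

The output is exactly the kind of justification the closing paragraph describes: not pleasant, perhaps, but traceable.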
At the moment, then, such systems are legal as an aid. And the applicant has one thing to hold on to: under the GDPR, a company must explain why it is rejecting him or her. ‘Our AI judged your competencies below par and your experience too limited.’ Not pleasant to hear, but clear.
Text: Arnoud Engelfriet