According to the digital association Bitkom, 21 percent of companies utilize artificial intelligence in human resources. In the initial selection of applicants, smart software evaluates resumes and chatbots conduct first interviews. People analytics supported by big data enables the analysis of vast quantities of data from diverse sources, ranging from social media to internal databases, in order to predict the suitability of applicants. This not only accelerates the recruiting process; in the best case, companies create a permanently self-optimizing factual foundation for strategic personnel decisions.
The benefits depend primarily on the quantity and quality of the training data for the intelligent systems, along with knowledge about which data best support the decision-making process. The crux of the issue: if algorithms are used that learn and form connections independently with the help of artificial neural networks, their behavior can be neither precisely predicted nor fully traced; it can only be estimated with a certain degree of probability.
The AI regulation from the European Commission creates greater legal security
This also makes it difficult to evaluate whether an AI solution fulfills the legal requirements. According to Bitkom, 54 percent of companies want support with the legal and ethical evaluation of their AI usage. The European Commission's draft AI regulation provides a first step towards greater legal security: it classifies HR management applications in the “high risk” category and stipulates requirements concerning data quality, documentation and transparency, human supervision, and precision and robustness against hacker attacks. Taken together with the requirements of data protection and labor law, this gives rise to five important areas of activity for companies seeking to minimize liability risks:
1. Create transparency during the planning phase itself
Which data is collected for the AI, and for what purpose? Are incorrect conclusions possible? To avoid violating the General Act on Equal Treatment, human resources managers need to talk to the technicians and obtain well-founded information about how the algorithms in use function and how they are trained. For example, there is a risk that an algorithm will discriminate against women because they tend to have more gaps in their resumes than men due to time spent caring for relatives or raising children. To avoid lawsuits claiming compensation for damages, minorities need to be adequately represented in the training datasets. It is also important to ensure that the labeling of the data itself is neither racist nor sexist.
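A first practical check on representation can be as simple as counting how protected groups are distributed in the training data. The following minimal sketch illustrates the idea; the field names and records are hypothetical, and a real audit would of course go far beyond raw counts:

```python
from collections import Counter

def representation_report(records, attribute):
    """Return the share of each value of a protected attribute
    (e.g. gender) in a training dataset."""
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: count / total for value, count in counts.items()}

# Hypothetical training records, for illustration only.
training_data = [
    {"gender": "f", "label": "suitable"},
    {"gender": "m", "label": "suitable"},
    {"gender": "m", "label": "suitable"},
    {"gender": "m", "label": "unsuitable"},
]

shares = representation_report(training_data, "gender")
# A strong imbalance (here: far fewer female records) signals that
# the dataset should be rebalanced before training.
```

Such a report makes under-representation visible early, during the planning phase, rather than after the model has already learned a skewed pattern.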
2. Analyze data protection risks
Data protection risks can be mitigated by technical measures such as the anonymization or pseudonymization of personal data. However, in the case of big data analyses, it is extremely difficult to eliminate personal links. As a consequence, a data protection impact assessment in accordance with Article 35 of the General Data Protection Regulation (GDPR) is advisable.
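Pseudonymization can be sketched as replacing a direct identifier with a keyed hash. This is a minimal illustration, not a complete data protection solution; the key name and record fields are assumptions for the example:

```python
import hashlib
import hmac

# Hypothetical key; under the GDPR it must be stored separately
# from the data, since whoever holds it can re-link the pseudonym
# to the person. That re-linkability is what distinguishes
# pseudonymization from full anonymization.
SECRET_KEY = b"store-separately-from-the-data"

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier (e.g. an applicant's e-mail
    address) with a keyed SHA-256 hash."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "applicant@example.com", "score": 87}
record["email"] = pseudonymize(record["email"])
```

A keyed hash (HMAC) is used rather than a plain hash so that outsiders cannot simply re-hash known e-mail addresses to reverse the pseudonyms.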
If personal references cannot be avoided, questions such as the following need to be clarified: Has consent been obtained that covers the use of the data for the AI? Are the scope and purpose of the data analysis described and implemented as transparently and understandably as possible? Does the fundamental technical configuration of the algorithm implement privacy by design and by default, ensuring that only the personal data absolutely necessary for recruiting applicants is processed? Are applicants informed about the automated decision-making, its logic, scope and objectives?
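The privacy-by-default question above can be translated into a simple data-minimization step: before any analysis, every field that is not strictly required is dropped. A minimal sketch, assuming hypothetical field names; the real set of necessary fields must come from the data protection impact assessment:

```python
# Fields deemed strictly necessary for the recruiting decision
# (hypothetical list for illustration).
NECESSARY_FIELDS = {"qualifications", "work_experience", "languages"}

def minimize(applicant_record: dict) -> dict:
    """Keep only the fields strictly required for the decision,
    implementing data minimization (privacy by default)."""
    return {k: v for k, v in applicant_record.items() if k in NECESSARY_FIELDS}

# Hypothetical raw record: direct identifiers are removed before processing.
raw = {
    "name": "A. Applicant",
    "date_of_birth": "1990-01-01",
    "qualifications": "M.Sc.",
    "work_experience": "5 years",
    "languages": "DE, EN",
}

minimized = minimize(raw)
```

Applying such a filter at the point where data enters the AI pipeline makes the "only what is absolutely necessary" requirement a property of the system configuration rather than a matter of discipline.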
3. Involve the works council
AI solutions in HR management are among the technical systems with which the behavior or performance of applicants can be monitored. At companies with a works council, the participation rights under Section 87(1) No. 6 of the Works Constitution Act (BetrVG) therefore apply. In addition, the Works Council Modernization Act, in force since June 18, 2021, strengthens these co-determination rights: for example, Section 80(3) BetrVG stipulates that the works council may commission an expert on questions regarding the use of AI, with the employer bearing the costs. Objections that the works council already possesses the necessary knowledge itself, or that involving an external expert is unjustified, do not count.
4. Work towards a company agreement
When planning intelligent systems, human resources managers should work towards concluding or modifying a company agreement in order to regulate the prerequisites, decision-making criteria and consequences of AI usage at the company. This serves to preserve both co-determination and other employee rights while also legitimizing data processing in the employment context in accordance with Article 88(1) GDPR.
5. Human beings remain the final authority
Just like humans, intelligent digital colleagues make mistakes. That is why the lawmaker restricts the scope of exclusively automated decisions in Article 22 GDPR. Unlike human discrimination, AI can discriminate structurally as a consequence of flawed design. For ethical reasons alone, digital colleagues should therefore be under consistent human supervision. Ideally, AI helps humans make more objective decisions.
Now is the time to act: HR managers should use the European Commission's AI regulation and the Works Council Modernization Act as an opportunity to weigh up the opportunities and risks of artificial intelligence in human resources, to analyze the legal framework, and to implement it in their processes in collaboration with the technicians.