Rules for fair and trustworthy AI in human resource management

The rise of HR technology and artificial intelligence is often seen with skepticism.

While artificially intelligent HR technology tools sound promising for companies, employees and works councils often regard them with skepticism. How can companies create transparency in order to allay fears and counter this skepticism?

Whether recruiting, answering simple employee questions via a chatbot, performance management, internal job matching, or tailored training – HR tech can utilize artificial intelligence (AI) for a diverse range of applications. Ideally, AI improves the human and social facets of HR work: it not only takes over small-scale administrative tasks, creating greater freedom for individual employee counseling and support, but also enables strategic personnel planning based on data-driven analysis rather than subjective assessment, gut feeling and intuition.

Uncertainties regarding AI

According to a study conducted by the German Association of Human Resources Managers (BPM) and the HR Tech Ethics Advisory Board among HR managers and employee representatives, one-third of HR departments already use digital assistants, are in a pilot phase, or plan to introduce them in the near future. Whereas HR employees see great potential for improvement in these tools, works councils view the use of AI more critically. A majority of respondents, though fewer employee representatives than HR managers, consider training and explanations of how the applications work to be sufficient. Both sides would like binding guidelines on the use of such technologies to provide greater security and commitment when these applications are introduced.

The world’s first legal framework for AI from 2023?

The legislator also sees a need for action. As reported, the EU Commission presented a proposal for the regulation of artificial intelligence on April 21, 2021. As the world’s first regulatory framework for AI, it could come into force as early as 2023, followed by a one-year transition period before it becomes applicable. As a regulation, it does not need to be transposed into national law. The draft classifies applications in human resource management as “high risk”. As we have already reported, companies need to take this development into account and keep an eye on five areas of action in order to meet the future requirements governing data quality, documentation and transparency, human oversight, accuracy, and robustness against hacker attacks.

Developing company policies

However, the study also shows that the requirements do not necessarily have to be legal in nature in order to improve trust. Guidance from expert bodies such as the HR Tech Ethics Advisory Board is also accepted. Building on this foundation, companies can then develop their own guidelines, something more than half of the study participants regarded as necessary. Ultimately, it is primarily how a specific company uses a system that determines whether the system is trustworthy or not.

The more complex the application, the more caution is required

Users of HR analytics software need to understand how the applications work and critically examine their effectiveness. What data is collected for the AI, and for what purpose? Are incorrect conclusions possible? Not every tool advertised as artificial intelligence actually is one. However, where strong AI algorithms are used, i.e. systems capable of machine learning that autonomously establish connections with the help of artificial neural networks, their behavior can be neither precisely predicted nor traced; it can only be estimated with a certain degree of probability. The checklist for trustworthy AI from the High-Level Expert Group on AI enables companies to review AI systems using guiding questions. The KIDD research project of the German Federal Ministry of Labor and Social Affairs also provides guidance on how companies can introduce AI in a transparent and legally secure manner.

The works council has a say

AI solutions in personnel management also count as technical systems that are capable of monitoring the behavior or performance of applicants. The participation rights under Section 87 (1) No. 6 of the Works Constitution Act (BetrVG) therefore apply at companies with a works council. Section 95 (2a) BetrVG also makes clear that the works council has a say when companies use AI as part of their application process. The Works Council Modernization Act has further strengthened these rights: Section 80 (3) BetrVG, for example, stipulates that employee representatives may commission an expert on questions regarding the use of AI and that the company must bear the costs. Objections that the works council itself possesses the necessary knowledge, or that involving an expensive expert is unjustified, do not count.

Although negotiations are often tough when companies involve employee representatives, the best way to address concerns and reservations regarding AI is for HR managers to work towards a company agreement right from the outset. Companies without a works council are likewise advised to implement a corresponding framework regulation. Such a regulation can specify, for example, that HR analytics software only ever uses aggregated data and therefore cannot be used to monitor individual performance. A further advantage of this approach: data processing in the employment context pursuant to Art. 88 (1) GDPR can be legitimized in the same way.
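
To illustrate what an “aggregated data only” clause could mean in practice, the following is a minimal, hypothetical Python sketch (not taken from the article, and not a legal or product recommendation): individual records are grouped by team, and results are only released for groups above an assumed minimum size, so that no conclusions can be drawn about single employees. The column names, the threshold of five, and the engagement score are purely illustrative assumptions.

import pandas as pd

MIN_GROUP_SIZE = 5  # assumed threshold; a real company agreement would define this value

def aggregate_engagement(df: pd.DataFrame) -> pd.DataFrame:
    """Return team-level averages, suppressing groups too small to anonymize."""
    grouped = df.groupby("team").agg(
        headcount=("employee_id", "count"),
        avg_engagement=("engagement_score", "mean"),
    )
    # Drop teams below the minimum size so no individual can be singled out.
    return grouped[grouped["headcount"] >= MIN_GROUP_SIZE].reset_index()

if __name__ == "__main__":
    records = pd.DataFrame({
        "employee_id": range(1, 9),
        "team": ["Sales"] * 5 + ["HR"] * 3,
        "engagement_score": [7, 8, 6, 9, 7, 8, 5, 6],
    })
    # Only the "Sales" team (5 people) appears in the output; "HR" (3 people) is suppressed.
    print(aggregate_engagement(records))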

Horror scenarios such as “fired by the algorithm” cannot become reality in Germany and Europe, because the GDPR, employee data protection law, the Works Constitution Act and, in the future, the EU AI Regulation set clear limits. In addition, humans always remain the final authority when it comes to making decisions. Nevertheless, implementing HR tech will only succeed if employers manage to show transparently how these systems are tracked and monitored.