AI to support management decisions
We already discussed AI in this blog at the end of 2022. At that time, we examined the question of “liability for damages caused by artificial intelligence: What does HR need to prepare for?”.
This article addresses another aspect of AI usage at companies: management liability when using AI, an issue which is becoming increasingly relevant and controversial. The duties of board members, managing directors and supervisory board members are becoming increasingly complex in the face of growing challenges such as digitization and the current global economic situation.
Therefore, it is understandable that managers draw on support to complete their duties efficiently. Why not turn to AI? Especially when it comes to data-driven issues, AI can be a particularly helpful means of efficiently preparing a basis for decision-making.
AI as a non-human decision-making aid
Yet what happens when AI makes “mistakes” and produces unusable results which then form the basis for incorrect and damaging business decisions? Do human decision-makers have to take responsibility for these errors?
This is where the established liability system anchored in statutory tort law reaches its limits: AI systems make decisions based on data and algorithms, without direct human input, so transferring legal responsibility to “the AI” does not work.
Emerging EU law for AI usage
When it comes to whether and how AI is used at a company, managers are free, within the scope of their decision-making authority, to decide whether a human or an AI prepares the information on which decisions are based.
However, a forthcoming EU regulation, the draft EU AI Regulation (KI-VO-E), will stipulate a legal framework governing the use of AI at companies. The current draft envisages a range of obligations concerning the use of AI; violating these obligations may result in penalties and may also affect the liability of the responsible parties.
How to work with AI?
Simply forgoing the use of AI due to the potential liability risks is not a viable solution over the medium term. As such, it is essential to find an approach which efficiently supports management duties yet does not give rise to incalculable liability risks.
For this to succeed, companies need to document the AI systems in use or planned so that a risk assessment (covering data protection, IT security, trade secrets, etc.) can be carried out with the responsible bodies throughout the company.
If AI is used, its activities need to be as traceable as possible and also precisely documented. At the same time, it is vital to ensure that AI does not take actions and make decisions autonomously but that these are controlled by qualified human specialists. Last but not least, it is important to create structures and responsibilities which monitor the aforementioned aspects.
In other words, reliable AI governance needs to be established.
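As a purely illustrative sketch of what such documentation and monitoring might look like in practice (the record fields, names, and checks below are assumptions for illustration, not requirements taken from any regulation), an AI system inventory with a simple governance check could be modeled like this:

```python
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    # Hypothetical inventory fields; a real register would follow the
    # company's own risk-assessment template (data protection, IT
    # security, trade secrets, etc.).
    name: str
    purpose: str
    data_categories: list      # e.g. personal data, financial data
    human_oversight: bool      # decisions reviewed by qualified specialists?
    decisions_logged: bool     # AI activity traceable and documented?

def governance_gaps(record: AISystemRecord) -> list:
    """Return the governance requirements this system does not yet meet."""
    gaps = []
    if not record.human_oversight:
        gaps.append("no human oversight of AI decisions")
    if not record.decisions_logged:
        gaps.append("AI activity not documented or traceable")
    return gaps

# Example: a forecasting tool that logs its output but runs unsupervised.
tool = AISystemRecord(
    name="sales-forecast-ai",
    purpose="prepare revenue forecasts as a basis for board decisions",
    data_categories=["financial data"],
    human_oversight=False,
    decisions_logged=True,
)
print(governance_gaps(tool))  # → ['no human oversight of AI decisions']
```

The point of the sketch is the structure, not the code: each AI system gets a documented record, and a defined body regularly checks every record against the same governance criteria.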
No flying blind when using AI
A seat-of-the-pants approach is destined to fail when using AI within the company. Using AI at the executive level requires close collaboration with IT, data management, and legal & compliance to minimize liability risks.
Deploying suitable AI systems with maximum reliability helps avoid liability for bad decisions based on incorrect AI output: if managers have exercised the due diligence of a prudent and conscientious business manager when deciding on and working with AI as part of their management duties (cf. Section 93 (1) Sentence 1 of the German Stock Corporation Act (AktG)), then they are on the safe side.
Summary of the key facts:
- AI may be used when making management decisions.
- When introducing and using AI systems, the responsible parties have to work closely with IT, data management, and legal & compliance (AI governance).
- If managers act with the due diligence of a prudent and conscientious business manager, they minimize the liability risks arising from incorrect AI results.