Who is responsible if AI makes mistakes in the company?

In the modern corporate landscape, artificial intelligence (AI) is no longer just a field of experimentation but a central driver of efficiency and innovation. Whether in automated lending, AI-supported diagnostics in medical technology or predictive maintenance systems in industry, algorithms increasingly make decisions with far-reaching consequences.

But where there is light, there is also shadow. When an autonomous system makes wrong decisions, hallucinates data or reproduces discriminatory patterns, one of the most pressing questions of the digital transformation arises: who is responsible? The answer is complex and moves between technical autonomy, human oversight and a rapidly evolving regulatory framework.

Traditional liability models at their limits

Traditional liability models often reach their limits with AI systems. Classic product liability assumes a clear causal connection between a design defect and the damage. With modern machine learning models, however, especially deep neural networks, black-box effects complicate this attribution: since the AI's decision-making paths are often no longer comprehensible in detail even to its developers, significant problems of proof can arise. Companies thus find themselves in a situation where they reap the benefits of the technology but must deal with risks that are hard to predict. Legal scholarship is therefore increasingly discussing eased evidentiary standards, presumption rules and, in some cases, stricter liability approaches; a general strict liability for AI, however, does not currently apply across the board. Organizations should therefore re-evaluate their risk management systems and treat technical monitoring and control bodies as an integral part of their digital processes.

A first step towards AI regulation

Against this background, the EU AI Regulation (AI Act) gains particular importance as a regulatory anchor point because it addresses precisely the interface between liability issues and preventive regulation. Its approach follows a risk-based logic: the higher an AI system's potential risk to fundamental rights and safety, the stricter the requirements. For companies, this means a detailed classification obligation. "High-risk AI systems" in particular, for example in human resources management or credit scoring, are subject to extensive requirements.

In particular, the AI Regulation obliges companies to ensure the following points:

  • Risk management systems: Establishing continuous processes to identify and minimize risks throughout the entire life cycle.
  • Data governance: Using high-quality training, validation and testing datasets to avoid bias.
  • Technical documentation: Providing all information necessary for authorities to assess compliance.
  • Human oversight: Implementing interfaces that allow natural persons to monitor, override or correct AI decisions (a minimal sketch follows this list).
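
To illustrate what such a human-oversight interface can look like, here is a minimal Python sketch of a decision gate that applies AI recommendations automatically only above a confidence threshold and routes everything else to a human reviewer. All names (`AIDecision`, `route_decision`, the 0.95 threshold) are hypothetical illustrations, not requirements of the AI Act.

```python
from dataclasses import dataclass

# Hypothetical sketch of a human-oversight gate: confident AI outputs are
# applied automatically (and logged), borderline cases go to a human.

@dataclass
class AIDecision:
    applicant_id: str
    recommendation: str  # e.g. "approve" or "reject"
    confidence: float    # model confidence in [0, 1]

def route_decision(decision: AIDecision, confidence_threshold: float = 0.95) -> str:
    """Assumed routing policy; the threshold would be set by risk management."""
    if decision.confidence >= confidence_threshold:
        return "auto"          # applied automatically, kept in the audit log
    return "human_review"      # a qualified employee must confirm or override

# Usage example
d = AIDecision(applicant_id="A-123", recommendation="reject", confidence=0.81)
print(route_decision(d))  # -> human_review
```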

The regulation shifts the burden away from purely ex post claims settlement towards a preventive compliance structure that holds companies accountable as early as the design phase.

People remain responsible

Despite advancing automation, the "human in the loop" remains the central dogma of ethical and legal AI design. Responsibility cannot be delegated entirely to the algorithm. In companies, this manifests itself in the obligation to set up qualified oversight structures. A stop button alone is not enough; the employees entrusted with oversight must have the competence to understand the system's logic and detect anomalies. A new professional profile is emerging here at the interface of data science, ethics and law. If an AI in the recruiting process systematically disadvantages certain population groups, the responsibility lies with the company if it has not implemented bias detection mechanisms. Case law tends to assume organizational negligence when companies blindly trust algorithmic recommendations (automation bias). In this context, responsibility primarily means maintaining human judgment across the digital process chain in order to preserve the autonomy of the individual vis-à-vis the machine.
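
How such a bias detection mechanism can look in code is sketched below: a simple comparison of selection rates per group against the informal "four-fifths" rule of thumb. The function names and toy data are hypothetical, and real legal tests for discrimination are considerably more nuanced.

```python
# Minimal sketch of a bias check for a recruiting model: compare the
# positive-decision rate per group; flag groups whose rate falls below
# 80% of the best-off group (the informal "four-fifths rule").

def selection_rates(decisions, groups):
    """Share of positive decisions (1 = invited) per group."""
    return {
        g: sum(d for d, grp in zip(decisions, groups) if grp == g)
           / sum(1 for grp in groups if grp == g)
        for g in set(groups)
    }

def disparate_impact_alerts(decisions, groups, threshold=0.8):
    """Groups whose selection rate is below threshold * the highest rate."""
    rates = selection_rates(decisions, groups)
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if best and r / best < threshold}

# Usage example with toy data
decisions = [1, 0, 1, 1, 0, 0, 0, 1]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(disparate_impact_alerts(decisions, groups))  # {'B': 0.333...}
```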

No single culprit in complex AI systems

An often underestimated aspect of the question of responsibility lies in the complexity of digital supply chains. Modern AI applications are rarely monolithic products from a single vendor; they build on open-source libraries, cloud infrastructures and pre-trained base models. If an error occurs, the question arises whether and to what extent suppliers share responsibility. The AI Act addresses this through transparency obligations along the value chain: companies are not only liable for the end product but must also be able to verify the integrity of the components they purchase.

Important dimensions of this systemic responsibility are:

  1. Duty of care in selection: Auditing providers for certifications and compliance with ethical standards.
  2. Transparency of models: Demanding explainability (Explainable AI) from software partners.
  3. Ongoing monitoring: Continuously testing model performance under real conditions to detect model drift (a minimal drift check is sketched after this list).
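
What ongoing monitoring for model drift can look like is sketched below using the Population Stability Index (PSI), a widely used heuristic that compares the score distribution at training time with the distribution observed in production. The 0.25 alert threshold is a common rule of thumb, not a regulatory requirement, and the data here is synthetic.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference and a live score sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_counts, _ = np.histogram(expected, bins=edges)
    a_counts, _ = np.histogram(actual, bins=edges)
    e_frac = np.clip(e_counts / e_counts.sum(), 1e-6, None)  # avoid log(0)
    a_frac = np.clip(a_counts / a_counts.sum(), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

# Usage example with synthetic scores: production scores have drifted upwards
rng = np.random.default_rng(0)
train_scores = rng.normal(0.5, 0.1, 10_000)  # distribution at training time
live_scores  = rng.normal(0.6, 0.1, 10_000)  # distribution in production
value = psi(train_scores, live_scores)
print(f"PSI = {value:.3f} -> {'ALERT: review/retrain' if value > 0.25 else 'ok'}")
```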

Ultimately, the discussion is moving away from the search for a single scapegoat and towards shared responsibility. In a connected economy, liability becomes a matter of contractual design and technical validation, with the deploying company always retaining primary responsibility towards end users and society.

FAQ – concrete practical questions on AI liability answered

Who is liable if an AI makes the wrong credit decision?
In principle, the company that deploys the AI is liable. It is obliged to understand how the system works and to manage risks appropriately, even with external software. The provider may also be liable, but this depends heavily on the contractual agreements.

What happens if an AI discriminates in recruiting?
If an AI systematically disadvantages certain applicants, this can be viewed as discrimination. Companies are particularly liable if they have not implemented suitable measures to identify and avoid bias. Organizational negligence is often assumed here.

Who is responsible for errors made by external AI service providers?
The company remains responsible even when using external tools. It must ensure that the systems used comply with regulatory requirements. At the same time, suppliers may be jointly responsible under contracts or product liability.

What happens if an AI gives incorrect medical recommendations?
Particularly strict requirements apply in sensitive areas such as medical technology. The responsibility usually lies with the operator of the system and the manufacturer. What is crucial is whether sufficient control mechanisms and human supervision were in place.

Can a company claim that its AI cannot be explained?
No. A lack of traceability does not relieve a company of responsibility. On the contrary: companies must ensure that they can explain and control the AI's decisions at least to an appropriate extent.
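
As a sketch of what explaining decisions "to an appropriate extent" can mean technically, the following example uses scikit-learn's permutation importance to document which input features drive a black-box model's decisions. The model and data are toy stand-ins; a real credit model would need far more thorough documentation.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Toy stand-in for a credit model: synthetic data, hypothetical feature names.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```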

Who is liable for so-called “hallucinations” of AI systems?
If an AI system provides false or fabricated information and damage occurs as a result, the company that uses or distributes this content is usually liable. What is crucial is whether appropriate testing mechanisms have been implemented.
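
One possible testing mechanism is sketched below: generated statements are only released if they are grounded in approved source material, everything else goes to human review. The verbatim substring check is deliberately crude; production systems would typically use retrieval and semantic similarity, and all names here are hypothetical.

```python
# Crude sketch of a release gate for AI-generated content: a statement is
# only published if it can be grounded in an approved source document.

APPROVED_SOURCES = [
    "Our standard warranty period is 24 months.",
    "Support is available Monday to Friday, 9:00-17:00.",
]

def is_grounded(statement: str, sources=APPROVED_SOURCES) -> bool:
    """Very rough grounding check: verbatim containment in any source."""
    needle = statement.strip().lower()
    return any(needle in source.lower() for source in sources)

def release_or_review(statement: str) -> str:
    return "release" if is_grounded(statement) else "human_review"

# Usage examples
print(release_or_review("Our standard warranty period is 24 months."))  # release
print(release_or_review("We offer a lifetime warranty."))               # human_review
```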

What role does the AI Act play in liability?
The AI Act primarily regulates preventive obligations rather than classic liability. However, it obliges companies to undertake risk management, documentation and control. Violations can lead to sanctions and can have an indirect impact on liability questions.

What does “human-in-the-loop” mean in practice?
It means that a human must have both the ability and the competence to review AI decisions and to intervene where necessary. A purely formal control mechanism without genuine professional assessment is not enough.

How can companies specifically protect themselves?
Important measures are:

  • Establishing structured risk management
  • Selection of verified and transparent providers
  • Training employees in using AI
  • Continuous monitoring of the systems
