EU AI Act: What the AI Regulation really means – overview & practical guide

[Image: a humanoid robot whose face is overlaid with the EU flag – a symbolic image for artificial intelligence and the EU AI Act.]

The EU AI Act – the European Union's new AI regulation – is the first comprehensive legal framework for artificial intelligence. Its aim is ambitious: to enable innovation while minimising risks to health, safety and fundamental rights. The Regulation applies beyond the EU’s borders. Anyone who places AI systems on the EU market or uses their outputs in the EU is in scope – even if based outside Europe. In this article, we explain what the EU AI Act regulates, how the risk classes work, what obligations companies face, what sanctions are possible, and how you can get started pragmatically now.

What is the EU AI Act about?

The AI Regulation defines for the first time what an AI system is: a machine-based system that operates with varying levels of autonomy and infers from its inputs how to generate outputs such as predictions, content, recommendations or decisions. The central principle is the risk-based approach. Not all AI is treated equally – the higher the risk to people or public safety, the stricter the requirements.

Some applications are prohibited per se. These include social scoring systems along the lines of the Chinese model, manipulative systems that induce people to engage in harmful behaviour, and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (with very narrow exceptions). The legislator is thus setting clear red lines.

In addition, the EU AI Act addresses all roles along the value chain: providers/manufacturers, importers, distributors – and also users (deployers), i.e. companies that use third-party AI. This is important because compliance does not end with development, but extends to operational use.

The risk classes – and why they matter in everyday life

The Regulation distinguishes between four categories. Unacceptable risk is prohibited, period. High-risk AI remains permissible but is strictly regulated when used in sensitive areas such as medical devices, critical infrastructure, employment (applicant screening, performance evaluation) or credit scoring. Limited risk mainly concerns transparency requirements, for example in chatbots or generative AI: users must be able to recognise that they are interacting with AI or that content is AI-generated (keyword: deepfakes). For low risk – e.g. spam filters, simple assistance functions – the EU AI Act does not impose any special requirements, but good practice (monitoring, human review) remains advisable.

In practice, the intended purpose determines the classification: the same model can be classified as a high-risk system or pose only a low risk, depending on its use. Someone who uses image AI for marketing graphics has different obligations than someone who uses it to make medical diagnoses.
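
To make the purpose-driven classification tangible, here is a minimal sketch of how an internal tool might map a use case to a risk class. The categories mirror the Regulation, but the keyword lists and function names are our own illustrative simplification – a real assessment must follow Article 5 and Annex III, not keyword matching.

```python
from enum import Enum

class RiskClass(Enum):
    UNACCEPTABLE = "prohibited"
    HIGH = "high-risk"
    LIMITED = "limited-risk (transparency)"
    MINIMAL = "minimal-risk"

# Illustrative keyword sets only -- a real assessment must follow
# the prohibited practices (Art. 5) and the Annex III use cases.
PROHIBITED_PURPOSES = {"social scoring", "subliminal manipulation"}
HIGH_RISK_PURPOSES = {"credit scoring", "applicant screening", "medical diagnosis"}
TRANSPARENCY_PURPOSES = {"chatbot", "content generation"}

def classify(intended_purpose: str) -> RiskClass:
    """Map an intended purpose to a (simplified) EU AI Act risk class."""
    purpose = intended_purpose.strip().lower()
    if purpose in PROHIBITED_PURPOSES:
        return RiskClass.UNACCEPTABLE
    if purpose in HIGH_RISK_PURPOSES:
        return RiskClass.HIGH
    if purpose in TRANSPARENCY_PURPOSES:
        return RiskClass.LIMITED
    return RiskClass.MINIMAL

# The same model, two different classifications -- the purpose decides:
print(classify("content generation"))  # RiskClass.LIMITED
print(classify("medical diagnosis"))   # RiskClass.HIGH
```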

What companies must do depending on their risk level

The obligations for high-risk AI are extensive, but manageable if approached systematically. Requirements include risk management throughout the entire life cycle (development, operation, monitoring), data governance with a focus on the quality and representativeness of training, validation and test data, detailed technical documentation and logging, transparency towards users (purpose, functional limits, residual risks) and human oversight with the possibility of intervention or shutdown. In addition, there are requirements for robustness, accuracy and cybersecurity.
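
What "human oversight with the possibility of intervention" can look like in practice: a minimal sketch in which borderline model decisions are routed to a human reviewer instead of being executed automatically. The threshold and function names are illustrative assumptions, not requirements taken from the Regulation.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    subject_id: str
    score: float    # model output, e.g. estimated probability of default
    approved: bool

CONFIDENCE_THRESHOLD = 0.85  # illustrative value; calibrate per use case

def escalate_to_reviewer(subject_id: str, score: float) -> Decision:
    # Placeholder: a real system would open a review ticket here.
    print(f"Human review required for {subject_id} (score={score:.2f})")
    return Decision(subject_id, score, approved=False)

def decide_with_oversight(subject_id: str, score: float) -> Decision:
    """Decide automatically only on clear cases; defer borderline ones to a human."""
    if score >= CONFIDENCE_THRESHOLD:
        return Decision(subject_id, score, approved=True)
    if score <= 1 - CONFIDENCE_THRESHOLD:
        return Decision(subject_id, score, approved=False)
    return escalate_to_reviewer(subject_id, score)

print(decide_with_oversight("loan-4711", 0.62))  # borderline -> goes to review
```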

Before a high-risk system is placed on the market or put into service, a conformity assessment is usually required – with CE marking and, where applicable, registration in an EU database.

Companies that only use AI are likewise responsible: they must ensure that the AI they use is compliant, follow the instructions for use, assess risks in their own context and monitor the system. Providers remain responsible for development, conformity and updates.

Where the risk is limited, the focus is on transparency: users must know that they are talking to AI, and AI content must be recognisable as such. This can be achieved pragmatically – through clear indications in the interface, watermarks or metadata in media content. Here, too, what sounds simple requires clean processes; otherwise transparency will be patchy.
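
One pragmatic way to make AI content recognisable as such is to attach machine-readable provenance metadata at the point of creation. The sketch below shows one possible shape for such a record; the field names are our suggestion, not a mandated format.

```python
import json
from datetime import datetime, timezone

def label_ai_content(payload: str, model_name: str) -> dict:
    """Wrap generated content with a machine-readable AI disclosure."""
    return {
        "content": payload,
        "ai_generated": True,  # flag the UI can render as a visible notice
        "provenance": {
            "model": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

record = label_ai_content("Our new product launch ...", model_name="text-model-v1")
print(json.dumps(record, indent=2))
```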

Generative AI: transparency without hindering innovation

Generative AI (text, image, audio, video) is currently the focus of public attention. The AI Regulation requires AI-generated content to be labelled and appropriate safeguards to be put in place to prevent deception. This presents both an opportunity and a challenge for companies: marketing, content production and internal knowledge work stand to benefit enormously – but without transparent labelling and internal guidelines, there is a risk of reputational damage and regulatory risks. Those who establish standards for prompts, approvals, labelling and logs at an early stage will be able to scale generative AI safely.
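
Such standards can start small, for example with an append-only generation log that records who approved what. A minimal sketch, with invented field names and file location:

```python
import csv
import hashlib
from datetime import datetime, timezone

LOG_FILE = "genai_log.csv"  # illustrative location

def log_generation(prompt: str, output: str, approved_by: str) -> None:
    """Append one generation event to the audit log."""
    with open(LOG_FILE, "a", newline="") as f:
        csv.writer(f).writerow([
            datetime.now(timezone.utc).isoformat(),
            hashlib.sha256(prompt.encode()).hexdigest(),  # prompt hash keeps the log lean
            len(output),       # size of the generated content
            approved_by,       # who signed the content off
        ])

log_generation("Draft a press release about ...",
               "AI-generated draft ...",
               approved_by="editor@example.com")
```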

Sanctions: it can get expensive quickly

The EU AI Act provides for heavy fines, based on the logic of the GDPR:

  • Operating prohibited systems can be punished with fines of up to €35 million or 7% of global annual turnover, whichever is higher.
  • Breaches of obligations – such as a missing conformity assessment, inadequate documentation or missing transparency notices – can result in fines of up to €15 million or 3%.
  • Formal breaches, such as supplying incorrect information to authorities, can result in fines of up to €7.5 million or 1%.
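
To see what these caps mean in practice, a quick illustrative calculation – the turnover figures are invented, and note that for SMEs the lower of the two amounts applies:

```python
def max_fine(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Upper fine limit: the higher of the fixed cap and the turnover share."""
    return max(fixed_cap_eur, turnover_eur * pct)

# Prohibited practice, company with EUR 2 bn global annual turnover:
print(max_fine(2_000_000_000, 35_000_000, 0.07))  # 140,000,000 -> 7% dominates

# Documentation breach, company with EUR 100 m turnover:
print(max_fine(100_000_000, 15_000_000, 0.03))    # 15,000,000 -> fixed cap dominates
```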

In addition, there is a risk of sales bans, recall obligations and significant reputational damage. Given the staggered transition periods, now is the right time to establish compliance – not when the clock is already ticking.

Timeline

  • The AI Regulation has been in force since 1 August 2024.
  • The first prohibitions and AI literacy requirements have applied since 2 February 2025.
  • Further requirements (including on general-purpose AI (GPAI), governance and sanctions) have applied since 2 August 2025.
  • Most obligations apply from 2 August 2026.
  • Obligations for high-risk AI embedded in regulated products follow from 2 August 2027.

How to get started: five steps without bureaucratic overkill

Start with an AI inventory: Which systems are in use, for what purpose, with which data and interfaces? Assign these use cases to risk classes and clarify your role (provider, user, importer/distributor). On this basis, establish lean AI governance: an AI policy, clear responsibilities (e.g. an interdisciplinary AI board), defined processes for risk assessment, incident handling and change management.
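
An AI inventory needs no special tooling to begin with; a structured record per use case is enough. A minimal sketch – the schema is our suggestion, not a prescribed format:

```python
from dataclasses import dataclass, field

@dataclass
class AIUseCase:
    name: str
    purpose: str                  # the intended purpose drives the risk class
    role: str                     # "provider", "deployer", "importer" or "distributor"
    risk_class: str               # "prohibited", "high", "limited" or "minimal"
    data_categories: list[str] = field(default_factory=list)
    owner: str = ""               # accountable person or team

inventory = [
    AIUseCase("CV screening", "applicant screening", "deployer", "high",
              ["applicant data"], owner="HR"),
    AIUseCase("Marketing copy", "content generation", "deployer", "limited",
              ["product descriptions"], owner="Marketing"),
]
```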

From a technical and organisational perspective, establish data governance, bias controls, model monitoring and logging; for high-risk cases, plan conformity assessments and CE marking at an early stage. At the same time, train your teams – AI literacy is not a luxury, but a prerequisite for effective implementation. Important: integrate data protection (GDPR), information security and purchasing/contracts; many requirements can be elegantly anchored there (e.g. AI clauses in procurement contracts, checklists in the ISMS).
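
Model monitoring can likewise start small, for example with a periodic check of live performance against the documented reference value. A sketch under that assumption, with illustrative thresholds:

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model-monitor")

REFERENCE_ACCURACY = 0.92  # accuracy documented at validation time (illustrative)
ALERT_THRESHOLD = 0.05     # tolerated absolute drop before escalation

def check_model_health(current_accuracy: float) -> None:
    """Compare live accuracy against the documented reference and alert on drift."""
    drop = REFERENCE_ACCURACY - current_accuracy
    if drop > ALERT_THRESHOLD:
        log.warning("Accuracy dropped by %.3f - trigger the incident process", drop)
    else:
        log.info("Model within tolerance (accuracy=%.3f)", current_accuracy)

check_model_health(0.85)  # drop of 0.07 exceeds the threshold -> warning
```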

Those who take a pragmatic approach will quickly realise that the EU AI Act is not necessarily a barrier to innovation, but rather a regulatory framework on which trust can be built. Companies that invest today will implement faster tomorrow – and can turn regulatory hurdles into a seal of quality.

Conclusion

The EU AI Act sets the new standard for AI compliance. It is crucial to assign risk classes correctly, implement obligations proportionately and establish processes along the AI life cycle. This allows legal certainty, user trust and innovation to be combined. If you would like to develop a roadmap, carry out specific risk assessments or set up a suitable compliance and documentation system, we would be happy to support you.

Summary of the key facts

  • Focus on generative AI: The EU AI Act requires clear labelling of AI content (text/image/audio/video), warnings for deepfakes and, where possible, watermarks/metadata. Users must be able to recognise that they are interacting with AI or that content has been artificially generated/manipulated.
  • How to implement it: Embed content labelling in CMS/workflows, activate auto-tagging & model logs, standardise warning texts, human review for sensitive content, vendor/model selection with contractual commitments (GPAI obligations, IP/copyright, security).
  • Start now, reduce risks: AI inventory & labelling policy, training for editorial/marketing/HR, data protection compliance. Violations can be expensive (up to €15 million/3% of turnover for breaches of obligations, and up to €35 million/7% for prohibited practices).