
AI Act

The European AI Regulation, also known as the "AI Act," represents a decisive shift in the regulation of digital technology. Whereas companies were previously able to implement artificial intelligence relatively freely, the AI Act will require Danish organizations to work in a more structured, documented, and transparent manner with AI-based systems.

The AI Act is not just another law with legal provisions and paragraphs, but a framework that defines what responsible AI is and, thus, how companies should build trust and quality around their AI solutions in the future.

This article provides a comprehensive, strategic overview of the regulation and its practical implications for businesses.

The purpose of the AI Act

Although the AI Act is often mentioned in connection with regulation and compliance, its primary purpose is not to restrict companies, but to create a controlled and secure framework for the development and use of AI systems. This is mainly due to the speed at which AI has developed in recent years and the societal risks that come with it. The AI Act aims to ensure:

  • Protecting citizens' fundamental rights
    AI must not violate privacy, lead to discrimination, or restrict citizens' rights without legal authority.
  • Transparency in AI systems
    Companies must be able to explain how AI works, especially when the output has an impact on people.
  • Safe and responsible innovation
    The regulation aims to strengthen trust in AI by protecting both users and businesses from unintended consequences.
  • Uniform rules across the EU
    The AI Act is designed as a regulation to avoid national special rules and create a stable digital single market.


In other words, the AI Act has been created to ensure that AI is implemented responsibly, transparently, and with respect for fundamental societal values.


Implementation timeline

Although the AI Act is comprehensive, it is being rolled out in stages to give companies time to adapt their processes, systems, and documentation. However, this also means that certain parts are already in effect today.

Time Key points
August 2024 The AI Act entered into force, and the first general obligations began to apply.
February 2025 Prohibition of AI with unacceptable risk. AI literacy obligations apply.
August 2025 Requirements for providers of general-purpose AI models. Governance and penalty rules apply.
August 2026 High-risk AI is regulated in practice. Requirements for documentation, quality, control, and post-market surveillance.
August 2027 Requirements extend to high-risk AI embedded in regulated products.
By end of 2030 Full implementation expected, including AI components of certain large-scale EU information systems. Possible tightening and new interpretations.


The EU's risk-based classification of AI 

The AI Act is based on a risk-based approach, which creates differentiated regulation depending on the impact of the AI system on people and society. The classification is key because it determines the requirements that apply to documentation, responsibility, and operation.

1. Unacceptable risk (prohibited)

This category covers AI systems that are considered harmful or unethical. They pose significant risks to citizens' civil liberties and trust in societal institutions. Examples include:

  • Social scoring, where citizens are ranked hierarchically according to behavior or data.
  • Manipulation of human behavior, e.g., political influence through AI-generated content.
  • Real-time remote biometric identification in publicly accessible spaces, which could enable mass surveillance of the population.
  • Emotion recognition used in workplaces or educational settings.

With only a few narrowly defined exceptions, these systems are banned across all EU member states.

2. High risk (strictly regulated)

High-risk systems are used in situations where AI can have a significant impact on citizens' lives, safety, or finances. Therefore, high standards are set for documentation, quality, transparency, and human oversight. Areas include, among others:

  • HR and recruitment, where AI evaluates or rejects candidates.
  • Finance and credit rating, where decisions affect financial access.
  • The healthcare sector, where AI can support diagnoses.
  • Critical infrastructure, e.g., energy supply, water, transportation, and emergency response.
  • Social services, e.g., assessment of citizens' eligibility.
  • Education, including automated grading.


For these systems, companies must document, among other things, the data basis, risk analysis, operating procedures, and human oversight.

3. Limited risk (primarily transparency requirements)

This category includes AI systems that do not have a direct or significant impact on citizens' rights, but which must still be used transparently. Typical examples are:

  • Chatbots that interact with customers or employees.
  • Deepfakes in marketing or communication, where the user must be informed.
  • Generative AI that produces text, images, or audio.


The key requirement here is that users are clearly informed that the content is AI-generated.

The AI Act as a strategic tool

The AI Act should be seen as an opportunity to professionalize the company's approach to AI. The regulation provides companies with a structured framework for:

  • Responsible development
  • Risk management
  • Documentation
  • Transparency
  • Data quality
  • Governance


Companies that implement these principles early will stand stronger with customers, partners, and authorities, and will achieve a more mature, secure, and scalable use of AI.