AI Act
The European AI regulation, also called the “AI Act”, represents a crucial shift in the regulation of digital technology. Where companies could previously implement artificial intelligence relatively freely, the AI Act will in future require Danish organizations to work with AI-based systems in a more structured, documented and transparent way.
The AI Act is not just another piece of legislation full of paragraphs and clauses, but a framework that defines what responsible AI is, and thus also how companies must build trust and quality around their AI solutions in the future.
This article provides a comprehensive, strategic overview of the regulation and its practical significance for business.
Purpose of the AI Act
Although the AI Act is often referred to in the context of regulation and compliance, its primary purpose is not to restrict businesses, but to create a controlled and secure framework for the development and application of AI systems. This is mainly due to the speed at which AI has developed in recent years and the societal risks that come with it. The AI Act must ensure:
- Protection of citizens' fundamental rights: AI must not violate privacy, lead to discrimination or limit citizens' rights without legal authority.
- Transparency in AI systems: Companies must be able to explain how AI works, especially when the output matters to people.
- Safe and responsible innovation: The regulation is intended to strengthen trust in AI by safeguarding both users and companies against unintended effects.
- Uniform rules across the EU: The AI Act is designed as a regulation to avoid national special rules and create a stable digital single market.
In other words: the AI Act was created to ensure that AI is implemented responsibly, with documentation and with respect for society's fundamental values.
Timeline for implementation
Although the AI Act is comprehensive, it is rolled out in stages to give companies time to adapt processes, systems and documentation. However, this also means that certain parts already apply today.
| Time | Main points |
| --- | --- |
| August 2024 | The AI Act entered into force and the first overall obligations applied. |
| February 2025 | Ban on AI practices with unacceptable risk. |
| August 2025 | Rules for general-purpose AI model providers and post-market monitoring apply. |
| February 2026 | Special requirements for selected large-scale information systems. |
| August 2026 | High-risk AI is regulated in practice: requirements for documentation, quality and control. |
| Before 2030 | Full implementation is expected, with possible amendments and new interpretations. |
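Because the obligations phase in over time, it can help to treat the rollout as data. The sketch below is purely illustrative, assuming the milestone dates from the table above (the helper name and the one-line summaries are this article's simplification, not wording from the regulation):

```python
from datetime import date

# Illustrative milestone dates, taken from the timeline table above.
AI_ACT_MILESTONES = {
    date(2024, 8, 1): "Entry into force; first overall obligations",
    date(2025, 2, 2): "Ban on AI practices with unacceptable risk",
    date(2025, 8, 2): "Rules for general-purpose AI models and post-market monitoring",
    date(2026, 8, 2): "High-risk AI regulated in practice (documentation, quality, control)",
}

def obligations_in_force(today: date) -> list[str]:
    """Return the milestones that already apply on a given date, oldest first."""
    return [desc for deadline, desc in sorted(AI_ACT_MILESTONES.items())
            if deadline <= today]
```

Checking a date in spring 2025, for example, would show that the entry into force and the prohibitions already apply, while the high-risk obligations are still ahead.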
The EU's risk-based classification of AI
The AI Act is based on a risk-based approach, which creates differentiated regulation depending on how much impact the AI system has on people and society. Classification is central because it determines which requirements apply to documentation, responsibility and operation.
1. Unacceptable risk (prohibited)
This category covers AI systems that are considered harmful or unethical. They involve significant risks to citizens' freedoms and trust in society's institutions. Examples include:
- Social scoring, where citizens are ranked hierarchically according to behavior or data.
- Manipulation of human behavior, e.g. political influence through AI-generated content.
- Real-time remote biometric identification in publicly accessible spaces, which can lead to surveillance of the population.
- Emotion recognition used in workplaces or educational settings.
These systems are completely banned across EU member states.
2. High risk (strictly regulated)
High-risk systems are used in situations where AI can have a significant impact on citizens' lives, safety or finances. Therefore, high demands are placed on documentation, quality, transparency and human control. Areas include, among other things:
- HR and recruitment, where AI assesses or screens out candidates.
- Finance and credit rating, where decisions affect financial access.
- The health sector, where AI can support diagnoses.
- Critical infrastructure, e.g. energy supply, water, transport and emergency preparedness.
- Social benefits, e.g. assessment of citizens' eligibility.
- Education, including automated grading.
For these systems, companies must, among other things, document their data foundation, risk analyses, operating procedures and human oversight.
3. Limited risk (primarily transparency requirements)
This category includes AI systems that do not have a direct or significant impact on citizens' rights, but which still need to be used transparently. Typical examples are:
- Chatbots that interact with customers or employees.
- Deepfakes in marketing or communication, where the user must be informed.
- Generative AI that produces text, images or sound.
Here, the central requirement is that users are clearly informed that the content is AI-generated.
AI Act as a strategic tool
The AI Act should be seen as an occasion to professionalize the company's approach to AI. The regulation provides companies with a structured framework for:
- Responsible development
- Risk management
- Documentation
- Transparency
- Data quality
- Governance
Companies that implement these principles early will stand stronger with customers, business partners and authorities, and will achieve a more mature, secure and scalable use of AI.