
3: AI ACT

When companies start using AI, it often happens gradually: first in small, internal contexts, later in more business-critical processes. At some point the question arises:

Who is responsible when decisions are made in practice by a system using AI? 

The short answer is: the company is responsible, regardless of whether the decision is made directly by a human or with the support of AI. The AI Act does not regulate artificial intelligence as a technology, but your company's choice to use AI in contexts where people, rights and important decisions are affected. The AI Act shifts the focus away from the technology itself and onto the way companies manage, monitor and document the use of AI. This is where governance and GDPR play a decisive role.

AI cannot be left to fend for itself 

AI systems are dynamic. They change when data changes, when usage expands, and when systems are integrated more deeply into the organization. A system that works correctly and compliantly today can cause problems tomorrow, without anyone actively changing the system's functionality.

The AI Act therefore requires your company not only to implement AI, but to maintain ongoing control over it. Governance is the structure that makes this possible. It ensures that your company can:

  • maintain an overview of where AI is used throughout the company  
  • detect errors and unintended effects of use  
  • react quickly if something goes wrong  
  • explain and document decisions to users and authorities  

 

Without governance, the use of AI is in practice unlawful – and even harder to defend in an inspection.

 

What governance means in practice 

Governance is not about long documents, but about being able to make clear decisions in day-to-day operations. In practice, the AI Act expects the company to be able to answer questions such as:

  • Who has overall responsibility for the AI system?  
  • Who can change the system – and when?  
  • How is the system monitored in operation?  
  • What happens if the AI makes the wrong decision?  

 

The AI Act does not require specific tools or technologies, but rather the ability to explain and document the company's choices. Two companies can therefore be compliant in different ways, but neither can abdicate responsibility. To answer the questions above, most companies need the following (a minimal register sketch follows the list):

  • clear ownership of each AI system  
  • fixed processes for changes and new uses  
  • ongoing monitoring of output and behaviour  
  • clear procedures for errors, complaints and deviations  
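
As a concrete illustration, the four elements above can be captured in a simple AI system register. The sketch below is a minimal, hypothetical Python example; the field names and risk categories are assumptions for the illustration, not terms defined by the AI Act.

```python
# A minimal sketch of an AI system register entry. All field names are
# illustrative assumptions, not terms defined by the AI Act itself.
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AISystemRecord:
    """One entry in a company-wide register of AI systems in use."""
    name: str                    # e.g. "Applicant screening assistant"
    purpose: str                 # the concrete use, not the technology
    owner: str                   # person with overall responsibility
    risk_level: str              # e.g. "minimal", "limited", "high"
    change_approvers: list[str]  # who may approve changes or new uses
    monitoring: str              # how output and behaviour are followed
    incident_procedure: str      # what happens when something goes wrong
    last_reviewed: date = field(default_factory=date.today)


# Example entry: the register answers "where is AI used, who owns it,
# and how is it controlled?" in one place.
register = [
    AISystemRecord(
        name="Applicant screening assistant",
        purpose="Pre-sorting of job applications for HR",
        owner="Head of HR",
        risk_level="high",
        change_approvers=["Head of HR", "DPO"],
        monitoring="Monthly sample review of rejected applications",
        incident_procedure="Escalate to DPO; pause automated sorting",
    ),
]
```

A register of this kind is not mandated in any particular form; the point is that ownership, change control and monitoring are written down per system rather than assumed.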

 

Risk depends on the application 

One of the most important principles of the AI Act is that risk is assessed on the basis of purpose. The same AI tool can be unproblematic in one context and critical – or outright illegal – in another.

A system used for internal research rarely poses a great risk. If, on the other hand, AI is used to assess job applicants, creditworthiness or health information, the requirements change significantly.   

It is therefore not the AI technology itself that is high-risk, but the company's decision to use it in certain contexts. This means companies must always assess the following (a simple screening sketch follows the list):

  • who the AI affects  
  • how important the output is  
  • what the consequences are if the system is wrong  
  • whether there is a risk of discrimination  
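
To make the assessment repeatable, the four questions can be turned into a simple screening step. The sketch below is a hypothetical illustration; the categories and thresholds are assumptions for the example, not the AI Act's formal classification rules.

```python
# A minimal sketch of purpose-based risk screening, built on the four
# questions from the list above. The categories are illustrative only.
def screen_use_case(affects_individuals: bool,
                    output_drives_decisions: bool,
                    serious_consequences_if_wrong: bool,
                    discrimination_risk: bool) -> str:
    """Return a rough risk indication for one concrete use of an AI tool."""
    if affects_individuals and (serious_consequences_if_wrong
                                or discrimination_risk):
        return "high – full assessment (and likely a DPIA) required"
    if affects_individuals and output_drives_decisions:
        return "elevated – document controls and human oversight"
    return "low – record the use in the AI register and move on"


# The same tool lands in different categories depending on the use:
print(screen_use_case(False, False, False, False))  # internal research
print(screen_use_case(True, True, True, True))      # assessing job applicants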

 

The risk assessment is the foundation for correct governance, documentation and control.   

   

The interaction with GDPR: When AI processes data about people

When AI processes personal data or forms part of decisions about people, key GDPR requirements are triggered. Two areas in particular are crucial.

Automated decisions: If AI is used for decisions that have a significant impact on a person – e.g. rejection, screening out, or awarding of benefits – the system must not stand alone.

There must be real human control over such decisions. This means that a person must be able to:   

  • understand the decision-making basis  
  • assess reasonableness  
  • change or reverse the decision  

 

Formal approval of an AI decision is not enough. The person in control must actually assess the content of the decision.
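
As an illustration of the difference, the sketch below shows a review step where the reviewer is presented with the full decision basis and can change or reverse the outcome. The `Decision` structure and its field names are hypothetical.

```python
# A minimal sketch of real human control over an automated decision.
# The structure and field names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Decision:
    subject: str   # the person affected
    outcome: str   # e.g. "reject"
    basis: dict    # the inputs and reasons behind the outcome


def human_review(decision: Decision, reviewer: str,
                 override: str | None = None) -> Decision:
    """The reviewer sees the full decision basis and may change the outcome."""
    print(f"{reviewer} reviews the decision for {decision.subject}:")
    for factor, value in decision.basis.items():
        print(f"  {factor}: {value}")   # understand the decision basis
    if override is not None:            # the human can change or reverse it
        decision.outcome = override
    return decision


# Merely confirming without reading the basis would not count as real control.
reviewed = human_review(
    Decision("applicant 1042", "reject",
             {"experience": "2 years", "test score": "68/100"}),
    reviewer="HR case worker",
    override="invite to interview",
)
print(reviewed.outcome)
```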

For high-risk applications, the GDPR requires a data protection impact assessment (DPIA). The purpose of the analysis is to assess whether the risk that the use of a system or a process poses to the persons concerned can be reduced to an acceptable level. If this is not possible, the AI solution must not be used in practice.

The DPIA thus becomes not only a legal requirement, but a concrete decision-making tool that forces the company to decide whether the purpose is legitimate and proportionate – not just whether the technology works.

   

Transparency: Knowing when AI is involved

The AI Act sets several general requirements for all AI systems and tools. People must not be left in doubt when they are interacting with AI. This means that companies must be open about:

  • the use of chatbots  
  • AI-generated content  
  • automated case processing  
  • AI's role in decision-making processes  

 

Transparency is not about explaining algorithms in detail, but about making the company's use of AI understandable to those affected by it.   

   

Logging and ongoing monitoring 

When AI is used in practice, the company must be able to explain what happened if something goes wrong. That requires traces to exist. Logging and monitoring make it possible to document the following (a minimal logging sketch follows the list):

  • when the system was used  
  • what output it gave  
  • whether and when people intervened  
  • how errors were handled  
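
A structured log is often the simplest way to cover these four points. The sketch below uses Python's standard logging module; the field names are illustrative assumptions, not requirements from the AI Act.

```python
# A minimal sketch of a structured AI decision log. Field names are
# illustrative; the point is that every use leaves a reviewable trace.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_audit")


def log_ai_decision(system: str, output: str, human_intervened: bool,
                    error_handling: str | None = None) -> None:
    """Record when the system was used, what it produced, and who intervened."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when it was used
        "system": system,
        "output": output,                      # what output it gave
        "human_intervened": human_intervened,  # whether people stepped in
        "error_handling": error_handling,      # how errors were handled, if any
    }
    logger.info(json.dumps(record))


log_ai_decision("Applicant screening assistant",
                "recommend rejection",
                human_intervened=True)
```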

 

Without logging, there is no explanation. And without explanation, there is no accountability – either to users or authorities.   

   

A clear way forward 

It is not about eliminating risk, but about being able to manage, explain and measure it. For most companies, working with the AI Act can be brought together in a few clear steps:   

  • get an overview of where AI is used  
  • clarify purpose and risk level  
  • determine responsibilities and roles  
  • ensure human control where necessary  
  • document decisions and changes continuously  

 

   

AI governance and GDPR are not obstacles to your company's innovation.

They are the prerequisite for AI to be used responsibly and on a larger scale. Companies that get control of governance, risk assessment and documentation will find that the AI Act will not become a barrier, but a framework that creates clarity, quality and trust in the work with artificial intelligence.