
2: AI ACT

When the AI Act came into effect, a common misconception quickly arose in many companies: "After all, we don't develop AI, so this hardly applies to us."

But in practice it is far more nuanced.     

The role of provider or commissioner does not depend on industry, ambition or technical maturity, but on how the company interacts with the AI systems it uses.

And the reality is that many companies are already providers without even realizing it.   

Why the distribution of roles is so crucial 

The AI Act is built, among other things, on the principle that the party that influences the functionality of an AI system bears responsibility for the risks and consequences that may arise from its use. The distribution of roles is therefore the first and most important step in any compliance process. It determines:

  • how extensive the documentation must be  
  • which technical requirements the company must meet  
  • how deep risk assessments must go  
  • how demanding a possible audit will be  
  • how much responsibility the company bears if the AI makes mistakes  

 

The provider: The one who shapes the AI 

Many imagine that a "provider" is only a larger company: someone who writes algorithms from scratch and offers complex LLMs. But in the logic of the AI Act, the definition is much broader.

Imagine a company that has purchased an AI system but subsequently adjusts model parameters, trains the system on its own data or builds an API layer on top so that the output is better adapted to its processes. Although it has never written a line of code, it has now influenced how the AI works. And thus it is a provider.

A company is a provider when it:

  • develops an AI system in whole or in part  
  • has a system developed that is marketed and operated under the company's name  
  • changes functionality, decision logic or training data  
  • integrates AI into a product so that the original purpose of the AI changes  
  • white-labels, resells or distributes AI  

What matters is the degree of control. When a company affects the technology, it also assumes technical responsibility. The provider role is the heaviest and most heavily regulated role in the AI Act. The provider must therefore be able to document:

  • how the system is developed  
  • what data is used to train the AI  
  • why the model can make the decisions it makes  
  • how risk is reduced  
  • how the system is monitored after it has been put into service 

 

The commissioner: The company that uses AI in practice

Where the provider develops, brands and shapes the technology, the commissioner (the role the AI Act's English text calls the "deployer") is the one who uses it in practice. But the responsibility does not disappear for that reason.

Imagine an HR department using an external AI recruitment tool in its hiring process. It does not change the algorithm, but it uses the output in hiring decisions, thereby affecting people's lives. In this case, the HR department, and thus the company, is the commissioner, but its responsibility is still significant. The commissioner is the company that:

  • uses AI developed by others
  • does not make direct technical adjustments
  • implements AI based on provider instructions
  • uses generative AI services without changing core functionality or the purpose of use.

 

The commissioner is therefore responsible for how the AI is used, and must:

  • assess the risk of the AI's impact on people and processes
  • ensure transparency towards users and employees
  • comply with GDPR, for example by preparing a Data Protection Impact Assessment (DPIA)
  • ensure human oversight of crucial decisions, e.g. in a hiring process
  • monitor output continuously for errors, bias or unintended consequences

 

It is not enough to say: "We just follow the system's recommendations."

Three situations where companies can become providers: 

Here are the scenarios where many companies, without knowing it, move from commissioner to provider.

When a small adjustment has major consequences

An external AI system is put into use without changes. To make the system “a little better”, the model is adapted by a developer so that it better suits the company's data and workflows. 

Although the change seems limited, it means that the company now has an influence on how the AI works.
This triggers a shift in responsibility: the company is then considered a provider under the AI Act. 

When the purpose changes responsibility  

A generative AI system is initially used as an internal knowledge tool.
Later it becomes part of the company's HR process and is used to screen applicants. 

The moment the AI is involved in decisions that can affect people's opportunities and rights, the level of risk changes significantly.
This means that the company no longer simply uses AI, but assumes provider responsibility under the AI Act. 

When AI becomes “your own”

A company integrates an external AI tool into its product and presents it as part of its own solution. For the customer, the AI now appears as the company's own system. 

The moment the company puts AI on the market under its own name, it also takes over responsibility for the system. This applies regardless of who originally developed the technology. 

How do you determine the company's real role? 

  • Does your company influence how the AI is developed, how it is branded, or which types of technology are used? → Provider 
  • Is it a "plug and play" AI solution, or an application of AI without technical or branding changes? → Commissioner 
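The two questions above can be sketched as a simple decision helper. This is an illustrative sketch only: the flag names are assumptions made for this example, and a real role assessment requires legal review, not a script.

```python
from dataclasses import dataclass


@dataclass
class AIUsage:
    """Simplified flags mirroring the checklist above (illustrative only)."""
    modifies_model_or_training_data: bool  # e.g. tuned parameters, retrained on own data
    changes_intended_purpose: bool         # e.g. knowledge tool repurposed for screening
    markets_under_own_name: bool           # e.g. white-labeling or reselling


def classify_role(usage: AIUsage) -> str:
    """Return 'provider' if the company shapes the AI, else 'commissioner'."""
    if (usage.modifies_model_or_training_data
            or usage.changes_intended_purpose
            or usage.markets_under_own_name):
        return "provider"
    return "commissioner"


# A plug-and-play deployment with no changes stays a commissioner:
print(classify_role(AIUsage(False, False, False)))  # prints "commissioner"
# Fine-tuning the model on company data shifts the role to provider:
print(classify_role(AIUsage(True, False, False)))   # prints "provider"
```

Note that a single "yes" is enough to shift the role: the degree of control, not the amount of code written, is what matters.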

 

Such a checklist can help investigate one's role, but it should never be the only assessment. Documentation is crucial in the event of supervision and requests from authorities. The company should always be able to document:

  • which AI systems are used in the company  
  • where AI is part of processes  
  • how the AI systems used influence people and people-related decisions
  • whether the purpose of the AI's use changes over time  
  • whether technical integrations change output or logic 
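One way to keep the documentation points above in order is a structured AI inventory. The sketch below is illustrative only; the field names are assumptions made for this example, and a real register would live in the company's compliance tooling.

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """One entry in an AI inventory, mirroring the checklist above (illustrative)."""
    name: str                        # which AI system is used
    processes: list                  # where the AI is part of processes
    affects_people: bool             # does output influence people-related decisions?
    original_purpose: str
    current_purpose: str             # compared against the original to detect drift
    technical_integrations: list = field(default_factory=list)

    def purpose_changed(self) -> bool:
        """Flag purpose drift, which can shift the company's role over time."""
        return self.original_purpose != self.current_purpose


record = AISystemRecord(
    name="Recruitment screening tool",
    processes=["HR / hiring"],
    affects_people=True,
    original_purpose="internal knowledge tool",
    current_purpose="applicant screening",
)
print(record.purpose_changed())  # prints "True": the role may have shifted
```

Reviewing such records periodically makes it visible when a system's purpose or integration has changed enough to warrant a fresh role assessment.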

 

Conclusion: The role affects AI responsibility 

The division of roles between provider and commissioner thus forms the foundation of the company's entire compliance setup.

The provider bears the technical responsibility and the commissioner bears the operational responsibility.

And because the role follows the action, companies must continuously document and evaluate whether they are still in the right place in the AI Act's liability model.