On 1 August 2024, the European Artificial Intelligence Act (AI Act) officially entered into force – a pivotal moment in the regulation of AI technologies. Part 3 of our blog series explores who the EU AI Act affects, both within and beyond the European Economic Area (EEA), and outlines the key compliance obligations.
The AI Act will apply in full from 2 August 2026, 24 months after its entry into force, although certain provisions take effect earlier. For example, the prohibitions on unacceptable risk AI systems apply from 2 February 2025.
You can read Parts 1 and 2 of our blog series for a detailed overview of the AI Act timeline and the risk-based classification of AI systems:
Compliance with the AI Act Part 1: Timeline and important deadlines
Compliance with the AI Act Part 2: What is ‘high-risk’ activity?
Let’s now delve into the specifics of who must comply with the AI Act and the key requirements organisations may need to address to ensure compliance.
Similar to the General Data Protection Regulation (GDPR), the AI Act has extra-territorial reach. This makes it a significant law with global implications and means that its provisions apply to any organisation marketing, deploying, or using an AI system in the EU, even if the system is developed or operated outside the EU.
For example, if an AI system hosted in the US generates data or decisions that impact individuals or businesses within any of the 27 EU Member States, that system must comply with the AI Act.
The aim of the extra-territorial scope is to ensure the fundamental rights of EU residents are respected, regardless of international boundaries. This approach seeks to promote a consistent standard of ethical AI practices, encouraging all organisations to uphold high standards of accountability and transparency.
Compliance obligations for organisations are determined by two main factors: the risk classification of the AI system, and the organisation’s role in the AI supply chain.
In Part 2 of our blog series, we covered the risk classification of AI systems. Now, let’s look at the different role categories within the AI supply chain as specified by the AI Act, and explore the specific obligations.
Under the AI Act, organisations fall into one of six distinct roles, each with its own set of obligations. These roles are Provider, Deployer, Distributor, Importer, Authorised Representative, and Product Manufacturer. Here’s an overview of each role:
A Provider is an individual or organisation that designs and develops an AI system and makes it available for use in the EU. Providers are responsible for ensuring the system meets the necessary requirements of the AI Act (the compliance obligations of Providers are detailed below).
Deployers are individuals or organisations that use an AI system developed by a Provider. The responsibilities of Deployers under the AI Act are minimal if the AI system is used without any changes. However, if a Deployer modifies the system significantly or uses it under their own name or trademark, they take on the Provider’s responsibilities. This means they must ensure the AI system meets the relevant regulations and standards, just as the original Provider would.
A Distributor is an individual or organisation in the supply chain, other than a Provider or Importer, that makes an AI system available on the EU market.
Importers are natural or legal persons based in the EU who place on the EU market an AI system that bears the name or trademark of a company or individual established outside the EU.
Product Manufacturers are individuals or organisations that place an AI system on the EU market, or put one into service, as an integral part of their own product and under their own name or trademark.
An Authorised Representative is an individual or organisation based in the EU who has been formally appointed by a Provider located outside the EU.
The Representative takes on the responsibility of managing and fulfilling the regulatory obligations and documentation required by the AI Act on behalf of the Provider. This is similar to the GDPR Representative role, although the documents that must be maintained are more detailed and extensive than those required under the GDPR. This is because the AI Act involves complex regulatory requirements for AI systems, covering a broad range of technical, operational, and safety aspects.
The obligations of the AI Act mostly concern the Provider role. This is good news for many organisations that are considered Deployers. For example, a company using ChatGPT to support internal processes typically falls under the Deployer category, meaning their primary responsibility is to ensure they use the AI system in compliance with existing guidelines and data protection obligations, rather than dealing with the more stringent obligations imposed on Providers.
However, Deployers do have certain responsibilities, and organisations must carefully assess whether their customisation or modification of an AI system might shift their role to that of a Provider.
AI literacy – Providers and Deployers must ensure that all staff and agents using AI systems have an appropriate level of AI literacy. The required level depends on their role and the associated risks, similar to the requirement for mandatory data protection training under the GDPR.
Transparency – Providers and Deployers must ensure that any AI system interacting with individuals (a ‘natural person’ in the Act’s terms) meets transparency obligations, such as clearly marking content that is generated or manipulated by AI.
Registration – Providers and Deployers of high-risk AI systems must register the system in the EU database. The process is similar to data protection registration with a supervisory authority.
The Provider role is the most heavily regulated for a number of reasons. Providers are responsible for designing, developing, and bringing an AI system to market. They control the creation and operation of these systems. Therefore, they perform a crucial role in ensuring their system meets the required standards for safety, effectiveness, and ethical considerations.
Transparency and accountability are key principles of the AI Act. Providers must ensure their AI systems are easy to understand, and they must clearly communicate the system’s functionalities, limitations, and potential risks. This transparency helps users know exactly what to expect and how to use the AI system safely and effectively.
The AI Act is a landmark piece of legislation, setting the first global standards for the responsible development and deployment of artificial intelligence systems.
As with many new regulations, the EU’s AI legislation has sparked concerns and debates among various stakeholders, including industry associations, tech companies, and legal professionals. Their concerns echo the initial criticisms that surrounded the introduction of the GDPR: namely, the potential difficulties for organisations and businesses in interpreting and implementing its provisions.
However, despite its complexity, the AI Act, much like the GDPR, has a structured approach that makes implementation more manageable. There are clear definitions for the six roles in the AI supply chain (Provider, Deployer, Distributor, Importer, Product Manufacturer, and Authorised Representative). Each role comes with specific compliance obligations, with the Provider role carrying the greatest responsibilities, although Deployers also have certain duties. Importantly, if an organisation integrates an AI system into its own products or services, or modifies it in ways not originally intended, it could be reclassified as a Provider. This would result in the organisation facing more stringent obligations.
With the AI Act coming into full effect in August 2026, it is essential for organisations to familiarise themselves with the compliance obligations and how they apply.
Compliance with the AI Act could serve as a market differentiator. Organisations adhering to the regulations could leverage their compliance as a unique selling point, attracting clients and partners who value responsible and ethical AI practices.
Look out for the final instalment of our blog series, Part 4: Strategies for achieving compliance with the AI Act. In this blog, we will explore some of the best practices to guide you in meeting compliance requirements.
In the meantime, should your organisation require data protection or AI governance advice, please contact us.