In the second part of our blog series, Compliance with the AI Act Part 2: What is ‘high-risk’ activity?, we explore the AI Act’s risk-based approach to the classification of AI systems. The EU Artificial Intelligence Act, approved by the EU Council on 21 March 2024, is a world-first comprehensive AI law intended to harmonise rules for the development, deployment, and use of artificial intelligence systems across the EU. What applications are prohibited, what is ‘high-risk’ activity, and what systems are exempt?
Our Compliance with the AI Act blog series explores what you need to know about the legal obligations of deploying certain artificial intelligence (AI) technologies under the EU’s landmark AI Act.
For details of the AI Act’s phased implementation timeline and key deadlines, see Part 1 of our blog series: Compliance with the AI Act Part 1: Timeline and important deadlines.
The AI Act takes a risk-based approach to the classification of AI systems. It aims to balance innovation with regulation to prevent harm to health, safety, and fundamental human rights. By assessing risk, the legislation recognises that not all AI systems pose the same level of threat and that varying levels of control and oversight are required.
AI systems are categorised into different risk levels based on their potential impact, with the burden of compliance increasing in proportion to the risk.
These are the three main categories:
Unacceptable risk: AI applications falling into this category are banned entirely due to their unacceptable potential for negative consequences.
High risk: these systems have a significant impact on people’s safety, wellbeing, and rights, so they are permitted but subject to stricter requirements.
Minimal risk: these systems pose minimal dangers and therefore have fewer compliance obligations.
The prohibitions on unacceptable-risk AI systems will come into force six months after the AI Act is published in the Official Journal of the EU (see the phased implementation timeline in Part 1).
The European Commission will regularly review the list of prohibited AI applications, with the first review scheduled for 12 months after the AI Act enters into force.
The following AI practices fall under the prohibited category. These are the techniques and approaches that pose unacceptable risks to health and safety or fundamental human rights:

Subliminal, manipulative, or deceptive techniques that materially distort behaviour and cause significant harm
Exploitation of vulnerabilities related to age, disability, or social or economic situation
Social scoring that leads to detrimental or unfavourable treatment
Predicting the risk of an individual committing a criminal offence based solely on profiling or personality traits
Untargeted scraping of facial images from the internet or CCTV footage to build facial recognition databases
Emotion recognition in workplaces and educational institutions (except for medical or safety reasons)
Biometric categorisation to infer sensitive attributes such as race, political opinions, religious beliefs, or sexual orientation
‘Real-time’ remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions)
Most of the AI Act addresses the regulation of high-risk AI systems, which fall into three distinct categories: AI systems that are products in their own right, AI systems used as safety components of products, and standalone AI systems in defined high-risk areas.
Let’s look at these high-risk categories in a little more detail:
AI systems as products: this refers to AI systems that are not a component or feature of a larger product, but rather the product in its entirety. Many of these types of products are already regulated by certain EU harmonisation laws, which are listed in Annex I of the AI Act. Examples include medical devices, heavy industrial machinery, cars, and toys.
If you develop or deploy AI systems in a sector with tightly managed safety legislation, there is a high probability the system will be covered here, and you should check the contents of Annex I in full.
As these products are already subject to strict safety regulations, they are automatically considered high risk under the AI Act.
AI systems as safety components: this covers cases where an AI system isn’t a standalone product but performs safety-related functions within a product, for example, where an AI system is used for monitoring, controlling, or managing safety features.
Many of these systems relate to the products listed in Annex I of the AI Act, such as industrial machinery, lifts, medical devices, and motor vehicles.
Standalone high-risk AI systems: certain AI systems not linked to the products in Annex I are also considered high risk; these are listed in Annex III of the AI Act.
This defined list covers systems that could significantly impact people’s opportunities and potentially cause systemic bias against certain groups.
These systems fall into eight broad areas:
Biometrics: certain biometric processing is entirely prohibited, as detailed above, but all other biometric processing is classified as high risk (with the exception of verifying an individual’s identity for cybersecurity purposes, e.g. Windows Hello).
Critical infrastructure: AI systems used as safety components in managing critical digital infrastructure (similar to the list in Annex I), and AI systems used in the supply of water, gas, or electricity.
Education and vocational training: any AI system determining admissions or evaluating learning outcomes is high risk due to the potential impact on people’s lives, e.g. the risk of perpetuating historic discrimination against women and ethnic minorities.
Employment: any AI system used for recruitment, job application analysis, or candidate evaluation is considered high risk, as are decision-making AI tools used for performance monitoring, work relationships, or termination of employment.
Essential services: systems determining access to essential services, whether public benefits such as unemployment, disability, and healthcare support, or private benefits such as credit scoring systems.
Law enforcement: certain tasks are considered high risk, including the use of lie detectors or similar biometric tools for testimony assessment, and systems used to assess the likelihood of an individual reoffending.
Migration, asylum, and border control: systems used to assess the security risk of migrants entering the EU, or to process and evaluate asylum claims. AI systems used to verify ID documents are exempt from this.
Administration of justice and democratic processes: this includes AI systems used in legal research or interpreting the law, such as legal databases used by lawyers and judges, as well as systems that could influence voting, like those used to target political ads.
The AI Act exempts certain AI systems that would otherwise be considered high risk or prohibited.
Exemptions for prohibited systems notably cover research and national security.
High-risk system exemptions mainly apply where a system does not pose a significant risk to health, safety, or fundamental rights, for example because it only performs a narrow procedural or preparatory task, or supports rather than replaces human assessment and decision-making.
High-risk AI systems require thorough risk and security assessments and may need EU registration and third-party evaluation. There are also substantial transparency obligations, and users must be clearly informed about how the AI system is deployed and how it functions. Organisations should develop and maintain compliance frameworks to ensure adherence to the AI Act’s requirements. This includes regular audits and documentation to demonstrate compliance.
Part 3 of our blog series covers the obligations of the AI Act in more detail, including who the AI Act applies to and what is required: Compliance with the AI Act Part 3: Who does the AI Act apply to and what are your obligations?
If you need advice or support on the necessary steps for rolling out your AI system safely and in line with the EU’s AI Act requirements, please contact us for help from our specialist DPO team.
For more news and insights about data protection, follow The DPO Centre on LinkedIn.