Our Compliance with the AI Act blog series explores what you need to know about the upcoming legal obligations of deploying certain artificial intelligence (AI) technologies under the EU’s landmark AI Act.
Whether your organisation is using AI-driven chatbots for customer enquiries, developing predictive algorithms for credit risk, or deploying image recognition software for security purposes, the AI Act may impact your data handling practices.
Understanding which of the AI Act’s requirements will apply to your organisation is crucial for compliance.
In our four-part blog series, we cover:
The AI Act was given final approval by the Council of the European Union on 21 May 2024. It has a phased implementation schedule, with obligations taking effect in stages between 2025 and 2027, designed to give organisations time to make the necessary changes for compliance.
Understanding the timeline and deadlines of the new law is essential for businesses and developers to stay ahead of the curve. In part 1 of our blog series, we cover the key milestones you need to know for the upcoming provisions.
The AI Act will apply to public and private organisations that develop, deploy, or use AI systems within the EU’s single market. This includes companies, institutions, government bodies, research organisations and any other organisations involved in AI-related activities.
David Smith, DPO and AI Sector Lead, explains:
‘In many cases the AI Act and the GDPR will complement each other. The AI Act is essentially product safety legislation, designed to ensure the responsible and non-harmful deployment of AI systems. The GDPR is a principles-based law, protecting fundamental privacy rights.’
The AI Act’s finalised text was published in the Official Journal of the European Union on 12 July 2024, officially entering into force 20 days after publication, on 1 August 2024. The new law will then apply two years later, on 2 August 2026, with earlier deadlines for some provisions and later ones for others.
The EU Commission has also established the EU AI Office. From 16 June 2024, the AI Office will support the implementation of the AI Act across all Member States.
Here is a timeline of the key dates and deadlines of the phased roll-out:
(2 February 2025) The prohibitions on banned AI practices apply. These are practices deemed to pose unacceptable risks to health and safety or fundamental human rights. We will cover prohibited AI applications in more detail in our next blog. With the compliance deadline for unacceptable-risk AI systems approaching, organisations should evaluate their risk exposure in this area as soon as possible.
(2 May 2025) The AI Office will finalise the codes of conduct covering the obligations of developers and deployers of AI systems. These codes will provide voluntary guidelines for responsible AI development and use.
(2 August 2025) The rules for providers of General Purpose AI (GPAI) come into effect, and organisations must align their practices with these new rules. GPAI refers to advanced AI models capable of performing a wide range of tasks. Models trained using more than 10^25 floating-point operations (FLOPs) of compute, such as those underpinning ChatGPT, are presumed to carry systemic risk and attract additional obligations (see the illustrative compute estimate below the timeline). Additionally, the first European Commission annual review of the list of prohibited AI applications will take place 12 months after the AI Act enters into force.
(2 February 2026) The European Commission will issue implementing acts for high-risk AI providers. These will include a standard template that providers of high-risk AI systems must follow when monitoring their systems after deployment. The post-market monitoring plan will help to ensure that any issues or risks are promptly identified and addressed.
(2 August 2026) The remainder of the AI Act will apply, including the rules on high-risk AI systems listed in Annex III* of the AI Act. These include systems relating to biometrics, such as fingerprint recognition, facial recognition, iris scanning and voice authentication. We will cover high-risk AI systems in more detail in our next blog.
(2 August 2027) The rules for high-risk AI systems stipulated in Annex I** become effective.
*EU Artificial Intelligence Act Annex III
**EU Artificial Intelligence Act Annex I
***information updated 22 July 2024
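For organisations building or fine-tuning large models, a rough back-of-the-envelope calculation can be a useful first screen against the 10^25 FLOPs figure mentioned in the timeline above. The sketch below is purely illustrative, not legal or technical guidance: the widely used approximation of training compute as roughly 6 × parameters × training tokens, the example figures and the function names are our own assumptions rather than anything set out in the AI Act, and a formal assessment should rely on the Act’s definitions and forthcoming Commission guidance.

```python
# Illustrative sketch only: a rough screen against the AI Act's 10^25 FLOPs
# training-compute figure for general-purpose AI models presumed to carry
# systemic risk. The 6 * parameters * tokens heuristic and the example
# figures are assumptions for illustration, not legal guidance.

FLOPS_THRESHOLD = 1e25  # cumulative training compute referenced in the AI Act


def estimate_training_flops(parameters: float, training_tokens: float) -> float:
    """Rule-of-thumb estimate of total training compute (about 6 * N * D)."""
    return 6.0 * parameters * training_tokens


def exceeds_gpai_threshold(parameters: float, training_tokens: float) -> bool:
    """True if the rough estimate meets or exceeds the 10^25 FLOPs figure."""
    return estimate_training_flops(parameters, training_tokens) >= FLOPS_THRESHOLD


if __name__ == "__main__":
    # Hypothetical example: a 70-billion-parameter model trained on 15 trillion tokens.
    params, tokens = 70e9, 15e12
    print(f"Estimated training compute: {estimate_training_flops(params, tokens):.2e} FLOPs")
    print("At or above 10^25?", exceeds_gpai_threshold(params, tokens))
```

With the hypothetical figures shown, the estimate comes out at roughly 6.3 × 10^24 FLOPs, below the threshold; a real assessment would need accurate records of the compute actually used across training runs.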
Part 2 in our blog series covers everything you need to know about prohibited AI applications and what is categorised as a high-risk activity.
In the meantime, should you require any data protection advice, our team of expert DPOs can help. We offer a wide range of outsourced privacy services, including AI governance support.