Turns out, the rumours are true – the EU is in the process of developing a way to regulate the use of artificial intelligence (AI). Late last month, the European Commission released a proposal for a regulatory framework that aims to govern when and how AI can be used in the bloc, so as to develop “trustworthy AI”, in the words of Ursula von der Leyen. In this blog post, we give you a quick rundown of the key parts of the proposal that have caught our eye.
The EU Commission has been very clear that it recognises the benefits that AI brings to businesses and society more widely, and it has been at pains to emphasise that the proposed AI Regulation is not intended to hinder business growth or technological development. As a result, the proposed Regulation takes an overall risk-based approach to managing the use of AI. That said, the Commission has determined that some AI-based practices simply present an unacceptable level of risk to the rights and freedoms of individuals within the EU, and must therefore be prohibited completely:
- AI systems that distort a person’s behaviour and cause, or are likely to cause, physical or psychological harm by deploying subliminal techniques or by exploiting the vulnerabilities of an individual due to their age, or physical or mental disability
- AI systems used by public authorities or on their behalf to evaluate individuals’ trustworthiness, where this social score could lead to detrimental or unfavourable treatment that is either unrelated to the context in which the data was originally generated or unjustified and disproportionate
- The use of “real-time” remote biometric identification (facial recognition) systems in public spaces for law enforcement purposes (although this prohibition is subject to many exemptions, which are in turn subject to further safeguards)
Regulation for ‘high risk’ AI
Other than the limited scenarios above, the bulk of the Regulation is aimed at putting in place controls on the use of ‘high-risk’ AI systems. What is categorised as high-risk is not specifically defined; however, some examples are given in Annex III:
- Biometric identification and categorisation
- Management and operation of critical infrastructure
- Education and vocational training
- Law enforcement
- Migration, asylum and border control
Key to the proposed Regulation is that it adopts a whole-lifecycle approach to AI oversight. Similar to the GDPR’s data protection by design and by default requirement, the proposed AI Regulation requires high-risk AI systems to be subject to scrutiny from the beginning to the end of their planning, development, operation, and disposal.
In terms of what the providers, importers, distributors, and users (yes, all have obligations) of high-risk AI systems must do, two words spring to mind: risk management. Conducting risk assessments will be vital throughout the lifecycle of these AI systems, as well as good data governance, documentation and record-keeping, and monitoring and incident reporting. All in all, the proposed rules are pretty onerous.
A final, and perhaps the most unexpected, announcement was the size of the fines that will accompany non-compliance once the Regulation is in force. Like the GDPR, the AI Regulation takes a tiered approach to fines; in this case, however, there are three tiers instead of two:
- Up to 2% of annual global turnover or €10 million, whichever is greater – for supplying incorrect, incomplete, or misleading information to notified bodies and national competent authorities
- Up to 4% of annual global turnover or €20 million, whichever is greater – for non-compliance with any other requirements or obligations under the Regulation, including failing to cooperate with national competent authorities
- Up to 6% of annual global turnover or €30 million, whichever is greater – for developing, offering for sale, or using a prohibited AI system
This final tier is likely to make any large business using, or considering using, an AI system that could be prohibited shudder. For the largest organisations, being found to have contravened this prohibition could give rise to a fine in the billions.
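To make the “whichever is greater” mechanic concrete, the cap for each tier can be sketched as a simple calculation (the rates and fixed amounts are those quoted above from the proposal; the turnover figure is a purely hypothetical example):

```python
def max_fine(annual_global_turnover_eur: float, rate: float, floor_eur: float) -> float:
    """Maximum fine under a tier: the greater of a fixed amount
    and a percentage of annual global turnover."""
    return max(rate * annual_global_turnover_eur, floor_eur)

# The three tiers in the proposal: (rate of turnover, fixed floor in EUR)
TIERS = {
    "prohibited AI system": (0.06, 30_000_000),
    "other obligations": (0.04, 20_000_000),
    "incorrect information": (0.02, 10_000_000),
}

# Hypothetical example: an organisation with EUR 50bn annual global
# turnover found to have used a prohibited AI system.
rate, floor = TIERS["prohibited AI system"]
print(max_fine(50_000_000_000, rate, floor))  # 6% of turnover: EUR 3bn
```

For a business of that hypothetical size, the percentage-based figure dwarfs the €30 million floor, which is why the top tier is where fines “in the billions” become possible.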
What does this mean for the UK?
All eyes will now be on the UK, as this new AI Regulation is likely to be the first piece of EU legislation to be enacted post-Brexit, meaning this will be the first opportunity for the UK to either align itself with, or diverge from, the EU. Both options have merit. If it chooses not to follow in the EU’s footsteps, the UK can set its own agenda, potentially building on its pre-existing AI auditing framework, and could be seen to offer an alternative to the EU’s one-size-fits-all approach to AI regulation. In reality, however, many organisations do not exclusively target the UK and could therefore have to comply with the proposed EU AI Regulation anyway, meaning that aligning more closely with the EU approach could be beneficial. Only time will tell.
AI is constantly bringing new innovation and development to a range of different industries, from education and healthcare, through to finance and energy. Therefore, the proposed AI Regulation is going to have a massive impact in many sectors. Whilst currently the Regulation is merely at the proposal stage, and so the content is subject to change, it is clear that the EU means business, most notably indicated by the huge fines that have been suggested for non-compliance.
But, with the vast capabilities of AI offering up huge opportunities for many businesses, this level of financial penalty is likely to be a necessary stick with which to beat organisations wanting to push the boundaries of what counts as an acceptable use of AI.
For now, it is sufficient to say that this announcement from the EU Commission has garnered a massive amount of attention over the last few weeks, and it is likely to attract far more in the coming months as the Regulation continues to develop and begins to shape the AI landscape of the future.
The DPO Centre can provide the necessary support and guidance to aid your compliance, drawing on the UK’s AI auditing framework and the GDPR’s wider Article 25 privacy-by-design requirements. Please contact us below for further information and support.