Turns out, the rumours are true – the EU is in the process of developing a way to regulate the use of Artificial Intelligence (AI). Late last month, the European Commission released a proposal for a regulatory framework that aims to govern when and how AI can be used in the bloc, so as to develop “trustworthy AI”, in the words of Ursula von der Leyen. In this blog post, we give you a quick rundown of the key parts of the proposal that have caught our eye.
‘Unacceptable risk’
The EU Commission has been very clear that it recognises the benefits that AI brings to businesses and society more widely, and it has been at pains to emphasise that the proposed AI Regulation is not intended to hinder business growth or technological development. As a result, the proposed Regulation takes a risk-based approach to managing the use of AI. That said, the Commission has determined that some AI-based practices simply present an unacceptable level of risk to the rights and freedoms of individuals within the EU, and therefore must be prohibited completely:

- AI systems that deploy subliminal techniques to materially distort a person’s behaviour in a way that causes, or is likely to cause, physical or psychological harm
- AI systems that exploit the vulnerabilities of a specific group of persons (due to their age or physical or mental disability) in order to materially distort their behaviour in a harmful way
- ‘Social scoring’ of individuals by public authorities that leads to detrimental or unfavourable treatment
- The use of ‘real-time’ remote biometric identification systems in publicly accessible spaces for law enforcement purposes (subject to limited exceptions)
Regulation for ‘high-risk’ AI
Other than the limited scenarios above, the bulk of the Regulation is aimed at putting in place controls for the use of ‘high-risk’ AI systems. What is categorised as high-risk is not specifically defined; however, some examples are given in Annex III:

- Biometric identification and categorisation of natural persons
- Management and operation of critical infrastructure
- Education and vocational training
- Employment, workers management and access to self-employment
- Access to and enjoyment of essential private and public services and benefits
- Law enforcement
- Migration, asylum and border control management
- Administration of justice and democratic processes
Key to the proposed Regulation is that it adopts a whole lifecycle approach to AI oversight. Similar to the GDPR’s data protection by design and default requirement, the proposed AI Regulation requires high-risk AI systems to be subject to scrutiny from the beginning to the end of their planning, development, operation, and disposal.
In terms of what the providers, importers, distributors, and users (yes, all have obligations) of high-risk AI systems must do, two words spring to mind: risk management. Conducting risk assessments will be vital throughout the lifecycle of these AI systems, as will good data governance, documentation and record-keeping, and monitoring and incident reporting. All in all, the proposed rules are pretty onerous.
Hefty fines
A final, and perhaps the most unexpected, aspect of the announcement is the huge fines that will accompany non-compliance with the Regulation once it is in force. Like the GDPR, the AI Regulation takes a tiered approach to fines; however, in this case there are three tiers instead of two:

- Up to €10 million, or 2% of total worldwide annual turnover (whichever is higher), for supplying incorrect, incomplete or misleading information to notified bodies and national competent authorities
- Up to €20 million, or 4% of total worldwide annual turnover, for non-compliance with any other requirement or obligation of the Regulation
- Up to €30 million, or 6% of total worldwide annual turnover, for breaching the prohibition on unacceptable-risk AI practices or the data governance requirements
This final tier is likely to make any large business using, or thinking of using, an AI system that could fall foul of the prohibition shudder. For the largest organisations, being found to have contravened the prohibition could give rise to a fine in the billions: 6% of a €50 billion annual worldwide turnover, for example, would mean a fine of €3 billion.
What does this mean for the UK?
All eyes will now be on the UK, as the AI Regulation is likely to be one of the first major pieces of EU legislation to be enacted post-Brexit, meaning this will be the first real opportunity for the UK to either align itself with, or diverge from, the EU. Both options have merit, but if it chooses not to follow in the EU’s footsteps, the UK can set its own agenda, potentially building on its pre-existing AI auditing framework. As such, the UK could be seen to offer an alternative to the EU’s one-size-fits-all approach to AI regulation. The reality, however, is that many organisations do not exclusively target the UK and could therefore have to comply with the proposed EU AI Regulation anyway, meaning that aligning more closely with the EU approach could be beneficial. Only time will tell.
Conclusion
AI is constantly bringing new innovation and development to a range of industries, from education and healthcare through to finance and energy, so the proposed AI Regulation is going to have a massive impact across many sectors. Whilst the Regulation is currently only at the proposal stage, and its content is therefore subject to change, it is clear that the EU means business, most notably indicated by the huge fines that have been suggested for non-compliance.
But, with the vast capabilities of AI offering up huge opportunities for many businesses, this level of financial penalty is likely to be a necessary stick with which to beat organisations tempted to push the boundaries of acceptable AI use.
For now, it is sufficient to say that this announcement from the EU Commission has garnered a massive amount of attention over the last few weeks, and it is likely to attract far more in the coming months as the Regulation continues to develop and begins to shape the AI landscape of the future.
The DPO Centre can provide the necessary support and guidance to aid your compliance by considering the UK’s AI auditing framework and wider Article 25 Privacy-by-Design requirements. Please contact us below for further information and support.