In the first of our AI blog mini-series, we mentioned that Data Protection Impact Assessments (DPIAs) and Algorithm Impact Assessments (AIAs) will likely become a data controller’s best friends if they opt to use an AI system, so we thought it only right to dedicate a whole blog to these concepts and how they apply to AI.
Whilst both DPIAs and AIAs revolve around assessing the risks and impacts of a project, they are two distinct concepts with several key differences. DPIAs are concerned with privacy and data protection risks to the rights and freedoms of individuals, whereas AIAs take a more holistic approach, considering risks and impacts that go beyond privacy and data protection and affect not just the individual but wider society. AIAs, as the name suggests, need only be conducted when assessing AI systems, whereas DPIAs should be carried out prior to a wide array of projects. Finally, at present, AIAs are not mandated by UK law, whereas DPIAs are, in some circumstances, required by the UK GDPR.
In the rest of this blog, we delve a bit deeper into what DPIAs and AIAs are and how to conduct them.
Data Protection Impact Assessments (DPIAs)
As per Article 35 of the UK GDPR, controllers are required to conduct a DPIA if a processing activity is likely to result in a high risk to the rights and freedoms of data subjects. This can cover a wide range of circumstances, but the Information Commissioner’s Office (ICO) specifically gives AI as an example when it confirms that the use of innovative technology, or the novel application of existing technologies, to process personal data will likely trigger the need to conduct a DPIA.
A good DPIA will help the controller identify and minimise the privacy and data protection risks presented by a processing activity, whilst also helping it to meet its broader accountability obligations. DPIAs should be carried out before the processing begins, and must:
DPIAs should consider both the likelihood and the severity of any potential impacts on individuals. A “high risk” could result from either a high probability of “some harm” to a large number of individuals, or a lower probability of “serious harm” to a few individuals. Once these risks have been identified, the controller must seek ways of mitigating or, where possible, eliminating them. Where there is a high risk that cannot be mitigated, controllers must consult the ICO, which will determine whether the risks are acceptable and therefore whether the processing can go ahead.
When it comes to AI, DPIAs should make clear how the AI system is going to process personal data and for what purposes. It will need to cover:
It is important to remember that a DPIA must be completed before the processing begins, at the earliest stage of the AI lifecycle, in order to best serve its purpose. The ICO has also stressed in its toolkit that a DPIA must still be conducted even if the data controller bought the AI system from a third-party supplier, rather than developing it in-house.
Algorithm Impact Assessments (AIAs)
AIAs take inspiration from impact assessments conducted across other industries, including DPIAs. They involve the evaluation of algorithms and AI systems to identify and address any potential negative impacts that could result from the system. The aim of an AIA is to widen the lens from algorithms as technology in pure isolation, to algorithms as a system embedded in human life and communities. It has been suggested that those conducting an AIA should also consult civil society, as well as those who will use the AI system, as a form of oversight. Moreover, as a matter of transparency, many have suggested that AIAs should be made available to users. This reflects the fact that how AI systems work is often not immediately obvious to those whose personal data they process.
Although the UK and the EU do not yet require AI data controllers to conduct AIAs, some jurisdictions do. Slovenia’s national law requires an AIA for all Automated Decision Making (ADM) that falls under Article 22(1) of the GDPR. Similarly, Canada requires organisations in the public sector that use ADM to conduct an AIA, and has also developed a self-assessment tool that can be used when planning one. Finally, in the US it has been suggested that corporations using “automated decision systems” should submit an impact assessment covering the accuracy, fairness, bias, discrimination, privacy, and security of the system and report this to the Federal Trade Commission. The proposed Algorithmic Accountability Act of 2019 would have required companies above a certain size to conduct an AIA and provide an explanation of the AI system to individuals.
AIAs are promoted as having several important advantages: improving organisational behaviour, promoting information sharing, and encouraging organisations to consider the effects of their AI system on the individual and the wider public.
Although AIAs are not required by UK law, we recommend that one should be conducted as a matter of best practice – especially if your AI system is processing personal data on a large scale or using special category or sensitive data. In addition, the ICO has suggested that they can work in tandem with DPIAs to help better assess the wider impacts of an AI system.
DPIAs and AIAs in AI development: complying with the GDPR
Both of these assessments should be considered essential tools for AI developers and AI-data controllers, even if only one is mandated by law. They are fundamental in ensuring that the rights of individuals are protected and help to ensure that AI systems are compliant with the wider GDPR.
The UK government’s newly published National AI Strategy suggests that the UK will become a data-driven economy and a hub for AI innovation. It is clear from these ambitions that the use and development of AI is going to grow significantly in the UK, bringing with it a whole host of data protection considerations. Ensuring that this innovation does not come at the cost of individuals’ privacy, data protection and other human rights is hugely important. Impact assessments such as DPIAs and AIAs are therefore only likely to become more valuable.
If you are looking for support in conducting these assessments or any other Data Protection Services, please contact us using the form below.
Fill in your details below and we’ll get back to you as soon as possible