In our first AI blog, we briefly discussed the right of data subjects to be informed of how their personal data is being processed and for what purposes. The UK Information Commissioner’s Office (ICO) has stressed the importance of explaining to data subjects how and why their personal data will be used in a clear way that is easy for them to understand. In the UNESCO paper on the Ethics of Artificial Intelligence, explainability is referred to as “making intelligible and providing insight into the outcomes of AI systems”, referring also to the “understandability of the input, output and behaviour of each algorithmic building block and how it contributes to the outcome of the system.”
In this blog we discuss the importance of the right to be informed, and how to respect this right when deploying AI systems.
Why should there be an explanation?
The right to be informed (or “explainability”) is closely linked to the EU and UK GDPR principles of accountability and transparency. Both regulations make it clear that all data controllers carrying out any personal data processing activity should be transparent with those whose personal data they process, and controllers of AI technology that process personal data are no different. This includes the transparency obligations under Articles 12 to 14. Transparency is an overarching obligation that applies across data protection law.
Providing data subjects with an explanation is important because individuals have the right to be informed of how their personal data is being processed, particularly where solely automated decision-making that produces legal or similarly significant effects is involved. In such cases, individuals must be provided with a meaningful explanation of the logic behind the AI system, as well as the possible consequences of the processing. The right to an explanation also helps to uphold other data subject rights, including the rights under Article 22 and the right to a meaningful human review of an automated decision.
Both the EU and UK GDPR clearly stress the importance of individuals knowing and understanding what is happening to their data. This means that it is vital that organisations that deploy AI technology have detailed documentation in place that explains how and why their data is being processed.
When it comes to explaining solely automated decision-making, there are specific requirements for what information must be included in the explanation where the decision has the potential to produce a legal or similarly significant effect on the individual.
What goes into an explanation?
The ICO has suggested that when writing an explanation, data controllers should provide two sub-categories of information:
Process-based explanation – gives information on the governance of the AI system across the design and deployment phases of the project. This part of an explanation is about demonstrating that you have followed best practice and implemented good governance processes across the project.
Outcome-based explanation – gives users clarity on the results of a specific decision. This involves explaining the reasoning behind an outcome generated by an algorithm in a way that is easy for users to understand.
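To make the outcome-based idea concrete, here is a minimal, hypothetical sketch in Python. It assumes a simple linear credit-scoring model; the feature names, training data and wording are invented for illustration, and a production system would need a properly validated model and carefully tested explanation text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical applicant features: income (£k), years at address, missed payments
X = np.array([[45, 3, 0], [22, 1, 4], [60, 10, 0], [30, 2, 2],
              [55, 7, 1], [18, 0, 5], [40, 4, 1], [25, 1, 3]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = application approved

feature_names = ["income", "years at address", "missed payments"]
model = LogisticRegression().fit(X, y)

def explain_decision(applicant):
    """Build a plain-English, outcome-based explanation for one decision.

    For a linear model, coefficient * (value - dataset mean) approximates
    each feature's contribution relative to an 'average' applicant.
    """
    outcome = "approved" if model.predict([applicant])[0] == 1 else "declined"
    contributions = model.coef_[0] * (np.asarray(applicant) - X.mean(axis=0))
    ranked = sorted(zip(feature_names, contributions),
                    key=lambda item: abs(item[1]), reverse=True)
    reasons = "; ".join(
        f"your {name} {'raised' if value > 0 else 'lowered'} the score"
        for name, value in ranked)
    return f"Your application was {outcome}. Main factors: {reasons}."

print(explain_decision([28, 2, 3]))
```

The particular attribution method is not the point: whatever technique is used, the output presented to the data subject should rank the factors that mattered and describe them in everyday language, not in model coefficients.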
There are many different ways to explain decision-making carried out by AI, and the ICO identifies six different types of explanation:

Rationale explanation – the reasons that led to a decision, delivered in an accessible, non-technical way.

Responsibility explanation – who is involved in the development, management and implementation of the AI system, and who to contact for a human review of a decision.

Data explanation – what data has been used in a particular decision and how, and what data was used to train and test the AI model.

Fairness explanation – the steps taken to ensure that the system’s decisions are unbiased and fair, and whether an individual has been treated equitably.

Safety and performance explanation – the steps taken to maximise the accuracy, reliability, security and robustness of the system’s decisions and behaviours.

Impact explanation – the steps taken to consider and monitor the impacts that the use of the AI system and its decisions has, or may have, on individuals and on wider society.
The ICO has also provided data controllers with a number of practical tips for building and delivering these explanations.
What does this mean in practice?
The ICO makes clear that the type of explanation that is most appropriate will depend on the context. It is therefore important for AI data controllers to consider the domain, the audience, and the purpose of their AI system. For example, the explanatory information provided for an AI system making decisions about children or other vulnerable people will need to use much simpler language and explanations than that provided for an AI system used to make decisions affecting computer scientists who already have some technical understanding of how the system works. The main question to ask when considering which type of explanation to choose is: what information is the data subject likely to want to know or find most useful?
Conclusion
AI data controllers need to create explanations in ways that are easy to read and understand, keeping in mind the affected data subjects’ level of knowledge of the AI system. They must also ensure that these explanations are easily accessible. Having a solid explanation that considers all the above factors is key to ensuring that AI systems respect the principles of lawfulness, fairness and transparency.
For more information on the right to an explanation, the ICO and The Alan Turing Institute have created the following guidance documents, entitled “Explaining decisions made by AI”: part one, part two and part three.
If you have any questions on the content of this blog, or how The DPO Centre can assist your organisation, please complete the form below.