When it comes to decision making, AI can assist a human in making decisions, or it can make decisions entirely on its own, without human intervention. These approaches are often described as human-in-the-loop and human-out-of-the-loop decision making, with the latter often seen in the more complex ‘black box’ systems. Human-out-of-the-loop decision making is also referred to as solely automated decision making. Where solely automated decisions relate to individuals, Article 22 GDPR protects those individuals where the decisions could produce legal effects concerning them, or similarly significantly affect them. Crucially, where such decisions are made, the GDPR requires an element of human review within the decision, and that review must be meaningful.
In this blog, we dive deeper into what this means for AI data controllers and how they can ensure compliance with the law.
The importance of Article 22
Article 22 GDPR states that data subjects have the right not to be subject to decisions based solely on automated processing which produce legal or similarly significant effects. Recital 71 confirms that this also applies to profiling, which involves evaluating personal aspects relating to a natural person “in particular to analyse or predict aspects concerning the data subject’s performance at work, economic situation, health, personal preferences or interests, reliability or behaviour, location or movements”, where this produces legal or similarly significant effects.
Article 22 is important because automated decision making can produce such effects in a variety of areas of life, including the monitoring of employees, health-related decisions, approving or rejecting loans or job applications, and law enforcement. Article 22 ensures that humans have a role in the decision-making process, so that important decisions such as these are not left to technology alone. This is vital not only to mitigate the risk of any inherent discrimination or unfairness within an algorithm having an adverse effect on data subjects, but also to allow for consideration of human and situational factors that could render a decision unfair or incorrect in certain contexts.
A prime example of the above dates back to 2020, when taxi company Uber were taken to court for using “robo-firing” algorithms to deactivate drivers’ apps, preventing them from working, when fraudulent activity was detected. In this case the practice was found to be lawful, but only because Uber were able to convince the Court that the algorithm was merely a tool to aid human teams in making their decisions relating to fraud – i.e., meaningful human review was at play.
Meaningful human review
What meaningful human review actually consists of is very much still up for debate. What is clear is that the ‘human review’ element of a decision must be more than trivial for it to fall outside the scope of Article 22. In other words, a human cannot merely ‘rubber stamp’ an AI-powered decision for it to be considered not solely automated.
There has, unfortunately for AI data controllers, been little guidance on what this means in practice. The guidance that does exist (courtesy of the Information Commissioner’s Office (ICO), the United Kingdom’s independent supervisory authority, and the European Data Protection Board (EDPB)) states that reviewers must not apply an automated decision by default; instead, they should weigh up the recommended decision, taking into account all the information available to them and any other external factors that should be considered. Furthermore, reviewers must have the “authority and competence” to overturn or go against an automated decision.
In order to demonstrate that meaningful human review is taking place, it is important that the review is documented appropriately, with reasoning recorded both when a reviewer accepts an automated recommendation and when they reject one. In addition, the guidance recommends that anyone tasked with reviewing automated decisions be trained appropriately, so that they understand how the AI system works; can recognise when the system is likely to produce misleading or incorrect recommendations; understand the external or other factors that should influence decision making but that an AI system would not consider; and know how to document their decision making in a transparent and accountable way.
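By way of a purely illustrative sketch (the schema, field names and values below are hypothetical, and not drawn from any ICO or EDPB guidance), an organisation might capture this documentation in a simple audit record that logs the automated recommendation, the reviewer’s final decision, and the reasoning in both acceptance and rejection cases:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class HumanReviewRecord:
    """One audit entry per reviewed automated decision (hypothetical schema)."""

    decision_id: str
    automated_recommendation: str   # e.g. "reject_loan"
    reviewer_id: str
    final_decision: str             # what the human reviewer actually decided
    reasoning: str                  # recorded whether accepting OR rejecting
    external_factors: list = field(default_factory=list)
    reviewed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

    @property
    def overturned(self) -> bool:
        # True when the reviewer went against the system's recommendation
        return self.final_decision != self.automated_recommendation


record = HumanReviewRecord(
    decision_id="loan-2024-001",
    automated_recommendation="reject_loan",
    reviewer_id="analyst-42",
    final_decision="approve_loan",
    reasoning="Applicant's recent change of employer explains the income gap "
              "the model flagged; supporting payslips were verified.",
    external_factors=["documents supplied after model scoring"],
)
print(record.overturned)  # prints True: the reviewer overturned the recommendation
```

The point of such a structure is simply that reasoning is mandatory in every case, and that overturn rates can later be reported, which helps evidence that reviewers are exercising genuine judgement rather than rubber-stamping.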
When deciding how to implement meaningful human review into your processes, it is recommended that AI data controllers fully assess the risks posed by the automated decision making process, not least because this will help to determine whether the system is producing legal or similarly significant effects. Usually this will take the form of a Data Protection Impact Assessment (DPIA) or an Algorithmic Impact Assessment (AIA). See our blog on these subjects for further information.
Impact of the DCMS consultation
Whilst Article 22 protects individuals from having solely automated decisions with legal or similarly significant effects made about them, it also provides some exceptions to this very important rule. These exceptions allow such decisions to be made without meaningful human review where the decision is necessary for the performance of a contract with the individual; where member state (or UK) law authorises such decisions; or where the data subject explicitly consents to it. Even in these instances, however, individuals have always been granted the right to request human intervention should they wish to, or to contest the decision’s outcome. And, following the massive backlash from civil society, rights campaigners and the general public regarding the Department for Digital, Culture, Media and Sport’s (DCMS) proposal to possibly remove these rights, it appears that they are, thankfully, here to stay.
So, in an effort to prevent complete uproar in the data protection community, the DCMS has now stated that it will look to investigate the “efficacy of safeguards” in relation to automated decision-making about data subjects, instead of removing them. What they will conclude, however, is yet to be determined.
Automated decision making systems have an ever-increasing presence in our lives; in fact, it is more than likely that you have already come into contact with one today. These systems can have significant adverse effects on individuals if care is not taken, so organisations need to ensure that they are using such powerful tools appropriately and not causing harm to individuals or groups of people. It is therefore crucial that potentially life-changing decisions are not made about data subjects at the hands of AI without any human intervention, even if this is simply the right to request human review of a decision.
As we have highlighted in previous blogs, AI regulation is a constantly changing landscape, as regulation seeks to catch up with the pace of technological development in this area. In September 2021, the UK government launched a ten-year plan to make the UK a global AI superpower and it is likely that this will involve regulatory changes, including possible changes to Article 22 GDPR. For now, however, meaningful human review remains a central requirement for any solely automated decision making process, a requirement that is vital for AI data controllers to be aware of and to abide by.
And with that, our AI blog mini-series is complete! Click here to read the other blogs in our series on the five considerations for using AI; DPIAs and AIAs; the right to be informed; and discrimination.