A survey conducted by the Department for Digital, Culture, Media and Sport (DCMS) has found that many businesses now see AI as an ‘emerging technology’. Of the organisations that responded, 27% stated that they had released AI technology or were at the advanced stages of development, whilst 38% said they were planning or piloting the technology.
Clearly, AI is becoming a significant part of business and how businesses operate. From a data protection standpoint, this means an increased use of personal data, as AI systems rely on data sets to work effectively and achieve their desired purpose. Although the GDPR does not mention AI by name (it is technology neutral), like any technology that uses personal data, companies using AI systems that process personal data will have to follow the rules laid down in data protection legislation. To aid them in doing so, the UK Information Commissioner’s Office (ICO) has produced an AI and Data Protection Risk Toolkit, as well as guidance on explaining decisions made with AI, co-written with The Alan Turing Institute.
Having reviewed these documents, we present to you the five key considerations that companies need to think about before launching an AI system, in order to ensure that it is GDPR compliant.
A Data Protection Impact Assessment (DPIA) is a process that helps organisations identify and minimise the data protection risks of a project. A DPIA must be conducted whenever a project’s data processing is likely to result in a high risk to the rights and freedoms of data subjects, as per Article 35 of the GDPR.
The ICO has, in other guidance, stated that the use of AI to process personal data will trigger the requirement to conduct a DPIA, because AI systems are built on large data sets that rely heavily on personal data. A DPIA should be conducted at the start of the project, before any personal data is processed, and should consider the risks posed at all stages of the AI system’s development. In addition to a DPIA, an Algorithm Impact Assessment (AIA) will also have to be completed. These two assessments will likely become the AI data controller’s best friends.
An AIA goes much further than a DPIA, being used at every stage of the development process to assess the potential impacts of an automated decision system. A key part of an AIA is checking for any bias in the algorithm that could lead to discrimination, and verifying that the data being used complies with the GDPR. Following this assessment, the data controller may have to change the datasets used, or the algorithm itself, to address the issues discovered.
Both the DPIA and the AIA will inevitably be critical in the development and operation of AI systems; it could even be argued that they will become a lifeline for ensuring GDPR compliance.
Click here to read the full blog post.
Both the UK and EU GDPR make it crystal clear that data subjects have the right to be informed about how their personal data is being processed and for what purposes. Whether we read these explanations and understand them is a different matter; however, in its toolkit and guidance paper, the ICO has stressed the importance of explaining to data subjects exactly how and why their personal data will be used, in a clear way that is easy for the individual to understand. The ICO has highlighted that the explanation should be informed by the principles of ‘accountability’ and ‘transparency’.
As yet, there are no ‘cut and paste’ explanations for AI systems that can simply be slotted into a privacy policy verbatim. This is probably for the best, since each system is likely to be different. There are, however, several key things that AI data controllers will need to include in their privacy policy to adequately respect data subjects’ right to be informed:
Data controllers will also have to build their explanation around the specific sector they operate in, who exactly will be using the AI system, and the domain in which it will be deployed.
The list above seems no different from what should be included in any privacy policy covering any type of personal data processing, and it isn’t. We appreciate that explaining how AI and machine-learning algorithms work in layman’s terms is extremely challenging. Despite this, it is vital that explanations are made easily accessible and readable; without them, data controllers will be in breach of data subjects’ right to be informed.
Click here to read the full blog post.
Discrimination has already been mentioned a few times in this post, but it is something that AI developers, especially, will have to bear in mind. Discrimination can happen in one of two ways: intentionally, where it is written into the algorithm – for example, a rule that automatically rejects loan applications from women; or unintentionally, where discrimination is learnt from biased datasets – for example, where the original dataset only contained loan applications from men, so the system is less likely to accept applications from women.
The GDPR aims to respect individuals’ ‘fundamental rights and freedoms’ and reflects the Equality Act 2010. Using old datasets that reflect past discrimination, or using an imbalanced dataset, to “train” your AI system can create discriminatory outcomes. This is why conducting thorough DPIAs and AIAs is so crucial. If an AIA is conducted, it can pick up and flag ‘questionable’ datasets, indicating where adjustments are required.
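To make this concrete, here is a minimal sketch of the kind of imbalance check an AIA might include, comparing approval rates across a protected attribute in a training dataset. The column names, sample data and 20% tolerance are all hypothetical; a real assessment would use more sophisticated fairness metrics.

```python
import pandas as pd

def approval_rate_by_group(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Approval rate for each value of a protected attribute (e.g. gender)."""
    return df.groupby(group_col)[outcome_col].mean()

def flag_imbalance(rates: pd.Series, max_gap: float = 0.2) -> bool:
    """Flag the dataset as 'questionable' when the gap between the most- and
    least-favoured groups exceeds a chosen tolerance."""
    return (rates.max() - rates.min()) > max_gap

# Hypothetical historic loan data: 1 = approved, 0 = rejected.
loans = pd.DataFrame({
    "gender":   ["M", "M", "M", "M", "F", "F"],
    "approved": [1, 1, 1, 0, 0, 0],
})

rates = approval_rate_by_group(loans, "gender", "approved")
print(rates)                  # F: 0.00, M: 0.75 -- a large gap
print(flag_imbalance(rates))  # True: training on this data risks learned bias
```

A flag like this does not prove discrimination, but it tells the controller exactly where the dataset, or the algorithm trained on it, needs adjusting before deployment.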
Click here to read the full blog.
Article 22 of the GDPR protects individuals from having decisions made about them using solely automated means, where that decision produces an adverse legal or similarly significant effect on the individual. Automated decision making (ADM) is where a decision is made by automated means, without any (or with very little) human involvement. The decisions can be based on factual data or on digitally created profiles.
AI systems have different degrees of decision-making capability. Not all are fully autonomous; many merely assist humans in making decisions. If your AI system is in the latter category, Article 22 will not apply, as decisions are not being made solely by automated means. If, however, your AI system is fully autonomous, you must then consider whether the decisions being made have adverse legal or similarly significant effects on the individuals in question. If they do not, Article 22 will not apply; if they do, you must ensure that the decisions made by your AI system are subject to manual scrutiny by a human.
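Expressed as code, the two-question test above might be sketched like this (a purely illustrative helper, not legal advice, and all names are hypothetical):

```python
def article_22_applies(solely_automated: bool, significant_effect: bool) -> bool:
    """Simplified Article 22 applicability test:
    (1) is the decision made solely by automated means, and
    (2) does it produce a legal or similarly significant effect on the individual?
    Both must be true for Article 22 to bite."""
    return solely_automated and significant_effect

# A loan rejected by a model with no human involvement at all:
print(article_22_applies(solely_automated=True, significant_effect=True))   # True
# A model that only scores applications for a human underwriter to decide:
print(article_22_applies(solely_automated=False, significant_effect=True))  # False
```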
With the rise of AI and machine learning, the use of automated decision-making, whether fully automated or human-assisted, is only going to increase, and so too will the importance of Article 22 GDPR.
Click here to read the full blog.
Following on from the previous point, if you are using AI to conduct automated decision making, you need to consider whether you require, and how to implement, some kind of meaningful human review. The ICO has stated that the human review has to be meaningful: simply getting a human to “rubber stamp” the AI system’s decision does not take the process outside the scope of Article 22. In addition, this review should be carried out by someone who has the authority to change the decision if deemed appropriate.
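To make the ‘meaningful’ requirement concrete, here is a minimal human-in-the-loop sketch; every name in it is hypothetical. The point it illustrates is that the reviewer sees the system’s rationale and holds real authority to substitute their own outcome:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Decision:
    subject_id: str
    outcome: str    # e.g. "approve" or "reject"
    rationale: str  # the system's reasons, shown to the reviewer

def human_review(decision: Decision, reviewer_outcome: Optional[str]) -> Decision:
    """A meaningful review step: the reviewer can substitute their own outcome.
    Passing None leaves the automated outcome in place, but the authority to
    change it is what distinguishes review from 'rubber stamping'."""
    if reviewer_outcome is not None and reviewer_outcome != decision.outcome:
        return Decision(decision.subject_id, reviewer_outcome,
                        f"overridden on human review (was: {decision.outcome})")
    return decision

auto = Decision("applicant-42", "reject", "credit score below model threshold")
final = human_review(auto, reviewer_outcome="approve")  # reviewer exercises real authority
print(final.outcome)  # approve
```

A design like this also generates an audit trail of overrides, which helps demonstrate accountability to the ICO.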
Providing individuals with clear information about how the decision was made, and how they can request a review of it, is key to enabling them to contest an automated decision, as is their right under Article 22(3) GDPR.
Click here to read the full blog.
Conclusion
AI is rapidly maturing as a technology, and its reach is only going to get wider. With its use, however, come some serious considerations that must be kept in mind. Data controllers, data scientists, and AI programmers and developers must be aware of the key aims and principles of the GDPR, how they relate to the wider scope of data subject rights, and how all of this impacts the use of AI. The five key considerations discussed above are a great starting point.
It is also important to note that, as with any emerging technology, regulation of this area is likely to grow and develop over time. With the UK now having full control of its own data protection legislation and clearly looking to facilitate technological innovation, this is certainly an area we will all have to keep a close eye on.
This is the first blog of our ‘AI and GDPR’ mini-series, so keep your eyes peeled for more!