In the first of our AI blog mini-series, we mentioned the importance of ensuring that AI systems’ machine learning (ML) algorithms are not subject to intentional or accidental discriminatory outcomes or practices. Many people believe that because AI systems rely heavily on objective data, they cannot be discriminatory; however, this is not the case, and both AI developers and data controllers must keep in mind the potential for their algorithms to discriminate. Discrimination can derive from a number of factors, including the data used to train AI systems, the way the systems are used, and the way they have been designed.
AI and ML undoubtedly have the potential to change lives for the better, but they can also cause harm to individuals if not properly regulated. Discrimination within these systems can lead to individuals being unjustly denied meaningful employment, loans and housing, as well as being subjected to increased surveillance from law enforcement. As the use of AI and ML is set to grow exponentially and permeate more and more sectors and industries, it is imperative that AI data controllers ensure that their systems are not unfairly discriminating against groups, whether intentionally or not. This is even more important because AI systems are often ‘black boxes’: it can be unclear how or why a system makes a particular decision. Because of this opacity, it is harder for people to assess whether they have been discriminated against.
In this blog, we consider the right to non-discrimination in the context of data protection and the use of AI.
How do algorithms discriminate?
As mentioned in our first blog, discrimination resulting from AI systems can occur both intentionally and unintentionally. Intentional discrimination is fairly straightforward, occurring when rules or conditions that will have a discriminatory outcome are deliberately written into an algorithm; for example, a rule that automatically rejects loan applications submitted by women. However, in most cases of discriminatory AI systems, the developers never set out with that intention. Rather, discrimination was an inadvertent result of the development process. There are two main ways in which this can occur, both relating to the quality of the data used to train an AI system rather than the system or algorithm itself:
A final point to note about discrimination caused by training datasets is that simply removing protected categories of data from the model (such as ethnicity, gender, sexual orientation, etc.) will not necessarily remove the opportunity for discrimination. This is because other attributes, such as occupation or home address, can be used to re-infer this information.
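To make this concrete, here is a minimal, purely illustrative sketch in Python. The data, column names and correlations are entirely hypothetical; the point is only to show how a proxy feature such as a postcode area can stand in for a protected attribute that has been dropped from the training data.

```python
# Illustrative only: hypothetical data showing how a proxy feature can
# re-introduce a protected attribute that was removed from the training set.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

# Hypothetical protected attribute (deliberately excluded from the model's inputs).
group = rng.integers(0, 2, size=n)

# Proxy feature: postcode area happens to correlate strongly with the group.
postcode_area = np.where(rng.random(n) < 0.8, group, 1 - group)

# An ostensibly neutral feature (income in £k).
income = rng.normal(30 + 5 * group, 8, size=n)

# Historical decisions that were already skewed against group 0.
approved = (income + 20 * group + rng.normal(0, 10, size=n)) > 40

# The model never sees 'group', only the proxy and the neutral feature.
X = pd.DataFrame({"postcode_area": postcode_area, "income": income})
model = LogisticRegression().fit(X, approved)

preds = model.predict(X)
for g in (0, 1):
    print(f"predicted approval rate for group {g}: {preds[group == g].mean():.0%}")
# Predicted approval rates still differ sharply between the groups because
# 'postcode_area' acts as a proxy for the removed protected attribute.
```

Nothing here reflects any real system or dataset; it simply illustrates why dropping the sensitive column is not, on its own, a safeguard.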
What does the legislation say?
Both international and national legislation offer individuals protection from discrimination. For example, the European Convention on Human Rights (ECHR), the UK Equality Act 2010 and the International Bill of Human Rights are all legal instruments that enshrine the principle of non-discrimination. In the UK context, the Equality Act 2010 safeguards people from discrimination, whether it is caused by automated means or not. And in data protection terms, both the UK GDPR and the EU GDPR (the GDPRs) state their commitment to protecting the ‘fundamental rights and freedoms’ of data subjects in their first Article.
The GDPRs also contain specific rules governing the use of certain types of “automated decision-making”, with the aim of mitigating the possibility of discrimination. Article 22 prohibits fully automated decision-making where those decisions have legal or similarly significant effects; for example, fully automated e-recruiting where there is no human review. We will talk more about Article 22 in an upcoming blog post, but for now it is sufficient to say that organisations must offer some kind of human review or intervention when it comes to fully automated decisions that have a significant effect on the data subject. This will help to combat some AI-learnt discrimination, but it does not mitigate any ingrained human biases that may be present.
How can we limit biases and discrimination?
One of the best ways to combat discrimination in AI systems is to conduct some kind of audit or assessment of them. In our ‘DPIAs and AIAs: The AI data controller’s best friend’ blog post, we highlighted how impact assessments can help combat bias, and that recommendation still stands. Aside from Algorithm Impact Assessments (AIAs) and Data Protection Impact Assessments (DPIAs), bias audits can also be an excellent method of assessing AI systems already in use. Instead of examining the algorithm itself, they compare the data that goes into the system with what comes out at the other end. These types of audits are sometimes referred to as ‘black box testing’. There are three ways to conduct these audits:
Although bias audits have traditionally been used by independent bodies to monitor the use of AI, there is nothing stopping organisations that deploy and use AI systems from conducting one themselves to test their own system or, better still, contracting with a third party to conduct one for them.
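As a rough illustration of the input/output comparison described above, the sketch below computes selection rates per group from a hypothetical audit log of a system’s decisions and reports a simple disparate impact ratio. The function names, figures and the ‘four-fifths’ benchmark are assumptions for illustration only, not a legal test.

```python
# A minimal, black-box style bias check: we only look at what goes into the
# system and the decisions that come out, never at the model internals.
import pandas as pd

def selection_rates(results: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (e.g. loan approvals) per group."""
    return results.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group rate divided by the highest; 1.0 means parity.
    A ratio below 0.8 (the 'four-fifths rule') is often used as a rough flag."""
    return rates.min() / rates.max()

# Hypothetical audit log: inputs plus the system's decisions.
audit_log = pd.DataFrame({
    "gender":   ["F", "F", "M", "M", "F", "M", "F", "M"],
    "approved": [0,   1,   1,   1,   0,   1,   1,   1],
})

rates = selection_rates(audit_log, "gender", "approved")
print(rates)
print(f"disparate impact ratio: {disparate_impact_ratio(rates):.2f}")
```

In practice an audit would use far more data and more than one metric, but the principle is the same: compare outcomes across groups without needing access to the algorithm itself.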
In addition to the above, having clear policies and procedures for the procurement of high-quality training and testing data is extremely important to guard against discrimination creeping into an AI system. Organisations should be satisfied that the data they have gathered or procured is representative of the population the system will be operating in. Checks should also continue throughout the system’s lifecycle, with organisations implementing policies and key performance metrics to ensure that systems continue to produce fair results.
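A representativeness check of that kind could be as simple as the following sketch, which compares the make-up of a procured training set against the population the system will operate in and flags any group that falls outside an agreed tolerance. The groups, proportions and tolerance threshold here are hypothetical placeholders, not recommendations.

```python
# Illustrative representativeness check, to run when data is procured and
# again at each lifecycle review. All figures are hypothetical.
import pandas as pd

# Hypothetical share of each group in the population the system will serve.
population_share = {"group_a": 0.51, "group_b": 0.49}

# Hypothetical procured training dataset.
training_data = pd.DataFrame({"group": ["group_a"] * 700 + ["group_b"] * 300})
training_share = training_data["group"].value_counts(normalize=True)

TOLERANCE = 0.05  # placeholder threshold; set according to your own policy
for group, expected in population_share.items():
    observed = training_share.get(group, 0.0)
    flag = "REVIEW" if abs(observed - expected) > TOLERANCE else "ok"
    print(f"{group}: expected {expected:.0%}, observed {observed:.0%} -> {flag}")
```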
Future developments
As AI and machine learning are still emerging technologies, the rules regulating their use are constantly changing to keep pace with technological developments. In the EU context, a new AI Regulation has already been drafted and is expected to enter into force later this year. In the UK, meanwhile, the recently released National AI Strategy is beginning to pave the way for responsible AI innovation and regulation. However, regardless of these legislative developments, upholding the principle of non-discrimination must remain at the forefront of AI developers’ and AI data controllers’ minds so as to avoid falling foul of the laws protecting the fundamental rights of data subjects.
If you are implementing an AI system and need assistance complying with your data protection obligations, complete the form below or email us to find out how we can help.