Since the release of ChatGPT last year, there have been widespread concerns among lawmakers and regulators about the rapid growth of artificial intelligence (AI) and machine learning models. According to published ChatGPT statistics, the service reached 100 million active users within two months of its launch.
The adoption of AI presents both opportunities and challenges for organisations. Using AI can improve business efficiency, but many factors must also be taken into consideration. This blog discusses some of the data protection and privacy challenges faced by organisations wishing to use AI models and how they can remain compliant with the GDPR.
The General Data Protection Regulation (GDPR), formally Regulation (EU) 2016/679, is the European Union (EU) law on data protection and privacy for EU residents. It is a principle-based regulation and one of the toughest pieces of privacy legislation in the world.
In the UK, following Brexit, the GDPR was retained as the UK GDPR. Before leaving the EU, the UK transposed the GDPR into domestic law through the Data Protection Act 2018; this became the UK GDPR on 1st January 2021 when the UK formally exited the EU. Currently, the two regulations are similar, but organisations are braced for potential changes as the UK government prepares the Data Protection and Digital Information Bill (DPDI) for further readings. It is likely there will be updates on this later in the year.
But what does this mean for organisations subject to the GDPR and the UK’s proposed changes to data privacy rules?
The Department for Science, Innovation and Technology published a white paper on 29th March 2023 titled AI Regulation: A Pro-Innovation Approach. This sets out the government’s proposals to regulate AI in a pro-innovation manner. The reforms are intended to make data protection legislation easier to understand and implement. It has been suggested that businesses already compliant with the UK GDPR and the EU GDPR are unlikely to require any immediate modifications.
Following the recent rush of AI evolution, there have been calls from industry to clarify the requirements for fairness in AI. The Information Commissioner’s Office (ICO), the UK’s independent supervisory authority for data protection, has updated its guidance, which now includes details about AI governance and risk management, lawfulness in AI and the impact of Article 22 of the UK GDPR on fairness.
The most recent UK Data Protection Index results, published last month, revealed that 15% of UK DPOs’ organisations use AI chatbots or large language models (LLMs) in their core business activities. With another 33% having considered doing so, the integration of AI into day-to-day core business functions is likely to increase in the future.
The UK’s approach to AI regulation contrasts with the proposals set out in the EU Artificial Intelligence Act (AI Act), which introduces a common regulatory and legal framework for artificial intelligence technologies.
The European Artificial Intelligence Act (EU AI Act) is a proposed European law that is currently going through the EU legislative process. If passed, it will be the first law on AI by a major regulator anywhere in the world.
The EU AI Act takes a firmer approach to AI regulation than the UK and has clear objectives to ensure AI systems do not pose a high risk to individuals’ personal data and safety. The use of AI, with its specific characteristics, can adversely affect a number of fundamental EU rights.
In recent months, the Italian Data Protection Authority (Garante) issued a temporary suspension of OpenAI’s ChatGPT services due to privacy compliance concerns. The ban has since been lifted, but it clearly illustrates the differing approaches of the EU and the UK.
This divergence could pose future challenges for organisations operating in both EU and UK markets, and those seeking to expand into one or the other.
Artificial intelligence chatbots, such as ChatGPT, use large language models (LLMs) to generate text. Human-like answers are given in response to questions or instructions typed into a chat box. GPT-4 was released on 14th March 2023, followed by a wave of competitors including Microsoft’s Bing AI, Google’s Bard (powered by LaMDA) and Chatsonic. There is also a growing number of AI coding assistant tools to help developers create code faster and more efficiently.
So, what is the problem?
An LLM is a deep learning model that requires vast amounts of data. It can recognise, summarise, predict and generate content based upon the knowledge acquired during training. The issue is the unquantified risk that this training data includes personal data that would normally be protected under the GDPR.
The GDPR’s principles include transparency, purpose limitation, data minimisation, accuracy and storage limitation. But how are these reconciled with AI models?
Transparency – there is a perceived lack of transparency and interpretability of AI algorithms, which can make it difficult for organisations to understand how decisions are being made and to explain those decisions to customers and stakeholders.
Purpose limitation – to be compliant with the GDPR, personal data must only be collected for specified, explicit and legitimate purposes and not further processed. AI training data can, however, be collected through a process called data scraping. Also known as web scraping, this involves data collection on a massive scale from various sources, such as social media pages and the internet in general. The method is an efficient way to gain vast quantities of information but is incongruent with the purpose limitation principle.
Data minimisation – the GDPR requires the collection of personal data to be adequate, relevant and limited to what is necessary. AI machine learning usually requires a vast amount of data for training, making it challenging to align AI models with the data minimisation principle.
Accuracy – to follow the GDPR accuracy principle, any collected personal data needs to be accurate and, where necessary, kept up to date. This is very difficult for most AI models: once absorbed into a model, personal data is no longer recognisable as such, is difficult to correct and is sometimes inaccurate. Biased and inaccurate output is one of the major flaws of the recent AI chatbot models.
Storage limitation – to be compliant with the GDPR, personal data must be kept in a form which allows identification of data subjects for no longer than is necessary. Again, as with the accuracy principle, this is difficult to achieve with many AI models.
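The data minimisation and purpose limitation principles above can be partially addressed at the data-preparation stage, before any training takes place. The sketch below is purely illustrative: the record structure, field names and salt are invented, and real pipelines would need documented retention and salt-management policies. It shows the general idea of keeping only the fields needed for the stated purpose and replacing a direct identifier with a salted hash (pseudonymisation).

```python
import hashlib

# Hypothetical raw training record - all field names are illustrative only
record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "age": 34,
    "query_text": "How do I reset my password?",
}

# Fields genuinely necessary for the stated training purpose
REQUIRED_FIELDS = {"age", "query_text"}


def minimise(record: dict) -> dict:
    """Keep only the fields needed for the training purpose (data
    minimisation) and replace the direct identifier with a salted
    hash (pseudonymisation)."""
    salt = b"rotate-this-salt-per-dataset"  # placeholder value
    pseudonym = hashlib.sha256(salt + record["email"].encode()).hexdigest()[:12]
    reduced = {k: v for k, v in record.items() if k in REQUIRED_FIELDS}
    reduced["subject_id"] = pseudonym
    return reduced


print(minimise(record))
```

Note that under the GDPR, pseudonymised data is still personal data: pseudonymisation reduces risk but does not take the data outside the regulation’s scope.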
An ever-increasing number of organisations rely upon AI technologies to streamline processes and increase efficiencies. However, it is often challenging for businesses to ensure they remain compliant with the necessary data protection laws. One of the major problems, as previously discussed, is the way in which AI models are trained and maintained, and the conflict this creates with privacy laws.
To comply with the GDPR and any additional AI-specific data protection laws, organisations need to explain the way in which data is being used by artificial intelligence. This is where an explainability framework can help. It is a crucial component in building trust with system users, staff and wider stakeholders, and it ensures accountability for any personal data being processed.
Explainable AI (XAI) is a way of clearly describing how an AI model reaches its outputs. Also known as Interpretable AI or Explainable Machine Learning, it includes processes and methods that allow humans to understand the decisions and predictions made by an AI model.
AI explainability services provide established techniques to support organisations in meeting the transparency and accountability requirements of data protection law. Several areas typically need to be addressed.
XAI services offer organisations a defined way in which to describe an AI model’s expected impacts and potential biases, as well as detailing its fairness, transparency and outcomes. With an explainability framework, organisations can better understand how to adopt AI technologies and integrate them into their processes.
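One simple family of explainability techniques attributes a model’s output to its input features by perturbing them one at a time. The toy example below is a sketch under invented assumptions: the linear scoring model, feature names, weights and applicant values are all hypothetical, and real XAI work would use established tooling and validated baselines. It illustrates the perturbation idea: replace each feature with a baseline value and measure how much the score changes.

```python
# Toy perturbation-based explainability sketch.
# All feature names, weights and inputs below are invented for illustration.
FEATURES = ["income", "tenure_months", "missed_payments"]
WEIGHTS = [0.5, 0.2, -1.5]  # hypothetical linear scoring model
BIAS = 0.1


def score(x):
    """Score an applicant with the toy linear model."""
    return BIAS + sum(w * v for w, v in zip(WEIGHTS, x))


def explain(x, baseline):
    """Attribute the score to each feature by replacing it with a
    baseline value and measuring how much the score changes."""
    full = score(x)
    contributions = {}
    for i, name in enumerate(FEATURES):
        perturbed = list(x)
        perturbed[i] = baseline[i]
        contributions[name] = full - score(perturbed)
    return contributions


applicant = [3.2, 24, 2]  # scaled income, months of tenure, missed payments
baseline = [0.0, 0, 0]    # a "neutral" reference applicant

print(explain(applicant, baseline))
```

For a linear model the attributions simply recover each weight multiplied by the feature value; for more complex models, the same perturbation idea underpins widely used feature-attribution methods.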
AI Explainability (XAI) services are crucial for responsible AI development and are likely to become increasingly important in the years ahead.
For more news and insights about The DPO Centre, follow us on LinkedIn
If you would like to learn more about The DPO Centre’s AI Explainability (XAI) Service, please contact us by completing the form below.