Artificial intelligence and personal data
AI is increasingly being used to support, or to make, decisions about individuals. This involves the processing of personal data, which in turn requires compliance with the GDPR.
Education institutions and their students are using AI in a huge variety of ways. Some institutions are already using AI to personalise learning and to deliver content tailored to a student’s needs. In January 2019, Staffordshire University launched a chatbot dubbed “Beacon”, which can be downloaded by students as a mobile app and provides personalised, responsive information. It can order replacement student cards and connect students with lecturers, all of which requires the processing of personal data. A significant amount of research is also being done using AI, at a scale that would not be feasible for human researchers alone – the analysis of vast amounts of climate change data, for example.
The Information Commissioner’s Office (ICO) and the Alan Turing Institute have now opened a consultation on the first draft of their guidance on explaining decisions made with AI.
Rooted in the GDPR, the guidance sets out four key principles that must be considered when developing AI decision-making systems:
- Be transparent: make your use of AI for decision-making obvious and appropriately explain the decisions you make to individuals in a meaningful way.
- Be accountable: ensure appropriate oversight of your AI decision systems, and be answerable to others.
- Consider context: there is no one-size-fits-all approach to explaining AI-assisted decisions.
- Reflect on impacts: ask and answer questions about the ethical purposes and objectives of your AI project at the initial stages of formulating the problem and defining the outcome.
Following on from the above principles, use of an AI system to process personal data will usually meet the GDPR’s qualifying criteria for when a Data Protection Impact Assessment (DPIA) is required, because it involves innovative technology and the automated processing of data. A DPIA is used to identify data protection risks to individuals’ interests arising from the processing; it is an ongoing exercise and should be embedded into an institution’s processes. Any risk should be minimised, and an assessment made of whether any remaining risk is justified.
The guidance can be found here. The ICO will be consulting on it until 24 January 2020, and the final version of the guidance, taking the consultation feedback into account, will be published later in 2020.
For advice or guidance on any other commercial or legal issue, a member of our team can walk you through everything. Click here to discuss.