It’s time to strengthen data protection law for the AI era

How improving the Data Protection and Digital Information Bill can help ensure that AI and data work for people and society

Matt Davies

Data and AI are increasingly embedded in our economy, public services and communities. Algorithms can be, and are being, used to make life-altering decisions, whether that's about our employment, finances or exam results. In the UK, AI tools have been adopted by businesses across most sectors of the economy, with varying levels of uptake and success – and the rise of easily accessible general-purpose AI tools like ChatGPT is accelerating the shift to a more automated, data-driven world.

While we have yet to understand the full extent of these AI systems' impact on society and the economy, we do know that their increased use means that more and more people will be subject to automated decision-making in new and impactful contexts. And we have ample evidence that leaving important decisions to technology can go badly wrong.

When technology is in charge

The Post Office scandal is perhaps the most prominent recent example of this in the UK. Hundreds of postmasters were prosecuted for theft and fraud on the evidence of flawed accounting software, Horizon, with severe consequences for them and their families. Horizon was not an AI or a data-driven system in the modern sense – it was a software system used for accounting and stocktaking. It does, however, illustrate many of the dangers of integrating complex technological systems into our economy at pace and uncritically.

Where important decisions – such as hire-and-fire processes, loan applications or judgements about eligibility for benefits – are delegated to automated systems, they become less transparent and harder to explain. Systematic bias, technical failings or individual circumstances for which the system has not been trained can result in unfair outcomes. And without meaningful human oversight, it can be difficult for people to appeal decisions when things go wrong.

A survey of the UK public carried out last year by the Ada Lovelace Institute underscored that the public recognises these challenges, and is concerned that an over-reliance on technology will negatively affect people’s agency and autonomy. More than half (59 per cent) of respondents said that they would like clear procedures in place for appealing to a human against an AI decision, with nearly as many (54 per cent) saying that they want ‘clear explanations of how AI works’.

Data protection – the ‘first line of defence’ against AI harm

These procedures do, to a certain extent, exist in law already in the form of the UK’s data protection regime: the UK GDPR. Most people associate data protection, fairly or unfairly, with ubiquitous cookie pop-ups on websites and the right to unsubscribe from unwanted email lists, but it is in fact a wide-ranging body of law that also provides several protections against harms from AI technologies.

In particular, Article 22 of the UK GDPR largely prohibits making decisions about individuals based solely on automated processing of personal data where those decisions would have 'legal or similarly significant' effects, unless the processing is necessary for a contract, based on consent or authorised by law. In other words, it creates a requirement for human involvement in significant decisions, whether that's a performance review at work or an application for a mortgage.

While limited and imperfect, this requirement provides important opportunities for mitigating possible harms, as well as a paper trail to support future investigations. The threat of fines from the Information Commissioner's Office (ICO) or of legal action can incentivise organisations to take complaints seriously and to act on them. A clear example is Deliveroo's use of the 'Frank' platform to manage more than 8,000 gig-economy delivery riders through automated decision-making. The Italian Data Protection Authority found this practice unlawful, holding that it was harmful to riders to have an automated system deciding who gets work and who doesn't.

Data protection law provides an important 'first line of defence' against unjustified AI decision-making. We can and should layer other protections on top, such as requirements on AI developers to test and evaluate their systems. But data protection regulates AI at its foundation – data – giving regulators a bedrock to build on.

Data protection reform: the wrong direction?

Data protection is fundamental to regulating AI. But instead of strengthening it for the AI era, the Government’s proposed reforms to data protection law in the Data Protection and Digital Information Bill, currently before the House of Lords, will weaken existing protections.

The Bill removes the prohibition on most types of automated decision-making. Instead, it creates requirements for organisations using automated systems to have safeguards in place, such as measures to enable an individual to contest the decision. It also gives new powers to the Secretary of State to define the phrases ‘similarly significant effect’ and ‘meaningful human involvement’ in the context of automated decision-making.

The effect of this would be to dramatically increase the contexts in which automated decision-making can be used. Instead of the onus being on organisations deploying automated systems to prove that their use is lawful, those organisations will be assumed to be compliant unless it can be shown that they have failed to implement the required safeguards appropriately. This shifts the burden of proof from organisations to the people affected by automated decisions, who would need to establish that safeguards were not respected and then complain to the ICO or seek remedy through the courts.

Independent legal analysis commissioned by Ada last year found that these changes are likely to erode the incentives that currently exist for organisations to properly assess and manage any systems being used to make automated decisions. With these reforms, it will be simpler to make automated decisions about people without their knowledge, and without seeking their consent.

Strengthening data protection for the AI era

It’s not too late for the Government to change course and strengthen our data protection laws. There are already some positive aspects to the Bill’s changes: for example, we welcome the Government’s intent to clarify the wording around the contested terms ‘meaningful human involvement’ and ‘similarly significant effect’. Meaningful human review is a key component of appropriate oversight of automated decision-making, of protecting individuals from unfair treatment and of offering an avenue for redress. We think this could go further, with legislation specifying what ‘meaningful human review’ should consist of: for a review to be meaningful, it needs to be performed by a person with the necessary competence, training, understanding of the data and authority to alter the decision.

We also think the Bill is an opportunity to give people greater transparency about when automated decision-making is being used, and a right to opt out of it. Critically, and as independent research from AWO has identified, people affected by automated decisions currently have no right to receive detailed contextual or personalised information about how a decision was reached. This matters because it is only with that detailed, personalised explanation that someone affected by an automated decision can understand whether a mistake has been made and meaningfully pursue redress.

We’re calling on the Government and Parliamentarians from all parties to work with us on making these improvements to the Bill. Data protection law is our first line of defence against AI harms: let’s make sure it’s fit for the AI era.

This article originally appeared on adalovelaceinstitute.org
