
We need to talk about consent

Written by James Flint | Sep 8, 2023 4:35:48 PM
I’m greatly looking forward to next week’s CogX Festival of all things AI (at which, full disclosure, I am moderating a session). It looks to be the biggest CogX yet, which is saying something for an event that’s not known for its modesty in matters of scale. In past years it has occupied Tobacco Dock and most of the buildings around King’s Cross’s Granary Square; this year it’s dispensed with such piffling venues and gone direct to the mothership: the O2 dome.

The timing is perfect. If ever there was a year to stage an all-out AI jamboree, it’s 2023. The arrival of ChatGPT in the public consciousness this year has changed the game. The AI boom is here.

What’s particularly interesting for me, as a data protection professional with a background that includes, variously, stints as a tech journalist, tech entrepreneur and science fiction author, not to mention a lifelong fascination with the topic under discussion, is that the current moment brings all these things together. The sheer scale of the data gathered to train the latest generation of large language models (LLMs) like ChatGPT, Bard, Claude and Llama has thrown a very strong spotlight on the legal and regulatory processes surrounding data protection and intellectual property, and that’s before we even get to all the sci-fi tropes that are seemingly being played out for real.

Multiple cases are currently going through various courts, testing whether many of these processes still have legal purchase. Two of the key ones are the complaint recently filed with the Polish regulator, maintaining that OpenAI is in breach of pretty much every principle in Article 5 of the GDPR, and the rather better publicised Sarah Silverman-led lawsuit in the US suing OpenAI and Meta for copyright infringement.

A related issue that’s also being pushed to breaking point by AI, and by social media more generally (itself often the source of AI training data), is consent. Cookie notices have been a subject of fierce debate for some time, and my fellow panellists at CogX – William Seymour and Marc Serramia Amoros – have both been studying ways in which consent and privacy preferences can be handled by voice-only interfaces for AI assistants such as Amazon’s Alexa and Apple’s Siri. The practical problems involved here are challenging enough, but there are more abstract concerns, too. How, for example, can I sensibly consent to my data being ingested for specific purposes into an LLM, which is a general-purpose tool by its very nature?

And, should I withdraw my consent, how can I then get my data removed from the model, given that it is not stored in any traditional, discrete format but only as adjustments to the weights of the connections in the network?
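
To see why that’s hard in practice, here’s a deliberately toy sketch in Python (a three-point, one-weight linear model with made-up data, nothing remotely like a real LLM training pipeline) contrasting deletion of a stored record with “deletion” of a training example’s influence on learned weights:

```python
# Illustrative sketch only: the records, data points and model below are
# invented for this post, not any real provider's pipeline.

# 1) Data held as discrete records: erasure is a well-defined operation.
records = {"alice": "alice@example.com", "bob": "bob@example.com"}
del records["alice"]              # Alice's record is simply gone.
assert "alice" not in records

# 2) Data absorbed into model weights: a toy model y = w * x, trained by
#    gradient descent on three (x, y) pairs, one of which stands in for Alice's.
data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]   # (2.0, 4.1) is "Alice's" example
w = 0.0
learning_rate = 0.01
for _ in range(500):
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    w -= learning_rate * grad
print(f"trained weight: {w:.3f}")             # roughly 2.0

# Removing Alice's pair from the dataset afterwards changes nothing about w:
data = [pair for pair in data if pair != (2.0, 4.1)]
print(f"weight after deleting her record: {w:.3f}")   # identical

# Her example's influence is smeared across the shared parameter; in an LLM
# the same thing happens across billions of parameters, which is why
# "remove my data from the model" has no straightforward technical meaning.
```

The numbers are invented purely for illustration; the point is simply that erasure is well defined for stored records and ill defined for learned weights, short of retraining the model from scratch without the data in question.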

Such conundrums have led many commentators to ask if the whole concept of consent, when it pertains to digital services, is, if not wholly inappropriate, then at least not fit for purpose. Some even argue that its prevalence is an indication of effective regulatory capture by big tech, which likes it because it pushes the onus of responsibility away from the service provider and onto the service user – meaning you and me.

As an illustrative comparison, when we buy a medicine, are we asked to consent to the risks of all the potential side effects? When we buy a car, do we sign a consent to accept the risks of the brakes not working?

No. These risks are reduced ahead of time on our behalf by complex regulatory frameworks. Pharmaceuticals and motor vehicles are subject to years of testing and compliance in order to reduce risk for customers and to help manufacturers manage their potential liabilities. This doesn’t reduce these risks to zero. But it does reduce them to a level that, socially and politically, we collectively judge to be acceptable (though the exact location of the boundary is, and always should be, a subject of argument).

In other words, the regulations do the hard work of accepting these risks on our behalf so that we don’t have to, precisely because judging the level of risk in these kinds of situations is such a complicated business that the average individual customer cannot possibly be expected to make any kind of genuinely informed decision for themselves.

Via the mechanism of regulation, then, the body politic enforces risk reduction and mitigation to the point where consent is no longer a requirement. In these realms we sign consent forms only when we’re doing something out of the ordinary – engaging in a medical trial, say, or paying to drive go-karts round a track for fun. These are voluntary risks that we don’t have to take; getting in a car to drive to work, by contrast, is deemed so ordinary a part of life that asking people to consent to it would be meaningless.

The meaninglessness of consent in these situations derives from a power imbalance, in this case the lack of agency the individual has when confronted with a social norm. Choosing not to use a car to get to work or to do their jobs might not, for many people, count as a meaningful choice. Theoretically they could walk or run or cycle or ride a horse, but there are many cases in which the inefficiencies involved would make the choice no choice at all. We’ve seen something of the anger this kind of false choice can engender with the recent expansion of the London ULEZ. When you have no choice but to consent, it’s not consenting at all.

In his recent book “The Digital Republic”, the lawyer and commentator Jamie Susskind argues convincingly that many of the consents requested from us when we use digital technology – from cookie notices to the unwieldy terms & conditions on every app – are false consents in precisely this sense. And while he conscientiously traces out an edifying and eminently sensible structure of legal, political and philosophical ideas in support of this claim, he also acknowledges that he doesn’t need to look much further than the GDPR itself, which forbids the use of consent as a legal basis in situations where a significant imbalance of power between the data controller and the data subject exists.

This kind of imbalance is most commonly seen in the workplace: employers cannot rely on employee consent for much of the data processing their jobs demand, because of the power imbalance between them. The boss can be as friendly as friendly can be, but at the end of the day the employee has little choice but to consent, due to the suspicion that failure to do so would – consciously or not – jeopardise their future chances of contract renewal or promotion.

Susskind’s point is that, just as with the need to hand over a certain amount of information about ourselves so that we can do our jobs, so with the need to hand over data in order to use many of the digital tools we now have no choice but to use in order to navigate our lives. In both cases a power imbalance exists; in the latter situation the power is wielded partly by the oligarchy of giant tech firms that produce these things, and partly by social norms. The argument that I have the choice to switch to a competitor’s product is spurious when all competitors are, in this respect, much the same. But at the same time, declining to use banking apps, or email, or social media accounts altogether in protest at this would make it all but impossible for me to continue to operate as a functioning member of society. I’m therefore caught in a classic double bind between the devil of data exploitation and the deep blue sea of social death, exactly the kind of realm in which consent is no consent at all.

Susskind’s solution is one that will be familiar to the data protection community: it is to enforce privacy – and justice – by design and default, ramping up the principles listed in Article 5 of the GDPR, and a few others besides, until they are properly enforced regulations that put the onus for abiding by them on the companies providing the services rather than on the people using them (he suggests various mechanisms for doing this, which I’m not going to elaborate on here, but I do recommend reading his book). Is he right? If he is, is such a thing even achievable? Does the forthcoming EU AI Act move us in the desired direction, or does it fail to do so, given that it doesn’t directly address the issue of consent? These are questions I’m going to be thinking about during the conference and discussing during the SAIS round table on Wednesday morning. Why not come and join us, and pitch in? We’re all potential complainants on this one.