
Ringside at the AI bullfight

James Flint, Senior Consultant


In September the AI circus came to town in the form of this year’s CogX Festival at London’s O2 Dome. Apart from last year’s event, I’ve attended CogX every year since its inception, and it’s always hugely ambitious, fun and fascinating. 2023’s edition was no exception, with a great line-up of speakers and events, though it couldn’t quite do justice to the enormity of the Dome’s main arena, which swamped the audiences for even the biggest-name speakers with rows of empty seats; the atmosphere was much better in the side venues, the O2 Indigo and the Magazine, where the smaller scale made for more of a buzz.

This being the year that large language models (LLMs) were unleashed on an unsuspecting world, there was a great deal of emphasis on AI safety. CogX co-founder & CEO Charlie Muirhead set the tone by posing a key question for the conference to consider: “How do we get the next 10 years right?” For most of the speakers on day one – including Director of the Center for Humane Technology Tristan Harris, “Sapiens” author Yuval Noah Harari, computer scientist Stuart Russell, Conjecture CEO Connor Leahy and the UK’s Secretary of State for Science, Innovation and Technology Michelle Donelan – the answer seemed to be that we should put a lot more emphasis on AI safety.

Harris offered a five-point safety plan that included various risk mitigation recommendations, the most powerful of which (and the least likely to be implemented) was legislation to ensure that AI companies are held liable for the bad outcomes produced by their technology. Harari’s prescription had three surprisingly practical tenets, chiefly designed to counter the threat of our being overwhelmed by a tsunami of AI misinformation: do not steal (data); ban fake humans; and deny bots the right to freedom of speech.

When talk turned to regulation, however, the most common comparison was with aviation – speaker after speaker brought it up. You wouldn’t board an unregulated airliner that had a 10% chance of killing you, would you? So why would you accept the same odds from AI? But while this example is useful for emphasising that regulation is, in fact, generally a good thing, in the context of AI it also has the effect of focussing the mind on the existential risks of the technology, a.k.a. the “Terminator” scenario: the idea that AI will at some point become smarter than us and then, deliberately or not, squash us like bugs.

The industry itself is very keen on stressing this existential threat. OpenAI’s Sam Altman stood up in the US Senate earlier this year to call attention to it; Geoffrey Hinton stood down from his post at Google to do the same, and the British government has been drawn into convening a conference on the subject at Bletchley Park in November. But there’s a growing body of opinion that all this cape waving is little more than a piece of misdirection designed to distract the regulatory bull and guide its horns away from the matador’s sensitive parts. 

“Bring in regulations that stop our machines from killing us all!” chant the tech bros. But they chant it even though there is absolutely zero evidence that this might happen now or ever, given that LLMs have no actual intelligence or agency at all. What these captains of the AI industry are less vocal in calling for is the more forceful application of existing data protection and copyright regulations that would prevent them hoovering up people’s data by any means necessary and using it to train their models. And yet this regulation exists, and its proper application would do a great deal to mitigate the bad outcomes of current AI technology: biased systems, misinformation, surveillance capitalism, theft of intellectual property and so on. Yes, those things. In all this worry about our future AI overlords, let’s not lose sight of those, and their utility for other potential (human) overlords, in the here and now.

This division surfaced along another fault line that ran through the conference: the question of open source. One of the five risk mitigation measures in Harris’s plan, referred to above, was that we should limit open-source deployments of AI, a suggestion that was calmly contested by Emad Mostaque, founder of Stability AI (home of the image generator Stable Diffusion and one of the conference’s main sponsors).

While Harris thinks that AI is too dangerous to be left to the uninitiated and should be handled only by big companies whose size ensures they can be regulated and pinned with appropriate liability for the works they’ve wrought, Mostaque argues that, as with the internet before it, transparency and accountability are better served by democratising the technology, and that restricting it just invites the kind of regulatory capture described above. 

When considered in the context of Mustafa Suleyman’s insight that “within a year or two you’re going to have an AI in your pocket that can prioritise, organise and plan for you,” this question of open or not, controlled or not, becomes extremely pressing. If this is the likely short-term destination of the current phase of AI tech, then the economic and cultural stakes are very high indeed – and we can immediately see why the market leaders might be very keen to throw as many obstacles as they can into the path of potential competitors.

Suleyman, who co-founded DeepMind with Demis Hassabis and, more recently, Inflection AI with Reid Hoffman, was the only person at CogX I heard mention the forthcoming EU AI Act (and data protection regulations more generally) in positive terms, as a sensible way to ensure better outcomes from AI. Other speakers who kept their feet on the ground were Jimmy Wales (of Wikipedia fame), who spoke in a very measured way about the actual – rather than the vaunted – utility of the technology and why we shouldn’t let it become a replacement for human judgement, and Stephen Fry, who gave a characteristically elegant address about the impact of AI on the creative industries, although he did go on to frame AI in terms of the myth of Prometheus, which somewhat steered things back towards the existential.

Of course, for some the existential risk isn’t a bug, it’s a feature. Jürgen Schmidhuber is the co-inventor of the neural net architecture known as long short-term memory (LSTM) and also, thanks to the “Werner Herzog” style of his delivery, one of my favourite CogX speakers. Firm in his belief that everything that’s going on in the industry is just an expression of the universe’s growing complexity, Schmidhuber thinks that there’s no need to be afraid of AI at all, because however intelligent it becomes, it won’t compete with us here on Earth. It is space that AI and robots will go forth and colonise, an environment for which they will be far better suited than us boring old flesh bags, who – like it or not – are condemned to a future confined to Earth’s biosphere by our inability to adapt to low gravity and high radiation.

The nice thing about Schmidhuber’s vision is that it means a win-win for everyone. Assuming, of course, that extracting all the power, water and rare earth metals required to create this fantastic space-faring intelligence doesn’t accelerate climate change so much that none of us lives to see it. That would be a shame.

