What has the AI Safety Summit achieved? Let’s be honest: it’s all hollow words and no action. All I see is dither and contradiction. As far as I can see, it’s the worst possible outcome for both the UK technology sector and the progression of safe and secure artificial intelligence for the foreseeable future.
Having listened to the radio throughout the week, I’ve heard people saying that “we should address the near-term issues and that the focus on long-term risks is wrong”.
I personally would go further and say that we should directly act to mitigate the current harms and to create clarity so that actual work can be done within a clear framework of rules.
The current position is both actively harming normal citizens and allowing an unfair distribution of power to develop, whilst at the same time damaging our economic prospects by creating further doubt and uncertainty.
As the AI Safety Summit wraps up, the UK has ended up more marginalised and cornered by vested interests. Speaking truthfully, the primary driver and design of the event seem to have been to capture and manage media attention, and so it was dominated by self-promotion and celebrity: cue Elon Musk’s arrival and the subsequent media storm.
My hope had been that we would get a pragmatic and realisable agenda for regulation, one aimed at addressing both the immediate harms and injustices that AI technology can facilitate and its benefits (the latter drastically understated amid rhetoric about bio and chemical warfare perpetrated by malevolent AI). This summit could have produced that. It has not.
We also need legislation that ensures that workers are not abused in the process of creating AI models. There are some very distressing stories of people being terribly underpaid to train models, and of people who are forced by their circumstances to take work that exposes them to traumatic images and other content in the quest to make AI model behaviour more acceptable.
The UK has previously legislated to manage these kinds of harms in many other situations, for example, through copyright laws that enable radio airplay while compensating artists, and through anti-modern-slavery legislation. The argument that we cannot do so for AI because it is somehow a special technology is just nonsense.
Given the investments that the UK has made in AI recently, this outcome is frankly disastrous. The Turing Institute and AI Council were specifically imagined as a way for the UK to create the institutional capability to understand, manage, and use AI technology.
These initiatives can now be seen to have been completely undermined. Nothing that has come out of them has been of any use to the UK. Despite all the money we have spent, there is no capability to show for it.
In my view, the AI Safety Summit’s failure to deliver a clear commitment to legislation that regulates the use of AI technology will have a far more significant negative impact for the UK than the cancellation of HS2.
Simon Thompson is the head of data science at GFT.