One of the prime minister’s priorities this autumn is the successful delivery of the global AI safety summit. But what does success look like?
The AI summit is about safety. These concerns can be broadly split into two: national security risk and economic risk. National security risk must then be split further into military and civil risk.
On the military side, as evidenced by the standstill at the UN on the development of autonomous weapons – blocked at stages by the UK itself – there’s little hope of global consensus.
If China is at the summit, which it ought to be, the military aspect will be off the agenda. Geopolitical tensions between China and the US, along with the importance of arms manufacturing to the UK economy, will prove too tricky to navigate. If China isn’t there, some discussion will likely happen but it will probably be inconclusive.
On civil risk, we should make progress. This will likely mean a shared view of the types of risk we’re talking about – for example, bad actors using AI tools to build novel cyber weapons that enhance criminal activity. We may get as far as recognising the automatic creation of increasingly persuasive deepfake video, audio and text as a problem.
But will the summit agree on what to do about it? No, not at this stage. Any agreement on how to deal with disinformation during an election period will probably interact with ongoing work via the G7, and will therefore have to wait until December, as a courtesy to the Japanese, who currently chair the group.
The answer will be much the same on economic risk. We might agree about the potential effects of AI on our economies over the next decade or so, not least on jobs, but the reality is that we don’t really know for sure.
Governments must understand AI basics
This takes us to the whole point of the AI safety summit. Governments are catching up. We need to understand the basics in a way we didn’t before: what is it, what can it do, who has it, who can access it, and what can we do about the highest priority risks?
The fact our new UK national risk register, launched only two weeks ago, barely mentions AI is evidence of the knowledge gap. All we really know at this stage is that the probability of a high-risk outcome happening sooner rather than later has increased. We need to pay attention.
The key measure of success for this summit, therefore, is whether there will be agreement on how countries will work together going forward, and what role (and influence) the UK will have.
The UN is playing its part and the G7 is currently the preferred forum for the developed nations. But the UN will be slow to get going and will be limited in what it can enforce. The G7 doesn’t have the secretariat or expertise to do the work properly.
So, what next?
AI version of the G7
The last attempt to set up a new AI body, the Global Partnership on AI, failed. GPAI never ended up doing what it was supposed to do and is now an academic research function. The idea could be revived, not least if we agree to an AI equivalent to the Intergovernmental Panel on Climate Change, as Mustafa Suleyman and other experts have suggested we need.
A new regulator with enforcement powers is unlikely. The Americans won’t agree to a regulator that can tell US businesses what to do, especially when they fear that China will just do what it wants anyway.
So maybe we need something new? Perhaps an AI version of the G7 – a club of the most advanced AI nations.
There will need to be a debate about which countries would be in it. China really ought to be. We would also need to formally invite the tech companies. Some companies have already appointed their equivalent of a foreign secretary. More will have to.
And governments such as our own will have to recruit more AI experts into their civil service to ensure a balance of power. The new geopolitical era of nation states and big tech companies working together has begun.
The problem is, as it stands, other countries are a bit confused about what the UK is offering.
Rishi Sunak went big on announcing the summit, which I support, not least given I called for it many months ago. But having gone big on the announcement, he’s gone quiet on the detail.
The European Union botched its initial offer to the United States, with confused messages from commissioners Thierry Breton and Margrethe Vestager on what an EU-derived set of global rules on AI might look like. Britain, therefore, has a small post-Brexit window of opportunity to get it right.
This autumn has a busy international agenda. If the UK is really going to play an international coordination role in tech, as it should, then London will have to put in the effort and resource to make it work. The appointment of Matt Clifford and Jonathan Black as AI Summit sherpas is good, but more effort is needed.
So, in that context, what might a good outcome be?
There’s a lot of work to be done, and this will take more than a one-off summit.
Agreement on a new club of the most advanced AI nations – an AI-7 – formed in London with a London-based secretariat should therefore be the minimum level of success we expect from the summit.
Darren Jones is MP for Bristol North West and chair of the Business and Trade Committee.