The AI Safety Summit was a success and right to focus on frontier risks

Image credit: Kirsty O'Connor / No 10 Downing Street

Now that the dust has settled on the first AI Safety Summit, we can consider the event a success. Kudos to Prime Minister Rishi Sunak and his team for bringing together attendees with such political and commercial heft and for securing global consensus on the need for governments to act on AI safety.

AI has the potential to transform our economy and society in both beneficial and dangerous ways. When industry leaders such as Sam Altman and AI experts such as Geoff Hinton warn us of the dangers, we should take them at their word and not dismiss them as ‘doomsters’ or as being motivated primarily by regulatory capture.

Imagine if big tobacco executives had been more open about the dangers of smoking, or if big oil CEOs had pushed for stricter carbon emission regulation. It is commendable that in AI, industry leaders don’t try to hide from the societal risks of their products. Achieving consensus and moving beyond debating whether the risks are real is a big step forward.

The summit also successfully kept its focus on the most important risks posed by frontier AI. Global agreement that the powerful AI models which will be developed over the next few years should be available for testing by governments represented real progress. No longer will the major players be marking their own homework.

As we move beyond the summit, it is the rapid pace of development of frontier models that must guide how governments act. The announcements coming out of the recent OpenAI developer conference reinforce just how quickly this technology is evolving, becoming faster, cheaper and more powerful.

Governments must retain their focus on the major safety challenges of AI and avoid attempts to ‘boil the ocean’ by regulating everything AI-related. The AI Safety Institutes announced by both the UK and US will be critical to safeguarding against both the threats we know about and those we don’t.

Ian Hogarth has already brought industry-leading expertise to the UK safety taskforce, which is a good indication of the capabilities the UK Safety Institute will have.

At the international level, we welcome the fact that there will be two further international summits in the next year – in South Korea and then France. Some had suggested adopting an IPCC-type approach to AI. This would be a mistake.

Conferences held every few years, with lengthy debates around non-binding commitments, will make progress too slowly. The IPCC is 35 years old. If, in 35 years’ time, AI safety is in the place that the climate change movement is today, we will have failed.

Now that there is consensus on the risks of AI and mechanisms to open up frontier models for testing to help mitigate these risks, we can place greater emphasis on the innovations that AI will bring.

This is after all a transformational technology on a par with the development of the internet or smartphone. From healthcare to education to industry, AI will touch virtually every aspect of society and unlock innovations with the potential to save lives, transform productivity and open up exciting new opportunities.

James Clough is the co-founder and CTO of Robin AI.