With just two weeks to go until the UK hosts the first international summit on AI safety, the Conservative government’s AI minister spoke to UKTN about the focus of the event, criticisms of it, and the symbolic importance of its venue.
Jonathan Berry, the Viscount Camrose and a hereditary peer, was appointed by the prime minister in March as an under-secretary at the tech department. His brief includes AI and intellectual property.
In an interview with UKTN, Berry explained why the AI Safety Summit will focus on addressing the greatest potential risks artificial intelligence could pose in the future.
Critics of the summit, including Labour’s shadow tech ministers, have argued that the event’s focus on frontier AI ignores more immediate risks such as threats to job security, disinformation and biased AI models.
Frontier AI has been defined as “highly capable foundation models, which could have dangerous capabilities that are sufficient to severely threaten public safety and global security”.
“The important thing to recognise here is that frontier risks are a piece of a puzzle, they’re a very important piece, but they’re just one piece,” Berry said.
“There are risks that are very much with us now, there are risks that are a bit further out. We’re focusing for this summit on the frontier but that doesn’t mean we’re not focusing on the here and now.”
The government on Monday revealed the agenda for day one of the event, with topics including the use of AI in election disruption, the erosion of social trust and the exacerbation of global inequalities.
According to the viscount, the UK is not alone in looking to address concerns over AI technology. Berry said there are “a lot of international bodies focusing on other aspects of AI safety” and it is therefore important the UK does not add unnecessary input to “an already crowded space”.
“There are a range of very real concerns, some more imminent than others… whether that’s the very extreme existential frontier risks or risks closer to home right now,” Berry added.
“It all feeds into the same set of thinking and the same programme and we feel as a country we have a lot to offer in this space and the AI summit is just part of that.”
The Tory peer said the summit would be “just a step” on what would be a “very important journey” in determining how governments and businesses can best manage the rapidly developing technology.
Among the primary aims of the summit is to foster an international collaborative understanding of how to manage AI regulation.
The prime minister’s stated goal is to position the UK as the “geographical home of global AI safety”, as well as a world leader in innovation in the rapidly evolving space.
Rishi Sunak is hardly alone in that ambition, however, and will face rivals in the AI race at his own summit.
Berry argued that Sunak’s lofty ambitions of UK AI supremacy won’t prevent the country from collaborating meaningfully with the international community on regulation.
“We all have a deep interest in managing frontier AI risks and that interest goes far beyond where different nations choose to specialise in AI,” Berry said.
“I don’t think any player would come along to this and hold back for fear of giving one country or other some advantage in AI safety. I just don’t feel that would be within the mindset of anyone attending.”
Berry said that any country likely to have a significant impact on the AI sector should be willing to participate in a collaborative effort, regardless of existing political tension.
As details of the summit trickled out after its first announcement, there was a big question mark over whether China would be invited.
It became a more contentious issue following revelations that a parliamentary researcher was arrested earlier this year on suspicion of spying for China. Last month, the government confirmed that it had invited China to the AI summit.
Berry argues that China’s presence in the AI space is too large to be ignored in a conversation about internationally collaborative regulation.
“If China, the second-largest AI power on Earth, were not to be joining us, I think that would be very much more concerning,” said Berry, who added that snubbing China risked creating a “bipolar world in terms of approaches to frontier AI risk” that would “put all of us at much greater risk”.
“When you’re on the frontier and when you’re talking about risks with the potential to be extremely destructive for our species, I think other considerations are largely put aside.”
Bletchley Park symbolism
Announced in August, the AI Safety Summit will be held at Bletchley Park, a historic landmark in Milton Keynes that was once the headquarters of the legendary WW2 codebreakers, including Alan Turing.
“I think that’s enormously important symbolically. Bletchley Park was a place where technology was used to guarantee freedom for the world, and I think that symbol is very powerful and I certainly find it powerful and moving,” Berry said.
Though undeniably a site of great historic significance, Bletchley Park’s size has limited the summit’s capacity.
Matt Clifford, Entrepreneur First CEO and the prime minister’s representative for the AI Safety Summit, revealed in October that the conference will be limited to “about 100 attendees”, split roughly among cabinet ministers, CEOs of top AI companies, academics and civil society representatives.
Berry conceded the size of the venue was a “downside of Bletchley Park”.
“It’s quite small so that forced us into having a much smaller number of people than certainly we would like to attend,” he said.
The peer did, however, claim the size limitations gave the tech department a “real focus and discipline around attendance and around agenda that will take the whole thing forward”.
To widen engagement, the government is running a series of industry roundtables in the run-up to the summit. It held one of these on Tuesday with the trade association techUK. The meeting brought together over 100 business leaders from across the AI supply chain and was attended by Clifford, Berry and Michelle Donelan, the science and tech secretary.
The two-hour meeting is understood to have highlighted the need for broad engagement from both the public and private sectors to establish trustworthy AI governance.