The government has laid out five objectives for the AI Safety Summit, which include agreeing on a shared understanding of risks with international stakeholders and establishing a “forward process” for international collaboration.
The UK is continuing preparations for the first international AI Safety Summit, set to take place at Bletchley Park on 1 November. The government had previously faced criticism for a lack of information but has now outlined the summit’s core objectives.
In addition to agreeing on a shared understanding of the risks posed by AI and processes for international collaboration, the aims include: agreeing on measures that organisations should take to increase AI safety; identifying areas for collaboration on AI safety research; and showcasing how the safe development of AI will allow the technology to be used for good globally.
According to the government, the summit will also build on existing international work on AI safety at the UN, the OECD, the G7 and the Global Partnership on Artificial Intelligence (GPAI).
The summit has been given the goal of building on this work by agreeing on practical next steps to address the risks posed by AI, which include cybersecurity threats, disinformation, threats to jobs and bias in AI models.
The AI Safety Summit was first announced by Prime Minister Rishi Sunak in June, following a diplomatic visit to Washington to meet President Joe Biden.
Writing in UKTN, Labour MP Darren Jones argued that while the summit may succeed in identifying some of the key economic and civil risks posed by AI, it is unlikely that any solid agreements will be reached on what to do about them.
There are question marks over whether China will, or should, be invited to the summit.
Jones argued that China ought to be included, and recent reports suggest China is likely to have a presence at the summit, though not without some pushback from Western allies.