The AI Safety Institute (AISI), a state-backed body tasked with developing the UK’s understanding of the risks of artificial intelligence, has added a Google DeepMind employee and an Oxford professor to its research team.
Geoffrey Irving has been announced as the research director for the institute, which was founded last November off the back of the AI Safety Summit.
Irving’s background includes work for Google DeepMind, where he is currently a safety researcher; OpenAI, where he spent two years as a member of the technical staff; and Google Brain, Alphabet’s former AI research division.
“Being a world leader in AI safety is only possible if we attract the world’s top AI safety talent to work in the institutions we are building,” said Michelle Donelan, the tech secretary.
“Geoffrey Irving more than fits that bill – bringing a wealth of expertise from his work at Google DeepMind as he now steps up to become the AI Safety Institute’s research director.
“I have made it my mission to drive the Institute forward, and it is now a world-leading organisation despite only being launched a few short months ago at the AI Safety Summit.”
The institute has also announced that Professor Chris Summerfield from the University of Oxford will join its research team.
The Department for Science, Innovation and Technology (DSIT) said the AISI team now numbers 24 researchers, and that it aims to triple that figure by the end of the year.
“As we build more powerful AI systems, it is essential for us to be able to coordinate between the private sector, government and civil society,” said Irving.
“This requires deep capability within government. Over 2023 I have been very impressed with the progress made by the UK via the AI Safety Institute and AI Safety Summit and am excited to join the team.”
The announcement comes as AISI chair Ian Hogarth visits the US to deepen collaboration between the two countries on AI safety. The trip includes visits to California and Washington.