The UK will resist regulating AI in the short term, regardless of regulatory moves by the US, EU and China, according to the Conservative minister responsible for AI.
Speaking at a conference held by the Financial Times, Viscount Camrose – the minister for AI and intellectual property – said that in the “short term” there would be no new UK AI laws to ensure regulation doesn’t hurt industry growth.
Camrose said that rushing to implement regulation could not only stifle innovation but also fail to make the technology any safer.
“You are not actually making anybody as safe as it sounds,” he said. According to Camrose, preventing innovation hurts safety measures because “innovation is a very very important part of the AI equation”.
The UK is aiming to be a world leader in the AI sector and has been criticised for falling behind on regulation in comparison to the EU, which has already passed an AI act, and the US, which issued an executive order on AI safety.
Similarly, China has already implemented measures to regulate AI, a move Camrose suggested may have come too soon.
“I would never criticise any other nation’s act on this … but there is always a risk of premature regulation.”
At the start of the month, the UK hosted the AI Safety Summit that saw China, the US, European nations and other relevant parties convene to discuss the risks and possible safeguards for AI.
No official policy came from the summit; however, it did produce a joint agreement, the Bletchley Declaration, in which the 28 nations in attendance committed to collaborating on AI risk management.
The government may see AI regulation as a problem for the future, but issues arising from the technology are already surfacing.
This week, the VP of audio at British unicorn Stability AI resigned over the company’s position that AI models should be legally allowed to use copyrighted music for training purposes.