UK race to lead on AI regulation poses startup challenges

AI regulation Image credit: No 10 Downing Street via Flickr

The government wants the UK to be a global leader in AI regulation. Prime Minister Rishi Sunak, a Brexiteer, views the UK’s position outside the EU as an opportunity to forge its own path in regulating everything from financial services to public listings.

Speaking at London Tech Week last month, Sunak said he wants to make the UK the “geographical home of global AI safety regulation”.

But the UK faces stiff competition from its European neighbours on AI policy. And there are concerns that divergence on AI safety will create extra bureaucracy for businesses if one set of regulations sets a higher bar.

The EU has already made progress on its own regulatory framework for AI and while it’s still early days, there is a perception from some industry players that it could become the leading standard.

“Right now, the AI law from Europe is the one that has the most substance to it,” said Tom Gruber, the co-founder and CTO of the generative AI startup LifeScore Music.

“I think that’s the one that everyone will follow until there’s some other regulation,” Gruber told UKTN. “I think when the EU sets the standard, it will also be much easier for everyone else to say, let’s do that.”

The UK has released its own white paper outlining its intentions to govern AI, and for some firms, this leaves it in a strong position to deliver quality regulation.

Jesse Shemen, co-founder and CEO of AI voice dubbing startup Papercup – which last year raised £16m – told UKTN that “the UK is undoubtedly in a strong position to lead on AI regulation”.

Shemen praised recent initiatives from the government, such as the £100m investment towards the generative AI taskforce, which he described as a “cause for hope”.

Shemen did, however, urge the government to prioritise the needs of small AI companies as well as larger tech companies.

“Their voices must be heard if the UK wishes to position itself as an AI regulatory leader,” Shemen said.

While there is some confidence that the UK is fully capable of ironing out an effective system that both supports innovation and addresses safety concerns, it’s another matter entirely to have that system be respected as the global standard.

If the government opts to mirror the standards set by the EU and elsewhere, it raises the question of how the UK can demonstrate a competitive advantage in AI regulation.

Diverging AI rules

Whatever form the UK’s AI regulation goes on to take, it is unlikely to move the needle on the work already being done in the US and the EU.

This could create a situation in which AI companies operating globally have to navigate different and potentially entirely conflicting regulatory regimes simultaneously.

Alisa Patotskaya, CEO and founder of Immersive Fox – a generative AI startup that creates video presentations based on scripts – told UKTN that this could become a “difficult situation for multinational AI firms” if they have to “manage both UK and EU regulations, which could be a complex and time-consuming process”.

Patotskaya said the UK’s ambition to be a leader in AI regulation could be a “double-edged sword”, as it could both benefit and hinder multinational AI firms, “depending on the regulatory environment and the ability of the firms to keep up with the ever-changing landscape”.

Many of these AI firms, as software product businesses, will push their services out across borders, leaving the industry particularly susceptible to regulatory roadblocks.

The government is taking steps to play an influential role on the global AI stage. Sunak said the UK will host the first global AI summit this autumn to develop a “shared approach” for addressing problems associated with the technology.

Meanwhile, James Cleverly, the foreign secretary, this week called for international cooperation on AI at the UN Security Council.

But opposition parties are not convinced. Darren Jones, Labour MP and chair of the Business and Trade Committee, told UKTN that he could see how navigating multiple regulatory geographies could present a challenge to AI companies.

“There are different layers of concern there. There’s concern around standard setting, interoperability and even a bit of national security,” Jones said.

“We ought to get to a position – not just between the US, UK and the EU, but with other countries as well – where there is agreement on the kind of technical standards that we would like to see happen across jurisdictions.”

Jones said that when it comes to data privacy and cybersecurity, the UK and EU’s approach is likely to “not differ greatly”.

However, he did warn that if the EU’s AI Act leads to the bloc banning certain AI technologies, it would create difficulties for UK companies.