
Should the UK follow the EU AI Act? Key takeaways from UKTN’s AI safety roundtable

UKTN hosted a roundtable discussion to further the conversation on AI safety and regulation following the summits held by the UK government

With the arrival of a new UK government, the prospect of tighter rules on artificial intelligence is once again high up the agenda. But opinions differ on how the rules should take shape, who they should apply to and whether the UK should emulate the regulatory frameworks that have been developed overseas.

In September 2024, UKTN hosted a roundtable discussion on AI Safety in partnership with Shoosmiths and KPMG.

The roundtable aimed to further the conversation on best practices in AI safety and regulation following the summits held by the UK government.

The conversation was held under the Chatham House Rule, so no quotes or points raised have been attributed to any one participant in the discussion.

Chaired by senior reporter Oscar Hornstein, the roundtable featured contributions from:

  • George Margerson, deputy director for strategy and delivery at the AI Safety Institute
  • Sue Daley, director for technology and innovation at techUK
  • Laura Gonzalez, chief of staff at Synthesia
  • James Clough, co-founder and CTO of Robin AI
  • Mark Taylor, CEO of Automated Analytics
  • Alex Kirkhope, partner at Shoosmiths
  • Leanne Allen, partner and UK head of AI at KPMG

AI risks are not a monolith

Before the roundtable could discuss how to tackle the risks of AI, it first had to deal with the not-insignificant task of setting out what those risks were.

The general consensus was that while there were many legitimate concerns about how AI technology could harm society, it was unhelpful to view the broad range of risks covering an even broader range of technologies under a single banner.

The risks were therefore broken down into two categories: long-term existential risks, the concern that AI could damage the fabric of society and the human race; and immediate risks that are already present, such as job losses following AI adoption and copyright infringement via training data.

When looking at the existential risks, there was agreement that while they should be taken seriously, the likelihood of an AI outcome in which “everybody dies” is probably reasonably low.

For now, the belief is that even the most powerful models do not present that kind of “critical and severe risk”. What was a concern, however, was that the existing safeguards put in place by the developers of those models are “universally easy to break”.

Therefore, in tackling both the biggest and smallest dangers of AI, the government should offer rules and guidance, but a greater onus must fall on developers to explain how they are managing risks.

For the most part, AI companies have had to rely on internal ethics boards to guide best practice in lieu of legislation. The consensus was that this would need to continue even after any UK AI act is passed, because legislation would still require a degree of interpretation.

Sectoral approach

Staying on the theme of distinguishing different kinds of AI and risk, the group debated whether the sectoral approach – initially outlined by the previous Conservative government and generally maintained by the incumbents – was the best option.

It was accepted that different sectors will have different needs, so there was certainly logic to the approach.

However, concerns were raised that sectors do not progress at the same speed, with some calling for an overarching framework immediately, even a “quick and dirty” one, so that broad best practices could be enforced across every industry straight away.

Others warned that the rapid development of AI means rushing that framework would most likely result in it quickly becoming outdated.

Regardless, there is demand from businesses for some general form of guiding policy, one that acknowledges the rules might change but sets out certain practices to adopt as a starting point.

Another concern flagged regarding a sector-based approach is that the upskilling, recruitment and funding burdens on the dozens of UK regulators, from Ofcom and the ICO to the Food Standards Agency and the Gambling Commission, could prove a major barrier.

There is, however, some precedent for regulators sharing expertise, and similarly for regulators getting to grips with emerging technology areas like cybersecurity, so the task was said to be difficult but not impossible.

EU AI Act

In the absence of British AI legislation, it was suggested that AI firms would default to whatever the most clearly defined framework is, for example, the European Union’s AI Act.

There was some disagreement over the effectiveness of the act, with some arguing that if AI developers have a “really transformative technology” that might not satisfy EU demands but is otherwise not breaking any laws, they will simply not release that product in the EU.

Whether that means EU residents won’t be able to access it, or are even encouraged to vote differently as a result, the challenge was said to be one for the EU, not the companies.

The enforceability of European AI legislation was also questioned. Some suggested that, like GDPR, it could become a rule book that appears strict on paper but is loosely enforced in practice: the bulk of firms are not 100% GDPR compliant, yet face no punishment.

When asked if the EU’s comparatively rapid progress in passing AI legislation meant it had surpassed the UK’s leading position in the global AI conversation, there was clear agreement that an international leadership role was still very much up for grabs.

While some pointed to recent elections radically reshaping the EU compared to when it passed the AI Act, others went so far as to claim the EU had removed itself from the leadership race by ignoring innovation in its regulation.
