
Imperial spinout building AI model testing system raises £4m

Safe Intelligence said stricter testing of AI models has become vital


An Imperial College London AI spinout that has built a new way of testing AI models has closed a £4.15m seed funding round. 

Safe Intelligence, which claims its validation method can detect deep fragilities, has been backed by Hermann Hauser’s Amadeus Capital Partners. 

According to the company, the ability to build AI models has massively outpaced the capability to determine their quality, particularly at scale. 

Tech companies like China’s DeepSeek have demonstrated the speed at which new models can be built and the falling cost of doing so, presenting a greater risk of unvalidated products being deployed. 

The concern is made all the worse as businesses and governments move towards incorporating AI models into essential sectors such as finance, healthcare, mobility and defence. 

Safe Intelligence said its verification system could be deployed across these industries to improve safety. It offered the example of an AI investment model: the more sophisticated the model, the higher the risk of unexpected outcomes arising from combinations of inputs, and therefore of inappropriate investment or lending decisions. 

The startup said its system could verify that cases would resolve correctly despite the variance of inputs. 

“Safe Intelligence offers deep analysis of machine learning models, enabling users to gain actionable insights that can make the model more robust,” said Alessio Lomuscio, CTO of Safe Intelligence. 

“Today we have reliable hardware and very dependable software. We want to help society use robust AI as well.” 

Steven Willmott, the recently appointed CEO of Safe Intelligence, added: “Existing software development is based on a foundation of unit testing all system components. As these components become machine learning based, we can no longer be completely sure of their behaviour and, hence, of the systems we build. 

“Our mission is to provide tools to radically improve our ability to validate machine learned components and get back to a world where we can have high confidence in our systems.”   

The UK government’s AI Security Institute – recently rebranded from the AI Safety Institute – has thus far been responsible for testing large language models, which it does via the open source Inspect AI platform.

VCs OTB Ventures and Vsquared Ventures participated in the seed round alongside lead investor Amadeus. 

“Banks, insurers and other corporates using complex AI models internally are holding back from applying them to frontline, customer-facing or regulated activity because of fears that their models are not robust enough,” said Dr Manjari Chandran-Ramesh, a partner at Amadeus. 

“Safe Intelligence can identify fragilities, tackle them, and unleash the power of AI across industries from transport to finance.” 
