London-based Context.ai has launched its large language model (LLM) analytics service with $3.5m (£2.8m) in funding, co-led by Google Ventures.
Following the landmark release of ChatGPT, businesses have increasingly been turning to generative AI tools to automate customer service duties in the form of chatbots.
While chatbots can seem an attractive way to handle large volumes of customer interactions efficiently, most companies have very limited experience with the technology, making it difficult to gauge how well it is performing.
Context.ai is looking to solve this with an analytics product that studies the performance of AI chatbots. The Context platform tracks conversations made through a client’s chatbot, identifying areas in which it is working well and areas that require improvement.
“The current ecosystem of analytics products is built to count clicks. But as businesses add features powered by LLMs, text now becomes a primary interaction method for their users,” said Context.ai co-founder and CTO Alex Gamble.
“Making sense of this mountain of unstructured words poses an entirely new technical challenge for businesses keen to understand user behaviour. Context.ai offers a solution.”
The company claims that its analytics service can help companies reduce errors in their LLMs and monitor potential risks to the brand.
Theory Ventures joined Google in co-leading the round for Context.ai. Other participants included 20SALES and a number of angels, among them Snyk founder Guy Podjarny, Synthesia founders Victor Riparbelli and Steffen Tjerrild, and DeepMind head of product Mehdi Ghissassi.
“Context.ai solves a problem I’ve seen repeatedly — businesses developing AI products without a clear sense of who is using them and why,” said Tomasz Tunguz, founder of Theory Ventures.
“It’s challenging to iterate towards product-market fit without a strong understanding of who your users are and why they’re using your product. That’s what Context.ai provides.”
Yesterday, the UK’s cybersecurity agency issued a warning to firms interested in incorporating generative AI chatbots into their customer service departments, suggesting the technology could be manipulated by hackers to put businesses and consumers at risk.