The UK data watchdog has warned tech firms that it has privacy concerns about how personal information is used to develop generative AI chatbots.
The popular technology, used in programmes like advanced chatbot ChatGPT, generates content based on user prompts.
These programmes are trained on large volumes of web content and the responses are designed to mimic real conversations. The Information Commissioner’s Office (ICO) has expressed concern over generative AI’s potential to access personal information as part of this training.
“There really can be no excuse for getting the privacy implications of generative AI wrong. We’ll be working hard to make sure that organisations get it right,” said ICO director of technology and innovation, Stephen Almond.
He said large language model (LLM) software has “the potential to pose risks to data privacy if not used responsibly”.
“It doesn’t take too much imagination to see the potential for a company to quickly damage a hard-earned relationship with customers through poor use of generative AI,” he added.
Almond’s concerns follow Italy’s recent decision to ban ChatGPT, which made it the first Western country to take such a step. The country’s data protection authority said it was concerned about how the technology could affect privacy.
OpenAI, the company behind ChatGPT and other popular generative AI programmes, said it complied with data and privacy laws.
The increased attention from the ICO, as well as other regulatory bodies worldwide, comes as thousands of people, including Elon Musk and Apple co-founder Steve Wozniak, signed an open letter calling for a pause in AI research. However, it has since emerged that some signatories were fake and others have withdrawn their support.
The UK government has said its goal is to become a world leader in the AI sector, citing the implementation of a landmark regulatory framework as part of its overall strategy. It published an AI white paper last month, laying the groundwork for future regulation in the UK.