Deceptive AI-generated content amplified conspiracy theories and harmful narratives amid numerous major elections, according to a new report from the Alan Turing Institute.
The artificial intelligence research organisation conducted a year-long study examining how the rise of generative AI tools may have impacted the democratic process during a year that saw a significant number of international elections.
According to the report from the Institute’s Centre for Emerging Technology and Security (CETaS), there was not enough evidence to concretely prove AI content measurably affected the results of events like the recent US presidential election.
However, the group said fears remain that AI threats and the surrounding hype are “eroding trust in the information environment, allowing harmful narratives to thrive”.
For the report, researchers from CETaS analysed all major elections this year, noting numerous examples of viral AI disinformation, such as AI bot farms mimicking voters and spreading conspiracies through false celebrity endorsements.
The report has therefore called for action in four key areas. The Institute has recommended increasing barriers to deter the creation of disinformation, improving tools to detect deepfake creations, greater guidance on how the press reports major incidents online and strengthening societal capabilities for exposing disinformation.
“More than 2 billion people went to the polls this year, providing us with unprecedented evidence of the types of AI-enabled threats we face and a golden window of opportunity to protect future elections,” said Sam Stockwell, lead author and research associate at the Alan Turing Institute.
“We should be reassured that there’s a lack of evidence that AI has changed the course of an election result, but there can be no complacency. Researchers and others monitoring these issues must urgently be given better access to social media platform data, in order to effectively assess and counter the most serious malicious voter-targeting activities moving forward.”
The Alan Turing Institute has said it is difficult to truly know the impact of AI on recent elections. However, UKTN reported in July that while voters in the UK general election were likely not swayed by bots, there were significant fears about the potential harm of AI-powered disinformation.
Many of the more prominent AI labs have implemented safeguards in products like ChatGPT, Gemini and Meta AI to prevent non-consensual mimicry of public figures.
Last week, UKTN revealed that Haiper, a London-based generative AI startup backed by Octopus Ventures, had significantly weaker guardrails in place and allowed the generation of content that other AI firms have described as “harmful”.