AI firms should not solve ethical problems: ElevenLabs safety chief

Aleksandra Pedraszewska said academics and regulators had a much better understanding of sensitive ethical questions

The head of safety at a top London AI firm has said generative AI businesses should not be left to solve ethical problems alone amid greater scrutiny of how platforms keep consumers safe.

Aleksandra Pedraszewska, who joined audio generation business ElevenLabs in February, suggested that commercial motivations could obscure ethical decision-making by individual companies.

She said: “I think most importantly I don’t consider myself or I don’t think anyone working in any safety role in a tech company should be solving any ethical problems. This is something that we need to give to organisations that focus on specific problems without the impact of the commercial motivations that might be there within the company.

“The main goal behind the AI safety function of ElevenLabs is adopting all the solutions that are already easily available and can be deployed straight away to make sure you’re not missing anything that is obvious. This should be easy for any generative AI company: there are already so many reliable solutions when it comes to content moderation and when it comes to understanding how to signal behaviour of bad actors that are behaving in a way that is problematic.

“What I’d like to think is my primary objective is making sure that we’re not missing or failing to apply solutions that are already available and come from academic research, working with academic researchers who have a much better understanding of some of the policy and ethics questions than we can have, even though we are working with some of those products on a daily basis.”

Pedraszewska gave the remarks at the TechCrunch Disrupt conference in San Francisco, at which the topic of enhanced regulation of artificial intelligence businesses was high up the agenda.

The remarks came in the wake of allegations that a chatbot developed by Character.AI was responsible for the suicide of a teenage boy.

The mother of fourteen-year-old Sewell Setzer accused the company of failing to implement proper safety measures to prevent her son from developing an inappropriate relationship with a chatbot that caused him to withdraw from his family.

Character.AI said it does not comment on pending litigation but that it is “heartbroken by the tragic loss of one of our users.”

Top AI firms have also seen the departure of senior leaders in recent months amid concerns over their approach to safety.

OpenAI chief technology officer Mira Murati resigned abruptly in September, alongside chief research officer Bob McGrew and vice president of research Barret Zoph, and following the earlier exit of chief scientist Ilya Sutskever, who co-led the firm’s “superalignment” team focused on AI’s existential dangers. January saw the resignation of the head of a team at Google that reviewed new AI products for compliance with its rules for responsible AI development.

ElevenLabs reached unicorn status in January after attracting the likes of Sequoia Capital in an $80m funding round.

In August the company opened a new office on Soho’s Wardour Street, which will serve as the base for ElevenLabs’ 20 London staff, with plans to double its headcount in the UK capital within six months.

ElevenLabs, which can replicate human voices for text-to-speech audio, said “in the coming years” it expects its London team to grow to 100 people.

Pedraszewska added that the company prioritises working with the voice acting community, ensuring that all voice actors whose personal data is used to train models are appropriately remunerated for their work. The firm has spent more than $1 million paying voice actors since its inception.
