The Online Safety Bill, which sets tougher standards for social media companies, has become part of UK law after receiving Royal Assent.
From today, tech companies such as TikTok, Facebook and Snapchat will be legally required to “prevent and rapidly remove illegal content”.
The law is aimed at making the internet a safer place and will put legal requirements on companies to stop children from seeing harmful content, such as posts promoting self-harm and eating disorders.
“Today will go down as an historic moment that ensures the online safety of British society not only now, but for decades to come,” said Michelle Donelan, the technology secretary.
The controversial piece of legislation will give Ofcom greater regulatory powers to take enforcement action against companies that breach the new law, including fines of up to £18m or 10% of their global revenue – whichever is higher.
“Ofcom is not a censor, and our new powers are not about taking content down,” said Dame Melanie Dawes, chief executive of Ofcom.
“Our job is to tackle the root causes of harm. We will set new standards online, making sure sites and apps are safer by design. Importantly, we’ll also take full account of people’s rights to privacy and freedom of expression.”
Long road to Royal Assent
The Online Safety Bill has had a protracted passage through parliament. First introduced to the House of Commons in March 2022, it was passed by the House of Lords last month after much wrangling.
With the bill receiving Royal Assent from the King on Thursday – the final legislative hurdle, usually a formality – social media companies will need to be more transparent about the harms on their sites and publish risk assessments.
Privacy campaigners and tech companies had pushed back on elements of the Online Safety Bill.
Both WhatsApp and Signal threatened to leave the UK if the bill forced the companies to undermine end-to-end encryption, a technology that means only the sender and recipient of a message can read its contents.
Ministers later conceded it was not “technically feasible” to scan encrypted messages for explicit child abuse material without undermining privacy.
Child safety campaigners and other advocacy groups have welcomed the Online Safety Bill becoming law.
“Having an Online Safety Act on the statute book is a watershed moment and will mean that children up and down the UK are fundamentally safer in their everyday lives,” said Sir Peter Wanless, chief executive of the NSPCC, the children’s charity.
Tech companies will need to have sufficient systems in place, such as age-checking measures, to moderate harmful material.
Julie Dawson, chief policy and regulatory officer at Yoti, a digital identity company providing age verification tools, said: “The Online Safety Act is not about excluding children from the internet; it’s about giving them an experience appropriate for their age.
“Effective age assurance technology can now make this a reality. The act will also enable users to control what content they see and which users they interact with.”
Rocio Concha, director of policy and advocacy at Which?, described it as a “major step forward in the fight back against fraud by forcing tech firms to step up and take more responsibility for stopping people being targeted by fraudulent online adverts”.
Adenike Cosgrove, a vice-president and cybersecurity strategist at Proofpoint, said that “while this law aims to increase the baseline of what good protection looks like, there is ambiguity throughout which could leave the door open to continued online risks”.
Other groups said that the Online Safety Act does not do enough to tackle harmful information at the source.
“With this new law, our freedom of expression is left in the hands of self-interested internet companies, while dangerous health misinformation is allowed to spread rampant,” said Glen Tarman, head of policy and advocacy at Full Fact, a fact-checking charity.
“Despite numerous warnings, the Online Safety Act has fallen short of the stated ambition that it would make the UK the safest place to be online.”