Starting this July, tech companies operating in the UK will face major consequences if they fail to shield children from harmful content online.
Britain’s communications regulator Ofcom has announced a sweeping new set of rules under the Online Safety Act, aiming to transform children’s experiences on the internet by enforcing stricter safety standards on websites and apps, especially social media and gaming platforms.
“Children in the UK will have safer online lives, under transformational new protections,” said Ofcom.
What the New Rules Mean
From July 25, platforms accessed by children will need to:
- Filter out pornography and content promoting suicide, self-harm, and eating disorders
- Protect children from misogynistic, violent, or hateful material
- Tackle online bullying, abusive messages, and dangerous social media challenges
- Implement effective age checks to block underage access to adult content
- Allow children to block users, disable comments, and decline group chat invitations
- Promptly remove harmful content when flagged
Strong Enforcement Powers
Tech companies will be given until July 24 to complete a risk assessment of how their services impact children. After that, the rules kick in — and non-compliance could cost them dearly:
1. Fines of up to £18 million or 10% of worldwide revenue, whichever is greater
2. In the most serious cases, court orders blocking access to their services in the UK
Ofcom’s CEO Melanie Dawes called it “a reset for children online,” promising safer feeds and better control for young users navigating digital platforms.
This marks one of the strongest online safety regimes in the world, pushing global tech giants to take child safety more seriously or risk being shut out of one of Europe’s biggest digital markets.