UK Regulator Threatens Social Media Giants with Sanctions Over Child Safety Failures
A recent survey conducted by Ofcom, the UK’s communications regulator, has revealed a startling statistic: 22% of children aged 8 to 17 misrepresent their age on social media platforms, claiming to be 18 or older. The finding comes as the Online Safety Act (OSA) mandates stricter age verification measures for online platforms, a requirement set to take effect in 2025. Ofcom has issued a stern warning to social media companies, emphasizing that failure to implement robust age verification systems will result in enforcement action, including potential fines of up to 10% of their global revenue. The regulator stressed the urgency of the issue, highlighting the increased risk of children being exposed to harmful content when they can easily pose as adults online.
The pervasiveness of this deceptive practice among young users underscores the inadequacy of the age verification methods currently employed by social media platforms. Despite recent initiatives by some companies to enhance online safety for young people, such as Instagram’s introduction of "teen accounts," the Ofcom survey and anecdotal evidence suggest these measures are failing to prevent children from circumventing age restrictions. Interviews conducted by the BBC with teenagers revealed how easily they could falsify their age during account creation, often encountering no verification checks whatsoever. This vulnerability was further demonstrated by the BBC’s own experiment, in which reporters created accounts on several major platforms using false ages without being challenged.
Ofcom recognizes the critical need for more robust age assurance systems. While the regulator has not yet prescribed specific technologies for age verification, it is actively evaluating various methods and will provide further guidance to the industry in the coming year. Ofcom emphasizes that simple self-declaration of age is insufficient and that platforms must implement “highly effective age assurance” measures to comply with the OSA by July 2025. This call to action comes amid growing public concern over the harm children face online, fueled by tragic incidents such as the deaths of teenagers Molly Russell and Brianna Ghey.
The Molly Rose Foundation, established in memory of Molly Russell, has described the survey findings as "incredibly shocking" and indicative of social media companies' failure to enforce their own existing rules. The foundation stresses the urgent need for effective age verification to protect children from harmful content, particularly material related to suicide and self-harm. While some platforms, such as TikTok, say they actively remove suspected underage accounts and are exploring new age verification technologies, others, including Snapchat and Meta (owner of Facebook, Instagram, and WhatsApp), have remained silent on the issue. X, formerly Twitter, also did not respond to requests for comment.
This lack of responsiveness from some major players underscores the challenge of ensuring online safety for children. The UK government faces growing pressure to strengthen the OSA, with some advocating for more drastic measures, such as banning social media access for children under 16, a policy recently adopted in Australia. The effectiveness and implications of such a ban remain to be seen, but the debate highlights the urgent need for a comprehensive and effective approach to protecting young people online.
The coming months will be crucial for social media platforms to demonstrate their commitment to child safety. Ofcom has made it clear that it expects significant improvements in age verification processes and will not hesitate to enforce the OSA, potentially imposing substantial financial penalties on non-compliant companies. The industry must prioritize the development and implementation of robust age assurance solutions to protect vulnerable young users from the potential harms lurking online. The clock is ticking, and the onus is on social media giants to prove they can create a safer digital environment for children.