An Assessment of the Online Safety Act’s Suitability for its Intended Purpose

By Press Room | February 16, 2025

The UK’s Online Safety Act: A Critical Examination in the Wake of Recent Riots

The recent riots that erupted across the UK have cast a harsh spotlight on the role of social media platforms in amplifying hate speech, disseminating misinformation, and even coordinating public disorder. The unrest came in the wake of the UK’s Online Safety Act 2023, legislation heralded as a landmark achievement in making the online world safer. This juxtaposition naturally raises questions about the Act’s efficacy, its readiness for crises like the recent riots, and whether it possesses the tools to tackle the complex challenges posed by online harms.

While the Online Safety Act aims to establish a robust framework for online safety, its impact on the recent riots, and on similar events in future, remains limited by its phased implementation. The Act’s core duties, which require online service providers to assess and mitigate the risks posed by criminal content and content harmful to children, are not yet in force. These duties hinge on guidance and codes of practice from Ofcom, the designated regulator, which are still under development and not expected to be finalized until early 2025. This delay means the Act’s provisions could not have directly affected the recent unrest, even where the content in question would have fallen within its scope.

The Online Safety Act primarily focuses on systemic improvements in content moderation and risk management by online service providers. It does not empower Ofcom or the government to dictate the removal of specific content items. While hate speech, incitement to violence, and the organization of riots would likely fall under the Act’s purview once operational, its effectiveness in a rapidly evolving crisis situation remains uncertain. The Act’s emphasis on systemic change rather than individual content removal presents a challenge in responding to the immediate and dynamic nature of online activity during crises. This stands in contrast to the broadcasting regime, where Ofcom has the power to make decisions about the acceptability of specific programs.

The Act’s ability to combat mis- and disinformation, a significant factor in fueling the recent riots, is further complicated by the removal of provisions relating to content harmful to adults during the legislative process. As a result, misinformation is addressed only if it constitutes a criminal offence or is prohibited by a platform’s terms of service, and the Act does not require platforms to prohibit it. This leaves a potential gap for misinformation that, while harmful, does not reach the threshold of criminality. In the case of the riots, the spread of false information about the attacker in Southport demonstrably exacerbated tensions. The Act’s focus on assessing content against specific thresholds, rather than on the amplification and spread of such content, is a further limitation.

Existing legal mechanisms offer some avenues for addressing harmful online content, but they too have limitations in crisis situations. The video-sharing platform rules within the Communications Act 2003 mandate safeguards against illegal content and hate speech but, similarly to the Online Safety Act, do not empower direct content removal by Ofcom. These rules primarily apply to UK-based services, excluding many major international platforms. The video-on-demand rules, while potentially applicable to certain user-generated content deemed "on-demand programme services," primarily target UK-based providers and have seen limited enforcement by Ofcom.

The Online Safety Act does include a provision for "special circumstances" where the Secretary of State can direct Ofcom to take action in the face of a public safety threat. This mechanism, however, is primarily focused on transparency, requiring companies to explain their handling of the threat, rather than enabling direct content removal. It relies on the Secretary of State’s initiative and is reactive rather than proactive. Whether this provision is sufficient to address the rapid spread of harmful content in a crisis remains to be seen.

The recent riots and the subsequent criticism of the Online Safety Act, including concerns raised by London Mayor Sadiq Khan, underscore the urgent need to reevaluate its scope and effectiveness. Addressing its current limitations is crucial, particularly its constrained ability to act decisively in crisis situations and to grapple with the complexities of mis- and disinformation. The government’s stated intention to “look more broadly at social media” in light of the riots offers an opportunity to strengthen the Act and ensure it is equipped for the evolving challenges of online safety in a rapidly changing digital landscape. That could include clarifying the threshold for intervention, streamlining processes for rapid response in crises, and potentially expanding the Act’s scope to cover content harmful to adults. The evolving nature of online harms requires continuous adaptation of regulatory frameworks if the online environment is to be both safe and conducive to free expression.
