News

UK Online Safety Regime Deemed Ineffective Against Misinformation by MPs

By Press Room · July 14, 2025

Parliamentary Inquiry Exposes Flaws in UK’s Online Safety Act, Calls for Urgent Reforms After Southport Riots

A scathing report by the UK Parliament’s Science, Innovation and Technology Committee (SITC) has revealed significant shortcomings in the Online Safety Act (OSA), warning that the legislation is ill-equipped to combat the “algorithmically accelerated misinformation” plaguing social media platforms. The committee’s inquiry, launched in the wake of the 2024 Southport riots, concluded that the OSA, even if fully implemented at the time, would have likely failed to prevent the unrest, which was fueled in part by online misinformation. The SITC’s findings highlight the urgent need for stronger measures to address the spread of harmful content and hold social media companies accountable for their role in amplifying it. The report criticizes the existing legislation for its weak misinformation provisions and the opacity of social media algorithms, advocating for a more robust regulatory framework grounded in five key principles: public safety, freedom of expression, responsibility, data control, and transparency.

The SITC’s report directly implicates social media companies’ business models in the proliferation of misinformation. Their advertising-driven revenue streams incentivize engagement above all else, often inadvertently promoting harmful or misleading content. This dynamic is exacerbated by the opaque nature of their recommendation algorithms, which remain largely undisclosed to the public and regulators. While tech giants argue that harmful content damages their brands and repels advertisers, the SITC emphasizes the lack of a comprehensive evidence base to support this claim, precisely because of the secrecy surrounding these algorithms. MPs requested access to high-level representations of these algorithms but were denied, highlighting the “shortfall in transparency” that hinders effective regulation. The report urges the government to mandate transparency and explainability of these algorithms, enabling public authorities to understand and address the causal link between specific recommendations and real-world harm.

The SITC proposes a multi-pronged approach to tackle the issue, including stricter regulations for the digital advertising ecosystem and new duties for platforms to identify and mitigate misinformation risks. The committee recommends “clear and enforceable standards” for digital advertising to disincentivize the amplification of false information. Furthermore, it calls for collaboration between the government, Ofcom (the UK’s communications regulator), and platforms to identify and track disinformation actors and their tactics. Specifically, the SITC advocates for the development of tools to algorithmically deprioritize fact-checked misleading content or content from unreliable sources, while emphasizing the importance of preserving legitimate free expression. This careful balance between combating misinformation and protecting free speech underscores the complexity of the challenge.

Addressing the core business models that incentivize misinformation is crucial, according to the SITC. The report identifies a regulatory gap in the oversight of digital advertising, with the current focus primarily on harmful advertising content rather than the monetization of harmful content through advertising. To remedy this, the committee proposes establishing an independent body, separate from industry influence, to regulate and scrutinize the complex automated supply chain of digital advertising. This new entity, or alternatively an expansion of Ofcom’s powers, would be tasked with preventing the spread of harmful or misleading content through any digital means. This broadened scope recognizes that the issue transcends specific technologies or sectors.

While generative artificial intelligence (GenAI) played a limited role in the Southport riots, the SITC expresses significant concern about its potential to exacerbate future crises. The low cost, accessibility, and rapid advancement of GenAI enable the creation of vast quantities of convincing deceptive content. To preemptively address this threat, the report urges legislation to regulate GenAI platforms similarly to other high-risk online services. This legislation should mandate risk assessments, transparency regarding content curation and safeguards, user feedback mechanisms, and measures to prevent children from accessing harmful content. Crucially, the SITC recommends mandatory labeling of all AI-generated content with irremovable watermarks and metadata; this will help users identify synthetic media and mitigate its potential for misuse.

The SITC’s comprehensive report serves as a stark warning about the inadequacies of the current online safety framework in the face of evolving technological threats. The committee’s recommendations, including greater transparency of algorithms, stricter advertising regulations, and proactive measures to address the challenges posed by GenAI, offer a roadmap for strengthening the UK’s online safety regime. By adopting these principles and recommendations, the government can take meaningful steps to protect the public from the harms of online misinformation and prevent future incidents like the Southport riots. The report emphasizes the urgent need for action, recognizing that the online landscape is constantly evolving and that proactive measures are essential to safeguard the public interest.

© 2025 DISA. All Rights Reserved.