The Impact of Social Media Algorithms on Misinformation: Evidence from Dr. Elena Abrusci

By Press Room, March 3, 2025

UK Grapples with the Rising Tide of Online Misinformation: A Critical Examination of Regulatory Frameworks and the Need for Enhanced Strategies

London, UK – The digital age, while offering unprecedented opportunities for information sharing and global connectivity, has also unleashed a torrent of misinformation and disinformation, posing significant challenges to democratic processes, public health, and societal cohesion. Dr. Elena Abrusci, Senior Lecturer in Law at Brunel University London, argues in her recent submission to the UK Parliament’s Science, Innovation and Technology Committee that current policy responses are insufficient to address this complex issue. Her analysis, presented as part of the Committee’s inquiry into social media, misinformation, and harmful algorithms, highlights the limitations of existing legislative frameworks and proposes a more comprehensive approach to combating the spread of harmful content online.

Dr. Abrusci contends that existing measures, including content moderation by social media platforms, media literacy programs, and nascent regulatory efforts, have failed to significantly curb the proliferation of misinformation or mitigate its detrimental effects. The advent of generative AI has exacerbated the problem by enabling the rapid creation and dissemination of synthetic and manipulated content, but it has not fundamentally altered the nature of the harm inflicted on society. The core challenges remain: distinguishing truth from falsehood in an increasingly complex information landscape, protecting individuals and communities from the harmful consequences of false narratives, and safeguarding the integrity of democratic institutions against manipulation.

The recently enacted UK Online Safety Act, a landmark piece of legislation aimed at regulating online content, has come under scrutiny for its shortcomings. Dr. Abrusci highlights several areas of concern, including what she sees as the Act’s failure to strike an appropriate balance between freedom of expression and the imperative to prevent harm. Vague definitions within the legislation create ambiguity about the scope of its application, potentially leading to inconsistent enforcement. She also criticizes the Act for giving key regulators insufficient enforcement powers, which may undermine its efficacy in holding social media platforms accountable for the content they host.

Central to the debate surrounding online content regulation is the challenge of balancing the fundamental right to freedom of expression with the need to protect individuals and groups from harm caused by misinformation. Dr. Abrusci emphasizes that while free speech is a cornerstone of democratic societies, it should not come at the expense of the safety and well-being of others. The proliferation of harmful content, including hate speech, disinformation campaigns, and targeted harassment, necessitates a nuanced approach that recognizes the potential for online platforms to be weaponized to inflict real-world damage. The question remains: how can regulatory frameworks effectively address harmful content without unduly restricting legitimate expression and open dialogue?

One specific area where the Online Safety Act falls short, according to Dr. Abrusci, is its treatment of deepfakes – highly realistic, AI-generated synthetic media that produce convincing but fabricated video and audio recordings. The Act lacks clear guidance on how service providers should address deepfakes, leaving room for inconsistent enforcement as well as the risk of regulatory overreach. The potential for deepfakes to be used for malicious purposes, including defamation, political manipulation, and the creation of non-consensual pornography, underscores the urgency of developing robust regulatory mechanisms to mitigate their harmful effects.

Finally, Dr. Abrusci argues that a concerted effort involving multiple regulatory bodies is crucial to effectively combat the spread of misinformation. The task cannot be solely delegated to Ofcom, the UK’s communications regulator. A coordinated approach involving other relevant bodies, including the Electoral Commission, the Advertising Standards Authority, and the Equality and Human Rights Commission, is essential to address the multifaceted nature of the problem. This multi-agency approach should involve sharing expertise, coordinating enforcement efforts, and developing comprehensive strategies to tackle misinformation across different domains, including political advertising, online commerce, and social media platforms. The challenge lies in forging a collaborative framework that respects the distinct mandates of each regulatory body while ensuring a unified and effective response to the pervasive threat of online misinformation.
