UK Grapples with Online Disinformation Amidst Nationwide Unrest: A Call for Stronger Tech Regulation

The United Kingdom is facing a wave of civil unrest ignited by the rapid spread of online disinformation, prompting the government to consider strengthening its regulatory powers over tech platforms. The recent violence, sparked by a tragic knife attack in Southport, has exposed the vulnerability of communities to false narratives amplified by social media and exploited by extremist groups. The government’s current focus remains on prosecuting individuals inciting hatred and violence online, with the first sentences related to hate speech postings linked to the disorder already being handed down. However, Prime Minister Keir Starmer has acknowledged the need for a broader review of social media’s role in the crisis, signaling a potential shift towards stricter regulation.

The Online Safety Act (OSA), passed in September 2023, is the UK’s primary legislation for regulating online content. It mandates platforms to remove illegal material and protect users from harmful content, including hate speech. However, the law has been criticized for its perceived inadequacies in addressing the spread of disinformation. London Mayor Sadiq Khan has called the OSA "not fit for purpose," echoing concerns that the legislation lacks the teeth to effectively combat the rapid dissemination of false narratives. Furthermore, the OSA’s implementation is still in its early stages, with the regulator currently consulting on guidance. This delay has led some to argue that a review is premature, while others insist that the law’s inherent weaknesses necessitate immediate action.

The current wave of violence underscores the urgent need for effective measures to counter disinformation. False claims about the Southport attacker’s identity, erroneously portraying him as a Muslim asylum seeker who arrived via a small boat, rapidly spread online, fueled by far-right activists. This misinformation has been widely linked to the escalating unrest. While arrests related to inciting racial hatred through social media posts are a crucial first step, questions remain about how to address the broader issue of platforms facilitating the spread of such harmful content. The government’s potential review of the OSA signals a recognition that existing regulations may be insufficient to tackle the scale and speed of online disinformation.

The OSA’s effectiveness has been questioned from multiple angles. Critics argue that its drafting is flawed and that it fails to address the fundamental business models of platforms, which often profit from engagement driven by outrage. Furthermore, the previous Conservative government removed clauses targeting "legal but harmful" speech, the category under which disinformation often falls, citing concerns about free speech. However, former minister Damian Collins disputed this rationale, arguing that the removed provisions were primarily intended to enforce transparency in platforms’ application of their own terms and conditions, especially regarding content that could incite violence or hatred. This highlights the ongoing debate about balancing free speech with the need to prevent the spread of harmful misinformation.

Mainstream platforms like Facebook and X (formerly Twitter) have terms of service prohibiting hate speech and incitement to violence. However, their enforcement of these rules has often been criticized as inconsistent and reactive. Platforms typically claim they remove content once it is reported, effectively adopting a strategy of plausible deniability. A law that regulates their resources and processes for content moderation could compel a more proactive approach. The European Union’s Digital Services Act (DSA) provides a potential model. EU enforcers have been investigating X’s handling of disinformation since December, and recent events in the UK could influence these proceedings. The EU has indicated that X’s response to the UK unrest, including its handling of harmful content, may be considered as part of its ongoing investigation.

The UK government’s potential review of the OSA could lead to similar pressures on platforms. Once fully operational, the OSA is expected to require larger platforms to consistently enforce their terms of service, including provisions against misinformation. This could force platforms to adopt more proactive measures to prevent the spread of harmful content, rather than relying on reactive takedowns. The current wave of unrest, fueled by online disinformation, serves as a stark reminder of the urgent need for robust and effective regulation. The upcoming review of the OSA presents an opportunity for the UK to strengthen its approach, learning from the EU’s experience and addressing the challenges posed by the evolving landscape of online information.
