UK’s Online Safety Act Under Scrutiny After Summer Riots Fueled by Misinformation
The UK Parliament is grappling with how effectively the Online Safety Act can combat the spread of misinformation, following riots sparked by false social media posts last summer. The riots, which erupted after the fatal stabbing of three children in Southport, were fueled by inaccurate claims about the perpetrator’s identity and background that spread rapidly online. The episode has brought into sharp focus the Act’s limitations in addressing online misinformation and its potential to incite real-world violence.
A recent hearing of the House of Commons Science, Innovation and Technology Committee highlighted the divergent interpretations of the Act’s scope in relation to misinformation. Baroness Jones of Whitchurch, representing the Department for Science, Innovation and Technology (DSIT) and the Department for Business and Trade, asserted that misinformation and disinformation fall under the purview of the Online Safety Act, specifically through its "illegal harms" and "children’s" codes. This interpretation suggests that platforms are obligated to remove illegal disinformation and protect children from harmful content, including misinformation.
However, Ofcom, the communications regulator tasked with enforcing the Act, presented a more nuanced perspective. Mark Bunting, Ofcom’s online safety strategy delivery director, acknowledged that the Act does not explicitly cover all forms of misinformation. The Act does create a new false communications offense, but it applies only where the sender knows the information is false and intends to cause non-trivial harm, and proving that intent can be challenging. This ambiguity raises concerns about the Act’s ability to prevent the spread of harmful misinformation that falls short of outright illegality.
The committee also heard from the tech platforms themselves, whose representatives said that even if the Act had been fully in force during the riots, their response would not have been different. That assertion underscores the Act’s limited power to compel platforms to act proactively against misinformation, particularly where content is not explicitly illegal but still contributes to a climate of fear and violence. The absence of clear legal precedent and the difficulty of proving intent create a loophole that lets platforms evade responsibility for the spread of harmful misinformation.
The debate around the Act’s effectiveness is further complicated by the removal of provisions relating to "legal but harmful" content for adults. This decision, made by the previous government, limits the Act’s scope in addressing misinformation that may not be illegal but still poses a risk to society. The case of the Southport riots exemplifies how such misinformation can rapidly escalate into real-world violence, exploiting existing societal tensions and prejudices.
The government’s own guidance accompanying the Act states that mis- and disinformation are covered when they are illegal or harmful to children. However, the definition of "harmful" remains subjective and open to interpretation. This ambiguity creates challenges for both regulators and platforms in determining what constitutes harmful misinformation and how to effectively address it. Civil servant Talitha Rowland, director for security and online harm at DSIT, acknowledged the multifaceted nature of mis- and disinformation, highlighting the difficulty of establishing a single definition that encompasses its various forms and potential harms.
The ongoing debate within the UK Parliament underscores how difficult online misinformation is to regulate. The Online Safety Act represents a significant step towards holding platforms accountable for harmful content, but the Southport riots exposed its limits: unclear legal definitions, the difficulty of proving intent, and the removal of the "legal but harmful" provisions leave gaps through which misinformation can proliferate. The government and Ofcom now face the difficult task of clarifying the Act’s scope and strengthening its enforcement mechanisms to prevent future incidents of online-fueled violence, a task complicated by the current absence of case law interpreting the Act. As the digital landscape continues to evolve, so too must the legal framework governing it, to ensure that online spaces are safe and do not become breeding grounds for real-world harm.