The Southport Stabbings: How a False Social Media Post Fueled Unrest
The fatal stabbings at a children’s dance class in Southport on July 29, 2024, ignited riots across England and Northern Ireland. While the violence cannot be blamed on a single source, the rapid spread of misinformation on social media played a significant role, particularly false claims about the suspect’s identity. A BBC investigation reveals how a single, swiftly deleted LinkedIn post falsely alleging the suspect was an illegal migrant became a catalyst for widespread unrest, demonstrating the power of online misinformation to incite real-world violence.
The investigation centers on a LinkedIn post by Eddie Murray, a local man who erroneously claimed a migrant had attacked the dance class and implied his own children were present. Mr. Murray later said he was simply relaying information he had received, but his post was among the earliest to falsely label the suspect a "migrant," and it spread rapidly. Although LinkedIn removed the post for violating its content policies, screenshots of it had been viewed more than two million times within hours, according to BBC Verify analysis.
The post’s amplification was fueled by a combination of factors, including the information vacuum in the immediate aftermath of the attack. With official details from Merseyside Police understandably limited because the suspect was a minor, online speculation filled the void. Mr. Murray’s post, presented as firsthand testimony, offered a seemingly credible narrative that was readily seized upon and disseminated by various online actors.
The spread of the misinformation was further propelled by social media influencers, some with millions of followers, and by accounts with purchased verification badges, which lent an air of authority to their posts. These included individuals like Andrew Tate, who repeated the false narrative on X (formerly Twitter), and Paul Golding, co-leader of the far-right group Britain First, who used the post to bolster anti-migrant rhetoric. The false claim about the suspect’s identity also appeared on an Indian news website, Upuknews, which presented it as "confirmed," significantly widening its reach.
The false narrative escalated further with the emergence of a fabricated name for the suspect, "Ali-Al-Shakati," circulated by accounts known for spreading misinformation. The invented name, alongside Mr. Murray’s post, was shared by prominent figures such as Laurence Fox, leader of the Reclaim Party, reaching hundreds of thousands more viewers. This cascade of misinformation, often presented alongside calls for closed borders and deportations, created a volatile online environment ripe for real-world consequences.
The misinformation culminated in a riot in Southport on July 30, fueled by online calls to action from groups such as "Southport Wake Up" on Telegram. The riot saw violent clashes and underscored the real-world danger posed by rapidly spreading online falsehoods. It also showed how far-right activists exploited the moment: David Miles of Patriotic Alternative documented and encouraged the unrest, further stoking division and hatred.
The Southport incident serves as a stark warning about the dangers of misinformation in the digital age. False narratives, amplified by social media algorithms and influential figures, can have devastating real-world consequences. The case underscores the urgent need for effective mechanisms to combat online misinformation and to hold social media platforms accountable for the content they host. The government’s subsequent review of terrorism legislation and Ofcom’s findings point to the need for legal reform and greater platform responsibility, while the delayed implementation of the Online Safety Act adds to the urgency of protecting the public from the harms of online disinformation and its potential to incite violence.