Social Media Giants Fail to Curb COVID-19 Misinformation, Study Finds

A new report by the Center for Countering Digital Hate (CCDH) reveals a troubling landscape of unchecked COVID-19 misinformation proliferating across major social media platforms. The study, conducted between April and May 2020, involved ten volunteers who documented hundreds of posts containing false cures, anti-vaccine propaganda, and 5G conspiracy theories related to the pandemic, and then reported them to Facebook, Twitter, and Instagram. The vast majority of the flagged posts, around 90%, remained online without any warnings or removals, raising serious concerns about the efficacy of the platforms' content moderation policies and their commitment to combating harmful falsehoods.

The CCDH report highlights the scale of the problem. Volunteers identified and reported 649 posts containing demonstrably false information about COVID-19. These posts peddled dangerous misinformation, ranging from dubious "cures" involving aspirin dissolved in hot water or high doses of vitamins to unfounded claims linking 5G technology to the virus's spread. Such misinformation not only erodes public trust in legitimate health advice but also endangers lives by encouraging people to engage in risky behaviors or reject scientifically proven preventative measures. The report argues that the continued presence of these posts, despite being flagged, points to a systemic failure by social media companies to address the issue effectively.

While both Facebook and Twitter have publicly touted their efforts to combat COVID-19 misinformation, the CCDH's findings paint a different picture. Facebook responded to the report by arguing that the sample was "not representative" and pointing to their broader efforts, stating they had removed hundreds of thousands of posts containing false cures. They further highlighted warning labels applied to approximately 90 million pieces of COVID-19-related content, claiming these labels deterred 95% of users from viewing the original content. However, the CCDH's focused study suggests that these measures, while potentially impactful at scale, are failing to address specific instances of misinformation brought directly to the platforms' attention.

Twitter’s response was similarly defensive, stating that they prioritize content removal only when it contains a “call to action that could potentially cause harm.” They acknowledged that they would not take action against every tweet containing incomplete or disputed information. While they cited their automated systems challenging over 4.3 million accounts engaging in spammy or manipulative behaviors related to COVID-19, the low removal rate of the specifically reported posts indicates a significant gap in their response. The stark contrast between the platforms’ self-reported efforts and the CCDH’s findings underscores a critical need for greater transparency and accountability in the fight against online misinformation.

The CCDH report has sparked widespread criticism of the social media giants, with critics accusing them of failing to adequately address the spread of dangerous falsehoods. Imran Ahmed, the CEO of the CCDH, characterized the companies as "shirking their responsibilities," arguing that their reporting and moderation systems are "simply not fit for purpose." Rosanne Palmer-White, director of the youth action group Restless Development, also expressed concern, highlighting the efforts of young people to counter misinformation while lamenting that social media firms are "letting them down." Her comments reflect the frustration of users who are actively trying to combat harmful content and feel unsupported by the platforms.

The findings of the CCDH report will add further fuel to the ongoing scrutiny of social media companies by lawmakers. The UK's Digital, Culture, Media and Sport sub-committee has already questioned both Twitter and Facebook on their handling of coronavirus misinformation, expressing dissatisfaction with earlier responses and demanding more detailed answers from senior executives. The report is likely to strengthen calls for greater regulatory oversight and stricter enforcement of content moderation policies. The pervasiveness of COVID-19 misinformation online poses a serious threat to public health, and the inadequacy of current measures demands a more robust and proactive approach from social media platforms.
