Social Media Giants Fail to Curb COVID-19 Misinformation, Study Finds
A recent report by the Center for Countering Digital Hate (CCDH) reveals the alarming prevalence of COVID-19 misinformation on social media platforms, despite repeated assurances from tech giants that they are actively combating the issue. In the study, ten volunteers identified and reported 649 posts containing false cures, anti-vaccine propaganda, and conspiracy theories to Facebook and Twitter; a staggering 90% remained online afterwards without any warnings or removal. The findings raise serious concerns about the efficacy of these platforms’ content moderation policies and their commitment to protecting users from harmful misinformation during a global health crisis.
The discovered posts promoted a range of dangerous falsehoods, including claims that drinking aspirin dissolved in hot water or taking specific vitamin supplements could cure COVID-19. Conspiracy theories linking 5G technology to the virus were also rampant. The persistence of this misinformation online despite being flagged highlights the inadequacy of reporting mechanisms and the lack of proactive measures taken by social media companies to identify and remove harmful content.
Facebook, responding to the report, dismissed the sample size as "not representative" and emphasized its efforts to remove "hundreds of thousands" of posts containing false cures. The company also pointed to warning labels placed on roughly 90 million pieces of COVID-19-related content, claiming the labels deterred users from viewing the flagged content 95% of the time. The CCDH study, however, undercuts these assurances, demonstrating that a significant portion of reported misinformation remains accessible and unlabeled.
Twitter similarly defended its record, saying it prioritizes removing content that contains calls to action capable of causing harm, and pointing to its automated systems, which have challenged more than 4.3 million accounts engaged in spammy or manipulative behavior in COVID-19 discussions. Like Facebook, however, the company acknowledged that it does not act on every tweet containing incomplete or disputed information, an approach critics argue lets harmful misinformation spread unchecked. The study’s finding that Twitter acted on only 3% of reported posts underscores that apparent inaction.
Imran Ahmed, CEO of the CCDH, accused the platforms of "shirking their responsibilities," highlighting the inadequacy of their reporting systems and their failure to act even when presented with clear examples of misinformation. His sentiment was echoed by Rosanne Palmer-White, director of the youth action group Restless Development, who lamented that young people’s efforts to report harmful content were being undermined by the platforms’ inaction. Both Facebook and Twitter now face scrutiny from the UK’s Digital, Culture, Media and Sport sub-committee over their handling of coronavirus misinformation.
The study’s findings come at a critical moment, as social media platforms face mounting pressure to combat misinformation not only about COVID-19 but across a range of topics. The pandemic has exposed how vulnerable online spaces are to manipulation: inconsistent enforcement of moderation policies, over-reliance on user reporting, and the failure to proactively identify and remove harmful content all allow misinformation to persist. Addressing the problem will require a concerted effort from social media companies, policymakers, and users alike, because the fight against misinformation is ultimately a battle for the integrity of information online and for public health and safety. The stakes are too high to permit a continuation of the current reactive and often ineffective approach; a more proactive, transparent, and accountable model of content moderation is essential to ensure these platforms are not weaponized to spread harmful falsehoods.