Social Media Platforms Grapple with Misinformation Ahead of 2020 US Election
The 2020 US presidential election is unfolding under the shadow of misinformation, a threat that looms large over American democracy. Four years ago, foreign interference through social media played a significant role in the election, and experts warn that this year the risks are even greater. Several factors have heightened that concern: the proliferation of conspiracy theories like QAnon, a president who frequently uses social media to spread falsehoods and attack opponents, and a federal government that has taken minimal action against online election interference. As a result, social media platforms like Facebook, Twitter, and YouTube find themselves under immense pressure to address these challenges and safeguard the integrity of the election.
Recognizing their pivotal role, major tech companies have announced a range of initiatives to combat misinformation, including altering algorithmic recommendations, restricting the sharing of false information, and labeling misleading content. But the timing of these changes, rolled out with early voting already underway, has raised doubts about their effectiveness. Critics also point to the lack of transparency surrounding how these policies are applied and to the immense power concentrated in the hands of a few tech companies.
Facebook, in particular, has instituted a multitude of new policies. These include a ban on content aimed at voter intimidation, prominent placement of voter information panels, the creation of a “voter information center,” a ban on new political advertising in the week leading up to the election, and the labeling of premature victory claims and misleading posts from public figures. While some applaud these efforts, others argue that they are “too little, too late,” citing Facebook’s past failures to effectively address hate speech and misinformation.
Twitter has also taken steps to combat misinformation. The platform has launched an elections “hub” featuring news from reputable sources, prohibited premature victory claims, and introduced a prompt urging users to read articles before retweeting them. Experts believe these measures are likely to have a positive impact, though concerns remain about the subjective nature of labeling misleading posts and the friction the pre-retweet reading prompt adds for users.
YouTube, long criticized as a breeding ground for misinformation and conspiracy theories, has implemented fewer new policies specific to the 2020 election. The platform has emphasized its existing rules against manipulated content, hacked materials, and interference with democratic processes, and has introduced a voting information panel. Critics argue that YouTube should adopt more proactive measures, similar to Twitter’s, to limit the spread of misinformation through its algorithmic recommendations.
TikTok, the Chinese-owned video-sharing app, has taken action against election-related misinformation as well. The company has updated its policies, partnered with the Department of Homeland Security to guard against foreign influence, and collaborated with third-party fact-checking organizations to verify information. Like other platforms, TikTok has also launched an in-app guide to the 2020 elections.
The efficacy of these measures remains to be seen. While the platforms have made sweeping changes, critics argue that more must be done to address the underlying dynamics that drive the spread of misinformation. The lack of transparency around how these policies are enforced, and the potential for bias in content moderation, pose further challenges. As Americans cast their ballots, the role of social media in shaping public discourse and influencing the outcome of the election will be closely scrutinized. The future of American democracy may well depend on the ability of these platforms to effectively combat the spread of misinformation.