2020 Election Study: An In-Depth Look at Misinformation and Account Suspensions on Twitter

The 2020 US presidential election was a highly contentious period marked by the proliferation of misinformation on social media platforms like Twitter. To understand the dynamics of misinformation sharing and its potential consequences, researchers conducted a comprehensive study examining the relationship between political orientation, low-quality news dissemination, and account suspensions on Twitter during and after the election. The study combined data collection from multiple sources, statistical modeling, and policy simulations to untangle how these factors interact.

The researchers began by collecting a vast dataset of tweets from users who engaged with the election hashtags #Trump2020 and #VoteBidenHarris2020 on October 6, 2020. They also gathered data on the users’ tweeting history, including the domains they shared. This data was carefully filtered to focus on users who shared links from a specific set of news websites previously evaluated for credibility, ensuring a reliable basis for assessing news quality. This initial dataset comprised roughly 9,000 users, balanced between supporters of both presidential candidates. Nine months later, these accounts were revisited to determine whether they had been suspended by Twitter.

Crucially, the study relied on established methods for evaluating the quality of news sources shared by users. Recognizing the infeasibility of fact-checking individual tweets at scale, they utilized existing ratings of news website credibility from professional fact-checkers and politically balanced crowdsourced assessments. These ratings were aggregated into a "low-quality news sharing score" for each user, providing a quantifiable measure of their propensity to share potentially inaccurate information.
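The aggregation step described above can be sketched in a few lines. This is a hypothetical illustration, not the study's actual code: it assumes the per-user score is simply the average credibility rating of the rated domains a user shared, inverted so that higher values mean more low-quality sharing. The domain names and ratings below are made up for the example.

```python
# Hypothetical sketch: turn domain credibility ratings into a per-user
# "low-quality news sharing score". Assumes ratings lie in [0, 1] and
# that unrated domains are simply ignored.

def low_quality_score(shared_domains, domain_ratings):
    """Average (1 - credibility) over a user's shared, rated domains.

    shared_domains: list of domain strings the user linked to
    domain_ratings: dict mapping domain -> credibility rating in [0, 1]
    Returns None if the user shared no rated domains.
    """
    rated = [domain_ratings[d] for d in shared_domains if d in domain_ratings]
    if not rated:
        return None
    return 1.0 - sum(rated) / len(rated)

# Illustrative ratings (invented for this example)
ratings = {"reliable-news.example": 0.9, "dubious-news.example": 0.2}
score = low_quality_score(
    ["reliable-news.example", "dubious-news.example", "unrated.example"],
    ratings,
)
```

Note that the unrated domain drops out of the calculation, mirroring the study's restriction to users who shared links from the previously evaluated set of news websites.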

Assessing users’ political orientations was another key aspect of the study. Researchers employed a combination of methods, including hashtag usage, analysis of followed accounts, and the ideological leanings of shared news sources. They then combined these measures into an aggregate political orientation score, allowing for a nuanced understanding of users’ ideological positions along a continuous spectrum rather than simply categorizing them into binary groups.
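One standard way to combine heterogeneous signals like these into a single continuous score is to z-score each signal across users and average. The sketch below assumes that approach; the signal names and values are hypothetical, and the study's actual aggregation may differ.

```python
# Hypothetical sketch: combine several per-user political-orientation
# signals (hashtag use, followed accounts, slant of shared news) into
# one aggregate score by z-scoring each signal and averaging.
import statistics

def zscores(values):
    """Standardize a list of values to mean 0, population SD 1."""
    mu = statistics.mean(values)
    sd = statistics.pstdev(values)
    return [(v - mu) / sd for v in values]

def aggregate_orientation(signals):
    """signals: dict of signal_name -> list of per-user values (higher =
    more conservative, by convention; one entry per user, same order in
    every list). Returns one aggregate score per user."""
    standardized = [zscores(vals) for vals in signals.values()]
    return [sum(per_user) / len(standardized)
            for per_user in zip(*standardized)]

# Invented per-user values for three users across three signals
signals = {
    "hashtags":   [-1.0, 0.2, 1.0],
    "follows":    [-0.8, 0.0, 0.9],
    "news_slant": [-0.5, 0.1, 0.7],
}
scores = aggregate_orientation(signals)
```

Because each signal is standardized before averaging, no single measure dominates the aggregate, which is what makes a continuous spectrum of scores meaningful across users.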

To explore the potential impact of hypothetical suspension policies, researchers simulated different scenarios with varying levels of stringency. This allowed them to estimate the probability of suspension for users based on their low-quality news sharing behavior and gauge the potential for disparate impact on different political groups. These simulations were conducted using both low-quality news sharing and bot-likelihood as potential grounds for suspension.
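A minimal version of such a simulation might rank users by their low-quality sharing score, suspend the top fraction at a given stringency level, and then compare suspension rates across political groups. The sketch below uses synthetic data and an assumed top-k rule; the study's actual simulation details may differ.

```python
# Hedged sketch of a suspension-policy simulation: suspend the highest-
# scoring fraction of users and measure per-group suspension rates.
# All users and scores below are synthetic.

def simulate_policy(users, stringency):
    """users: list of (score, group) pairs; stringency: fraction in
    (0, 1] of users (highest scores first) hit by the simulated policy.
    Returns a dict mapping group -> suspension rate within that group."""
    ranked = sorted(users, key=lambda u: u[0], reverse=True)
    cutoff = max(1, int(len(ranked) * stringency))
    suspended = ranked[:cutoff]
    rates = {}
    for group in {g for _, g in users}:
        total = sum(1 for _, g in users if g == group)
        hit = sum(1 for _, g in suspended if g == group)
        rates[group] = hit / total
    return rates

# Synthetic (low-quality score, political group) pairs
users = [(0.9, "A"), (0.8, "A"), (0.3, "A"),
         (0.7, "B"), (0.2, "B"), (0.1, "B")]
rates = simulate_policy(users, 0.5)
```

Sweeping `stringency` over a range of values is what lets a simulation like this quantify disparate impact: even a politically neutral score-based rule can suspend one group at a higher rate if that group's score distribution is shifted upward.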

The study’s scope extended beyond the primary Twitter dataset from the 2020 election. Researchers reanalyzed several existing datasets, including Facebook sharing data from 2016, multiple sets of Twitter data from 2018 to 2023, and datasets focusing on the sharing of false claims and COVID-19 misinformation. These additional datasets provided valuable opportunities for cross-validation and for exploring similar research questions in different contexts, bolstering the robustness of the findings.

The 2016 Facebook dataset focused on information sharing behavior in the aftermath of the 2016 US election. This dataset, collected via a Facebook app, included user self-reports of political ideology and links shared on the platform, allowing comparisons with the Twitter data. The Twitter datasets from 2018 to 2023 used different sampling methods, including samples based on which political elites users followed and samples stratified by follower count, adding depth and breadth to the investigation of misinformation sharing.

Further datasets directly examined the sharing of known false claims and COVID-19 misinformation. The false claims dataset focused on identifying Twitter users who shared specific false or true news headlines, providing a more direct measure of misinformation sharing than relying on news source quality ratings. The COVID-19 dataset gathered sharing intentions for true and false claims across 16 countries, allowing for cross-cultural comparisons of misinformation sharing behavior.

By combining these diverse datasets and employing rigorous methodologies, the study aimed to comprehensively analyze the relationship between political orientation, the spread of misinformation, and the implications of platform policies like account suspensions. These analyses contribute significantly to our understanding of online information ecosystems and their impact on democratic processes. The diverse range of data sources and timeframes offers a robust perspective on the complex challenges of misinformation, paving the way for more informed discussions and potential interventions to address this critical issue.
