Mark Zuckerberg’s Pursuit of Engagement Fuels Meta’s Abandonment of Fact-Checking
Mark Zuckerberg, CEO of Meta, prioritizes engagement above all else. This metric, a measure of how often users like, share, comment, and otherwise interact on a platform, is directly tied to Meta’s ad revenue and profitability. Zuckerberg’s relentless pursuit of increased engagement has led to a controversial decision: the dismissal of U.S. fact-checkers and a weakening of content moderation efforts across Facebook, Instagram, and Threads. This move, seemingly designed to appease the anticipated Trump administration and boost engagement, raises serious concerns about the proliferation of misinformation on Meta’s platforms.
Studies have consistently demonstrated that false information spreads significantly faster on social media than factual content, especially when it involves sensational or controversial topics such as conspiracy theories, racial prejudice, or incitements to violence. This accelerated spread translates directly to increased engagement and, consequently, higher ad revenue for Meta. The more outlandish and detached from reality a post is, the more likely it is to generate engagement. Zuckerberg’s decision to eliminate fact-checking virtually guarantees that Meta’s platforms will become breeding grounds for misinformation, mirroring the trajectory of Elon Musk’s X (formerly Twitter).
The consequences of this decision are already evident. Misinformation surrounding recent wildfires, for instance, has spread rapidly across Meta’s platforms. Zuckerberg’s stated justification for dismissing fact-checkers is that disclaimers on flagged posts discourage user interaction, an effect that runs directly counter to his goal of maximizing engagement. Experts warn that the absence of fact-checking will lead to a surge in hyper-partisan, hostile, and vitriolic content, further polarizing users and potentially radicalizing those already susceptible to extreme viewpoints.
The proliferation of extreme and misleading content is driven by its ability to evoke strong emotional responses. Such content bypasses logical reasoning and triggers immediate reactions, whether positive or negative. Every interaction, whether a like, share, or comment, contributes to engagement and, ultimately, Meta’s revenue. Even attempts to correct misinformation in the comments inadvertently boost the original post’s visibility, because Meta’s algorithm registers the interaction without distinguishing between supportive and critical engagement.
Meta’s history reveals a pattern of prioritizing engagement at almost any cost. A leaked 2016 internal memo from then-Vice President Andrew Bosworth (now Chief Technology Officer) suggested that even inciting suicides and terrorist attacks was an acceptable trade-off for the perceived benefits of connecting users. While Zuckerberg publicly distanced himself from these comments, Bosworth’s subsequent promotion suggests that this philosophy still holds sway within the company.
Similarly, Meta’s continued aggressive marketing towards children and teenagers, despite concerns about the negative impact of excessive social media use on mental health, underscores this profit-driven approach. As Facebook’s user base ages, Meta recognizes the importance of capturing younger demographics who are increasingly reliant on their phones. Whether a user is 12 or 62, ad revenue and engagement remain the primary objectives.
In a different political climate, Zuckerberg’s decision to abandon fact-checking might have faced significant public and political backlash. However, with the potential return of Donald Trump to the presidency, Meta appears to feel emboldened to prioritize engagement over ethical considerations, even if it contributes to the erosion of truth and the spread of misinformation online. This shift raises profound questions about the future of online discourse and the responsibility of social media platforms in safeguarding against the spread of harmful content.