
Experts Express Heightened Concern Over the Pervasiveness of Political Disinformation.

By Press Room | January 3, 2025

The Disinformation Deluge: Navigating the Murky Waters of AI-Generated Falsehoods

The political landscape has undergone a seismic shift in recent years, grappling with the rise of generative artificial intelligence (AI) and its potential to manipulate public opinion. Deepfakes, cheap fakes, and manipulated media have become commonplace, blurring the lines between reality and fabrication. Voters are increasingly tasked with discerning authentic content from cleverly disguised falsehoods, often facing conflicting narratives about the true impact of AI on society. This burgeoning technological frontier has ignited a crucial conversation, prompting experts and journalists alike to examine the evolving nature of disinformation and its implications for democracy.

A recent panel discussion hosted by PEN America delved into this complex issue, exploring the multifaceted challenges posed by AI-generated disinformation. Moderated by disinformation expert Nina Jankowicz, the panel featured a diverse group of professionals, including Roberta Braga, founder of the Digital Democracy Institute of the Americas; Tiffany Hsu, a disinformation reporter for The New York Times; Brett Neely, supervising editor of NPR’s disinformation reporting team; and Samuel Woolley, a University of Pittsburgh professor and disinformation researcher. Their insights shed light on the growing sophistication of disinformation campaigns, the erosion of trust in institutions, and the urgent need for effective countermeasures.

One of the most pressing concerns highlighted by the panelists was the proliferation of increasingly complex disinformation campaigns. Foreign influence operations, coupled with a decline in content moderation on social media platforms, have created a fertile ground for the spread of false narratives. Elon Musk’s takeover of Twitter, now rebranded as X, and the subsequent dismantling of trust and safety teams have exacerbated this problem. Similar trends at other tech giants like Google and Meta have further weakened safeguards against online manipulation, leaving users vulnerable to a barrage of misleading information.

The nature of disinformation itself is also evolving. False narratives are becoming more personalized, targeted, and difficult to detect. Social media influencers are increasingly co-opted to disseminate hyper-partisan content, often without disclosing their affiliations. This insidious tactic blurs the lines between genuine opinion and paid promotion, further muddying the waters of online discourse. Roberta Braga emphasized the prevalence of decontextualized information and the manipulation of small truths to create misleading narratives, particularly appealing to those already predisposed to conspiracy theories.

The pervasive nature of disinformation has a corrosive effect on societal trust. While heightened awareness can encourage critical thinking, it can also fuel skepticism toward credible information sources. Brett Neely argued that propaganda often aims to sow cynicism and erode faith in institutions, discouraging public participation in the political process. This "liar's dividend," as it is often called, allows those with something to hide to exploit public distrust and obfuscate the truth. Donald Trump's false claim that photos of Vice President Kamala Harris's crowds were AI-generated exemplifies this tactic, aiming to undermine her support and cast doubt on the legitimacy of her potential electoral victories.

Despite the alarming rise of AI-generated disinformation, some experts argue that the panic is overblown. Samuel Woolley characterized this perspective as a backlash against the initial wave of alarmist predictions. He stressed the importance of nuanced analysis, acknowledging the difficulty of measuring the real-world impact of disinformation with scientific precision. Tiffany Hsu highlighted the emotional responses often triggered by new technologies, citing the example of false claims about Haitian immigrants in Ohio. While easily debunked, the sheer volume of these claims contributes to a sense of trivialization, undermining trust in the information ecosystem and legitimizing harmful stereotypes.

Roberta Braga argued that while AI itself may not be a revolutionary tool for manipulating individual beliefs, it can amplify existing manipulative tactics. Fear-mongering, cherry-picking, and emotional language remain potent tools for spreading disinformation, particularly when exploiting pre-existing prejudices. She highlighted the growing skepticism towards institutions, particularly among minority communities, where distrust of elites can be exploited to spread narratives about corporate influence and the futility of political participation.

The panelists acknowledged the challenges of combating this evolving threat. Every fact-check can feel like a small victory in a larger war. Tiffany Hsu emphasized the need for transparency in journalism, advocating for clear explanations of fact-checking processes to build public trust. She stressed the meticulous efforts required to verify information and the importance of conveying both what is known and what remains unknown. This transparency is crucial for bridging the gap between journalists and their audience, fostering a shared understanding of the challenges posed by disinformation.

Despite the daunting task ahead, the panelists expressed hope for effective interventions. Learning from global efforts to combat disinformation and fostering resilience against online falsehoods are crucial steps. Samuel Woolley underscored the power of interpersonal relationships in countering misinformation, highlighting the influence of trusted individuals in shaping beliefs. Ultimately, building resilience against disinformation requires collective action, leveraging the strength of social connections to combat the spread of falsehoods and promote informed decision-making.
