Foreign Influence Operations on Social Media: Manipulation and Impact on Public Perception

By Press Room · June 17, 2025

The Looming Threat of Foreign Influence Campaigns in the 2024 US Presidential Election

The 2024 US presidential election is rapidly approaching, and with it comes a surge in foreign influence campaigns, also known as information operations. These sophisticated endeavors aim to manipulate public opinion, disseminate false narratives, and alter the behavior of target populations. Foreign actors, including Russia, China, Iran, Israel, and others, are leveraging a range of tools, from social bots and influencers to media companies and the rapidly evolving power of generative AI, to achieve their objectives. The stakes are high, as these campaigns pose a significant threat to the integrity of the democratic process.

Researchers at the Indiana University Observatory on Social Media are at the forefront of studying these influence campaigns and developing countermeasures. Their work focuses on identifying "inauthentic coordinated behavior": detecting clusters of social media accounts that exhibit suspicious patterns of activity, such as synchronized posting, coordinated amplification of specific users, sharing of identical content, and similar sequences of actions. This research has uncovered alarming tactics, such as accounts flooding platforms with hundreds of thousands of posts in a single day to manipulate the algorithms that govern trending topics and user feeds. Many campaigns also delete their content once it has served its purpose, making detection and tracing far more difficult.
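The core of this kind of detection can be illustrated with a short sketch. The snippet below is a minimal illustration of the general idea, not the Observatory's actual pipeline: it links accounts that share an unusually high fraction of identical "content keys" (shared URLs, hashes of identical text, or rounded timestamp buckets for synchronized posting) and reports the resulting clusters. The thresholds `min_shared` and `min_jaccard` are illustrative placeholders, not published values.

```python
from collections import defaultdict
from itertools import combinations
import networkx as nx

def coordinated_clusters(posts, min_shared=3, min_jaccard=0.5):
    """posts: iterable of (account_id, content_key) pairs. A content_key can
    be a shared URL, a hash of identical text, or a rounded timestamp bucket,
    depending on which coordination signal is being traced."""
    keys_by_account = defaultdict(set)
    for account, key in posts:
        keys_by_account[account].add(key)

    graph = nx.Graph()
    for a, b in combinations(keys_by_account, 2):
        shared = keys_by_account[a] & keys_by_account[b]
        if len(shared) < min_shared:
            continue
        jaccard = len(shared) / len(keys_by_account[a] | keys_by_account[b])
        if jaccard >= min_jaccard:
            graph.add_edge(a, b, weight=jaccard)

    # Connected components of the similarity graph are candidate coordinated
    # clusters; tiny components are more likely to be coincidence.
    return [c for c in nx.connected_components(graph) if len(c) >= 3]
```

Real systems layer many such signals together and weight them statistically; the point here is only that coordination is a property of account *pairs and groups*, which is why it can be caught even when each individual account looks plausible on its own.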

The Threat Beyond Traditional Adversaries: Generative AI and the Rise of Sophisticated Manipulation

While nations like Russia, China, and Iran are established actors in the information warfare arena, they are not the only foreign governments attempting to influence US politics through social media manipulation. The emergence of generative AI has significantly amplified the threat, giving malicious actors powerful new tools to create and manage vast networks of fake accounts. Researchers have identified thousands of accounts that use AI-generated profile pictures to spread scams, disseminate spam, and amplify coordinated messages. The scale of this activity is staggering: estimates suggest at least 10,000 such accounts were active daily on platforms like X (formerly Twitter), even before significant cuts to the platform's trust and safety teams.
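One published heuristic for spotting such accounts at scale exploits a quirk of StyleGAN-family face generators: they render both eyes at nearly the same normalized position in every face they produce, whereas real profile photos vary widely in framing. The sketch below illustrates that idea using the dlib-based `face_recognition` library; the canonical coordinates and tolerance are illustrative assumptions, not calibrated values from any paper.

```python
import numpy as np
import face_recognition  # dlib-based landmark detector: pip install face_recognition

# StyleGAN-family generators tend to place both eyes at nearly the same
# normalized position in every face they produce; real profile photos vary
# far more. These coordinates and the tolerance are illustrative guesses.
CANONICAL_EYES = np.array([0.385, 0.48, 0.615, 0.48])  # left x, y, right x, y
TOLERANCE = 0.02

def normalized_eye_positions(image_path):
    image = face_recognition.load_image_file(image_path)
    faces = face_recognition.face_landmarks(image)
    if not faces:
        return None  # no detectable face
    h, w = image.shape[:2]
    left = np.mean(faces[0]["left_eye"], axis=0) / (w, h)
    right = np.mean(faces[0]["right_eye"], axis=0) / (w, h)
    return np.concatenate([left, right])

def looks_gan_generated(image_path):
    eyes = normalized_eye_positions(image_path)
    return eyes is not None and bool(np.all(np.abs(eyes - CANONICAL_EYES) < TOLERANCE))
```

A single match proves nothing; in practice the signal becomes meaningful when many accounts in the same coordinated cluster all trip it at once.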

The use of generative AI extends beyond creating fake profiles. Networks of bots built on ChatGPT and other large language models have been caught generating human-like content, promoting fake news websites, and perpetuating cryptocurrency scams. These bots reply to posts and retweet content in ways that blur the line between human and machine activity, and current detection methods are struggling to keep pace with the rapid advances in generative AI, making it increasingly difficult to distinguish authentic users from AI-powered bots.
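Detection is not hopeless, though: careless operators sometimes pipe raw LLM output straight to the platform, and at least one ChatGPT-powered botnet was reportedly surfaced by searching for exactly such self-disclosure boilerplate. The sketch below shows that simple filter; the phrase list is illustrative rather than exhaustive, and the approach catches only the sloppiest operations.

```python
import re

# Boilerplate that leaks into posts when an operator pastes unedited LLM
# output, including refusal messages. Illustrative patterns, not exhaustive.
SELF_DISCLOSURE = re.compile(
    r"as an ai language model"
    r"|i cannot fulfill (?:that|this) request"
    r"|i'm sorry, but i can(?:not|'t)",
    re.IGNORECASE,
)

def accounts_with_llm_slips(posts):
    """posts: iterable of (account_id, text). Yields each account the first
    time it posts an unedited LLM boilerplate phrase."""
    seen = set()
    for account, text in posts:
        if account not in seen and SELF_DISCLOSURE.search(text):
            seen.add(account)
            yield account
```

As operators learn to scrub such tells, detection will have to lean more on the behavioral and coordination signals described earlier than on the text itself.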

Measuring the Impact and Modeling Manipulation Tactics

Quantifying the real-world impact of these influence operations is challenging. Ethical considerations limit the ability to conduct experiments that could potentially manipulate online communities. As a result, the extent to which these campaigns can sway election outcomes remains uncertain. However, researchers are developing innovative methods to understand societal vulnerability to these manipulation tactics. The SimSoM model, developed at Indiana University, simulates the spread of information on social media platforms, incorporating key elements like follower networks, feed algorithms, sharing mechanisms, and content quality metrics.

This model allows researchers to simulate scenarios involving malicious actors spreading low-quality information, such as disinformation, conspiracy theories, and harmful content. By measuring the quality of information that targeted users are exposed to, researchers can assess the effectiveness of various manipulation tactics. Simulations have revealed that "infiltration," where fake accounts build relationships with genuine users, is a highly effective tactic, significantly reducing the quality of content within the network. Combining infiltration with "flooding," the high-volume posting of low-quality but engaging content, further amplifies the negative impact, drastically lowering the quality of information users encounter.
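SimSoM itself is documented by its authors; the sketch below is only a toy reconstruction from the description above, with illustrative parameters throughout. Genuine agents post content whose appeal tracks its quality and reshare from their feeds in proportion to appeal; bad actors post zero-quality but maximally appealing messages; `infiltration` controls how many genuine users follow bad accounts, and `flood` multiplies the bad accounts' posting volume. The measured outcome is the average quality of messages sitting in genuine users' feeds.

```python
import random
import statistics

FEED_SIZE = 15  # messages a user sees, newest first

class Agent:
    def __init__(self, is_bad=False):
        self.is_bad = is_bad
        self.followers = []  # agents whose feeds receive this agent's posts
        self.feed = []       # list of (quality, appeal) tuples

def push(agent, message):
    agent.feed.insert(0, message)
    del agent.feed[FEED_SIZE:]

def step(agents, p_reshare=0.5, flood=1):
    poster = random.choice(agents)
    for _ in range(flood if poster.is_bad else 1):
        if poster.is_bad:
            message = (0.0, 1.0)            # zero quality, maximal appeal
        elif poster.feed and random.random() < p_reshare:
            weights = [m[1] + 1e-9 for m in poster.feed]
            message = random.choices(poster.feed, weights=weights)[0]
        else:
            q = random.random()
            message = (q, q)                # honest content: appeal tracks quality
        for follower in poster.followers:
            push(follower, message)

def build_network(n_real=200, n_bad=20, k_follow=10, infiltration=0.0):
    real = [Agent() for _ in range(n_real)]
    bad = [Agent(is_bad=True) for _ in range(n_bad)]
    for a in real:
        for target in random.sample([x for x in real if x is not a], k_follow):
            target.followers.append(a)      # a follows target
        for b in bad:
            if random.random() < infiltration:
                b.followers.append(a)       # infiltration: a follows a bad account
    return real + bad

def average_feed_quality(agents):
    qs = [m[0] for a in agents if not a.is_bad for m in a.feed]
    return statistics.mean(qs) if qs else float("nan")

for label, infiltration, flood in [("baseline", 0.0, 1),
                                   ("infiltration", 0.1, 1),
                                   ("infiltration + flooding", 0.1, 10)]:
    random.seed(1)
    agents = build_network(infiltration=infiltration)
    for _ in range(20000):
        step(agents, flood=flood)
    print(f"{label}: {average_feed_quality(agents):.3f}")
```

Even this crude version should show the qualitative pattern the researchers report: infiltration alone depresses average feed quality, and adding flooding drives it down much further, because each bad activation now displaces most of a target's finite feed.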

Combating Coordinated Manipulation: Platforms, Regulation, and User Empowerment

The rise of generative AI has dramatically lowered the cost and effort required for malicious actors to launch sophisticated influence operations. These actors can now effortlessly create and manage vast networks of believable accounts, engage in continuous interaction with real users, and generate large volumes of engaging but harmful content. The observed tactics of infiltration and flooding are being widely employed to manipulate user feeds and spread deceptive information.

Combating this threat requires a multi-pronged approach. Social media platforms must strengthen content moderation to identify and disrupt manipulation campaigns and to bolster user resilience against these tactics: making fake accounts harder to create, limiting automated posting, and challenging accounts that exhibit suspicious activity. Platforms should also educate users about the risks of deceptive AI-generated content and encourage the sharing of accurate information.

Regulation also has a crucial role to play. Given the open-source nature of many AI models and data, regulating content generation alone is insufficient. Instead, regulations should focus on content dissemination, requiring platforms to verify the accuracy or provenance of content before it reaches a large audience. These measures would protect free speech by ensuring that authentic voices and opinions are not drowned out by coordinated manipulation campaigns, which can effectively act as a form of censorship by limiting the visibility of genuine discourse. The right to free speech does not equate to a right of unfettered exposure, and protecting the integrity of online discourse requires proactive efforts to combat manipulation and ensure a level playing field for all voices.

