OpenAI Dismantles Clandestine Chinese-Linked Operations

By Press Room | June 6, 2025

Chinese Propagandists Leveraging ChatGPT for Social Media Manipulation and Internal Reporting

OpenAI, the artificial intelligence company behind ChatGPT, has reported a concerning trend: Chinese propagandists are increasingly using its chatbot to craft social media posts, comments, and even internal performance reviews documenting their influence operations. The revelation highlights China’s escalating efforts to shape online narratives and conduct surveillance on a global scale. OpenAI researchers found that these operations are not limited to public-facing propaganda; they also extend to internal documentation and marketing materials, reflecting a sophisticated approach to disinformation campaigns.

Over the past three months, OpenAI has disrupted ten malicious operations involving its AI tools and banned the associated accounts. Four of these operations were linked to China and targeted a range of countries, topics, and even online gaming communities. They employed a blend of influence tactics, social engineering, and surveillance across various platforms and websites, indicating a coordinated, multi-faceted approach. The discovery of internal performance reviews generated with ChatGPT underscores how deeply these groups have integrated AI into their workflows, using it to document their activities and assess their effectiveness.

One notable operation, dubbed "Sneer Review" by OpenAI, utilized ChatGPT to generate comments in English, Chinese, and Urdu on platforms like TikTok, X (formerly Twitter), Reddit, and Facebook. These comments covered a range of topics, including the Trump administration’s policies and a Taiwanese game critical of the Chinese Communist Party. The operation often generated both initial posts and replies, creating an illusion of organic engagement. Furthermore, it leveraged ChatGPT to fabricate critical comments about the game and subsequently write an article falsely claiming widespread backlash.

Beyond generating external propaganda, "Sneer Review" also used ChatGPT for internal purposes, including creating performance reviews detailing the operation’s setup and execution. This internal use of AI highlights the increasing reliance on these tools for both external manipulation and internal management. Another China-linked operation impersonated journalists and geopolitical analysts, employing ChatGPT to craft posts, biographies, translate messages, and analyze data. This operation even targeted correspondence sent to a US Senator regarding an official’s nomination, although OpenAI could not independently verify the authenticity of the communication.

These operations further employed ChatGPT to generate marketing materials promoting their services, openly advertising their capabilities in conducting fake social media campaigns and social engineering for intelligence gathering. This brazen self-promotion underscores a growing confidence in their ability to manipulate online discourse. Previous OpenAI reports have identified other China-linked operations, including one focused on monitoring Western protests for the Chinese security services, showcasing the diverse range of tactics employed.

OpenAI’s report also revealed disruptions of influence campaigns linked to Russia, Iran, a Philippines-based commercial marketing company, a Cambodian recruitment scam, and a deceptive employment campaign with North Korean characteristics. This highlights the global nature of the challenge and the diverse actors exploiting AI tools for malicious purposes. While the detected operations were largely disrupted in their early stages, preventing widespread impact, the sophistication and variety of tactics are alarming.

Despite the advanced tools at their disposal, the operations generally failed to achieve significant organic engagement. This suggests that while AI can enhance certain aspects of disinformation campaigns, it does not guarantee success; effective influence operations still depend heavily on human factors such as crafting compelling narratives and building authentic-seeming online communities. OpenAI’s ongoing work to identify and disrupt these operations, and its transparency in reporting them, remain important for maintaining the integrity of online information.

The increasing use of AI for propaganda and manipulation underscores the need for robust detection mechanisms and countermeasures. As AI technology evolves, so will the tactics of those seeking to exploit it. Countering AI-powered disinformation requires sustained collaboration among researchers, technology companies, and policymakers, along with a multi-faceted approach that addresses both the technology and the human actors behind these operations. OpenAI’s research offers valuable insight into this evolving landscape and the continued effort required to combat the threat.
