DISA
Disinformation
OpenAI Dismantles Clandestine Chinese-Linked Operations

By Press Room, June 6, 2025

Chinese Propagandists Leveraging ChatGPT for Social Media Manipulation and Internal Reporting

OpenAI, the leading artificial intelligence research company, has disclosed a concerning trend: Chinese propagandists are increasingly using ChatGPT, its powerful AI chatbot, to craft social media posts, comments, and even internal performance reviews detailing their influence operations. The finding highlights China’s escalating efforts to shape online narratives and conduct surveillance on a global scale. OpenAI researchers found that these operations are not limited to public-facing propaganda; they also extend to internal documentation and marketing materials, reflecting a sophisticated approach to disinformation campaigns.

Over the past three months, OpenAI has disrupted ten malicious operations involving its AI tools and banned the associated accounts. Four of these operations were linked to China and targeted a range of countries, topics, and even online gaming communities. They combined influence tactics, social engineering, and surveillance across various platforms and websites, showing a coordinated, multi-faceted approach. The discovery of ChatGPT-generated internal performance reviews underscores how deeply these groups have integrated AI into their workflows, using it to document their activities and assess their effectiveness.

One notable operation, dubbed "Sneer Review" by OpenAI, utilized ChatGPT to generate comments in English, Chinese, and Urdu on platforms like TikTok, X (formerly Twitter), Reddit, and Facebook. These comments covered a range of topics, including the Trump administration’s policies and a Taiwanese game critical of the Chinese Communist Party. The operation often generated both initial posts and replies, creating an illusion of organic engagement. Furthermore, it leveraged ChatGPT to fabricate critical comments about the game and subsequently write an article falsely claiming widespread backlash.

Beyond generating external propaganda, "Sneer Review" also used ChatGPT for internal purposes, including creating performance reviews detailing the operation’s setup and execution. This internal use of AI highlights the increasing reliance on these tools for both external manipulation and internal management. Another China-linked operation impersonated journalists and geopolitical analysts, employing ChatGPT to craft posts, biographies, translate messages, and analyze data. This operation even targeted correspondence sent to a US Senator regarding an official’s nomination, although OpenAI could not independently verify the authenticity of the communication.

These operations also used ChatGPT to generate marketing materials promoting their services, openly advertising their ability to run fake social media campaigns and carry out social engineering for intelligence gathering. This brazen self-promotion suggests growing confidence in their ability to manipulate online discourse. Previous OpenAI reports have identified other China-linked operations, including one focused on monitoring Western protests for the Chinese security services, showcasing the diverse range of tactics employed.

OpenAI’s report also revealed disruptions of influence campaigns linked to Russia, Iran, a Philippines-based commercial marketing company, a Cambodian recruitment scam, and a deceptive employment campaign with North Korean characteristics. This highlights the global nature of the challenge and the diverse actors exploiting AI tools for malicious purposes. While the detected operations were largely disrupted in their early stages, preventing widespread impact, the sophistication and variety of tactics are alarming.

Despite the advanced tools at their disposal, the operations generally failed to achieve significant organic engagement. This suggests that while AI can enhance certain aspects of disinformation campaigns, it does not guarantee success: effective influence operations still depend heavily on human factors, such as crafting compelling narratives and building authentic online communities. OpenAI’s ongoing work to identify and disrupt these operations, and its transparency in reporting them, remain crucial to mitigating the potential harms of AI-powered disinformation.

The increasing use of AI for propaganda and manipulation underscores the urgent need for robust detection mechanisms and countermeasures. As AI technology evolves, so will the tactics of those seeking to exploit it. Sustained vigilance and collaboration among researchers, tech companies, and policymakers are essential to guard against AI-powered disinformation campaigns, which demand a multi-faceted response addressing both the technological advances and the human actors behind these operations. OpenAI’s research provides valuable insight into this evolving threat landscape.
