OpenAI Terminates ChatGPT Accounts Associated with State-Sponsored Cyberattacks and Disinformation Campaigns

By Press Room · June 6, 2025

State-Sponsored Actors Leveraging ChatGPT for Malicious Purposes: A Deep Dive into OpenAI’s Latest Findings

OpenAI’s recent report unveils a concerning trend of state-backed threat actors exploiting ChatGPT for illicit activities, ranging from refining malware to orchestrating disinformation campaigns and perpetrating employment scams. The report highlights the involvement of actors linked to China, Russia, North Korea, Iran, and the Philippines, as well as suspected criminal groups in Cambodia. These malicious uses of ChatGPT fall into three primary categories: social media manipulation, malware development and cyberattack assistance, and foreign employment scams, with a significant portion attributed to China-based actors.

Social Media Manipulation: A Tool for Spreading Disinformation and Influencing Public Opinion

OpenAI identified numerous accounts, predominantly linked to China, engaging in coordinated social media manipulation campaigns. These accounts utilized ChatGPT to generate comments and posts in multiple languages, including English, Chinese, and Urdu, targeting platforms such as TikTok, X (formerly Twitter), Reddit, Facebook, and others. The content often revolved around divisive political topics, including criticism of US foreign policy, Taiwan, and Pakistani activists critical of China’s investments. The campaigns frequently employed a tactic of creating initial comments and then using separate accounts to reply, creating an illusion of organic engagement. While these efforts largely failed to gain significant traction, they highlight the potential for abuse of AI language models in influencing online discourse. Russian and Iranian actors also engaged in similar activities, with the former targeting the German federal elections and criticizing NATO and the US, and the latter focusing on various geopolitical themes. Additionally, accounts based in the Philippines were found promoting the policies of President Bongbong Marcos.
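The seed-and-reply tactic described above leaves a measurable footprint: the same small set of accounts replying to one another's seed comments across many unrelated threads, which organic engagement rarely produces. The following is a minimal, hypothetical sketch of that heuristic; the log format, field names, and threshold are illustrative assumptions, not OpenAI's actual detection method.

```python
from collections import defaultdict

def flag_coordinated_replies(interactions, min_threads=3):
    """Flag account pairs that repeatedly reply to each other's seed
    comments across distinct threads -- a simple heuristic for the
    seed-and-reply pattern described in the report.

    `interactions` is a list of (replier, seed_author, thread_id)
    tuples (a hypothetical interaction-log format for illustration).
    """
    threads_per_pair = defaultdict(set)
    for replier, seed_author, thread_id in interactions:
        if replier != seed_author:
            threads_per_pair[(replier, seed_author)].add(thread_id)
    # The same pair interacting across many unrelated threads suggests
    # coordination rather than organic engagement.
    return {pair for pair, threads in threads_per_pair.items()
            if len(threads) >= min_threads}
```

In practice, a real pipeline would combine a signal like this with posting-time correlation and content similarity, since any single heuristic is easy to evade.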

Malware Refinement and Cyberattack Assistance: Enhancing Malicious Capabilities with AI

Beyond social media manipulation, state-sponsored hacking groups, including APT5 and APT15 (Keyhole Panda and Vixen Panda, respectively), were found to be leveraging ChatGPT for malware development and cyberattack assistance. These actors used the platform to generate code for brute-forcing passwords, scanning servers, conducting AI-driven penetration testing, and automating social media operations. They sought information on US defense infrastructure, satellite communications, and government technology, demonstrating a clear intent to gather sensitive information for potential future attacks. While OpenAI asserts that access to ChatGPT did not provide these actors with novel capabilities, the platform’s assistance in automating and streamlining tasks could accelerate their malicious operations. Russian actors were also observed using ChatGPT to develop and refine Windows malware. This malware, dubbed “ScopeCreep,” was reportedly used to infect video game players, enabling hackers to escalate privileges, steal credentials, and communicate through Telegram. OpenAI’s swift action in banning associated accounts and collaborating with industry partners helped mitigate the threat.

Employment Scams: Leveraging ChatGPT for Deception and Exploitation

North Korean threat actors employed ChatGPT extensively to generate fake resumes and personas for applying to jobs, as part of a broader IT worker scheme. They also used the platform to research remote-work setups and techniques for circumventing corporate security measures, aiming to establish a persistent, undetected remote presence within targeted organizations. This activity suggests a sophisticated operation leveraging AI to create convincing fake identities and gain access to sensitive information. Furthermore, accounts originating from Cambodia were identified generating recruitment messages in multiple languages, offering high salaries for trivial tasks. This activity aligns with the known cyber scam operations prevalent in Cambodia, where individuals are often trafficked and forced to participate in online fraud schemes. The use of ChatGPT to create multilingual recruitment messages demonstrates the adaptability of these criminal enterprises and their willingness to exploit new technologies for their illicit purposes.

OpenAI’s Response and Mitigation Efforts

OpenAI has taken proactive steps to mitigate these threats, banning associated accounts and sharing relevant indicators with industry partners. The company’s investigations have provided valuable insight into the operational workflows of these threat actors, shedding light on their tool development, research methods, and infrastructure profiling. While OpenAI maintains that access to its models did not grant these actors novel capabilities, the identified malicious uses underscore the need for ongoing monitoring and enhanced safeguards to prevent the misuse of AI language models.

The Broader Implications and Future Challenges

The findings presented in OpenAI’s report highlight the evolving landscape of cyber threats and the increasing adoption of advanced technologies like AI by malicious actors. As AI language models become more sophisticated and accessible, the potential for misuse will only grow, necessitating a collaborative effort between AI developers, cybersecurity experts, and policymakers to develop robust safeguards and mitigation strategies. The ability of these models to generate realistic text, translate languages, and automate tasks makes them powerful tools for deception and manipulation, posing significant challenges for online security and information integrity. Continued research and development of detection methods and preventative measures are crucial to staying ahead of these evolving threats.

The Need for Vigilance and Collaboration

The incidents detailed in OpenAI’s report serve as a stark reminder of the potential for misuse of AI technology. While the platform’s developers have taken steps to address these threats, the ongoing evolution of AI-powered tools necessitates continuous vigilance and proactive collaboration between tech companies, security researchers, and government agencies. This collaborative approach is crucial to developing effective strategies for detecting and mitigating future threats, ensuring the responsible development and deployment of AI technology. The ongoing cat-and-mouse game between malicious actors and security professionals underscores the need for a dynamic and adaptive approach to cybersecurity in the age of AI. The development of ethical guidelines and robust security protocols is paramount to preventing the misuse of AI and safeguarding against the potential harms it poses.

© 2025 DISA. All Rights Reserved.