Disinformation Bot Networks Disrupted by Internet Outages During Iran-Israel Conflict: Report

By Press Room | July 14, 2025

Unveiling a Covert Cyber Army: Iran’s Disinformation Network Disrupted by Israeli Strikes

In a surprising twist, Israeli airstrikes targeting Iran in June 2025 not only damaged physical infrastructure but also exposed a covert online operation aimed at manipulating British political discourse. Cyabra, a disinformation detection firm, uncovered a network of approximately 1,300 bot accounts masquerading as British citizens and actively engaging in online discussions about Scottish independence, Brexit, and alleged institutional collapse. The network, operational since May, abruptly went silent for 16 days following the Israeli strikes, providing a rare opportunity to analyze its mechanics and impact.

The network’s sudden disappearance coincided with disruptions to Iranian communication infrastructure, strongly suggesting a direct link between the two events. Before vanishing, the bots had already reached over 200 million people through more than 3,000 posts, amplifying divisive narratives and attempting to influence public opinion. The accounts employed AI-generated personas, mimicking authentic user behavior by retweeting, liking, and replying in staggered waves to avoid detection. This sophisticated approach allowed them to blend seamlessly into genuine political conversations, amplifying pre-existing tensions within British society.

Upon reactivation after the 16-day hiatus, the network's tone shifted dramatically: abandoning its earlier focus on domestic British issues, the bots began disseminating pro-Iranian propaganda and deriding Western leaders. This abrupt change further solidified the connection between the network's operations and Iranian state control, offering compelling evidence of state-sponsored online interference. Cyabra CEO Dan Brahmy described the incident as akin to "watching state-backed disinformation self-destruct in real time," revealing the strategy, propaganda, and massive reach of the Iranian campaign.

The scale of the operation was startling. Cyabra’s analysis indicated that approximately 26% of the accounts involved in Scottish independence discussions on X (formerly Twitter) were fake – a figure significantly higher than the platform’s norm. The bots operated as a self-reinforcing cluster, boosting each other’s posts to create an illusion of grassroots consensus and manipulate online algorithms. This tactic aimed to amplify polarizing messages, exacerbating existing divisions within British political discourse while simultaneously presenting Iran as a beacon of unity and resistance.
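The report does not disclose how Cyabra measures this kind of self-reinforcing amplification, but one commonly cited coordination signal is mutual boosting: ordinary users rarely form tight loops in which accounts systematically retweet each other back. The sketch below is illustrative only (the account names and the `reciprocity_scores` function are invented for this example, not taken from the report):

```python
from collections import defaultdict

def reciprocity_scores(retweets):
    """retweets: list of (booster, author) pairs.
    For each account, return the fraction of its boosts that were
    reciprocated, i.e. the boosted author also boosted it back.
    A cluster of accounts with scores near 1.0 is one heuristic
    signal of coordinated, self-reinforcing amplification."""
    boosted = defaultdict(set)  # account -> set of authors it boosted
    for booster, author in retweets:
        boosted[booster].add(author)
    scores = {}
    for acct, targets in boosted.items():
        mutual = sum(1 for t in targets if acct in boosted.get(t, set()))
        scores[acct] = mutual / len(targets)
    return scores

# Toy data: b1-b3 boost one another in a closed loop; u1 boosts a bot
# once and is never boosted back (typical of an organic account).
edges = [("b1", "b2"), ("b2", "b1"), ("b1", "b3"), ("b3", "b1"),
         ("b2", "b3"), ("b3", "b2"), ("u1", "b1")]
scores = reciprocity_scores(edges)
# b1, b2, b3 score 1.0; u1 scores 0.0
```

In practice a detection pipeline would combine a signal like this with timing, content, and account-age features rather than rely on any single metric.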

Cyabra’s investigation revealed various tactics employed by the network. Many accounts recycled existing content, utilized identical phrasing, and engaged in coordinated bursts of activity using hashtags like #FreeScotland, #BrexitBetrayal, and #ScottishIndependence. This strategic deployment of hashtags allowed them to inject state-aligned messaging into organic online conversations. By mimicking authentic user behavior, the bots successfully evaded initial detection. However, the 16-day blackout, coupled with the subsequent shift in messaging, provided conclusive evidence of centralized command and control.
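The recycled content and coordinated activity bursts described above are detectable in principle because copy-paste amplification leaves a distinctive fingerprint: many distinct accounts posting identical text within minutes of each other. The following minimal sketch of that heuristic is an assumption-laden illustration, not Cyabra's method; the thresholds, account names, and `coordinated_bursts` function are hypothetical:

```python
from collections import defaultdict
from datetime import datetime, timedelta

def coordinated_bursts(posts, window_s=300, min_accounts=3):
    """posts: list of (account, timestamp, text).
    Flag any text posted verbatim by >= min_accounts distinct
    accounts within a window_s-second burst."""
    by_text = defaultdict(list)
    for acct, ts, text in posts:
        by_text[text.strip().lower()].append((ts, acct))
    flagged = {}
    for text, events in by_text.items():
        events.sort()  # chronological order
        for i in range(len(events)):
            accts = {a for ts, a in events[i:]
                     if (ts - events[i][0]).total_seconds() <= window_s}
            if len(accts) >= min_accounts:
                flagged[text] = sorted(accts)
                break
    return flagged

# Toy data: three accounts push the same hashtag text within two
# minutes; a fourth posts something original and is not flagged.
t0 = datetime(2025, 6, 1, 12, 0)
posts = [("b1", t0, "#FreeScotland now!"),
         ("b2", t0 + timedelta(seconds=60), "#FreeScotland now!"),
         ("b3", t0 + timedelta(seconds=120), "#FreeScotland now!"),
         ("u1", t0, "thoughts on independence")]
flagged = coordinated_bursts(posts)
# flagged == {"#freescotland now!": ["b1", "b2", "b3"]}
```

Real networks often vary wording slightly to evade exact-match checks, so production systems typically use near-duplicate (fuzzy) matching rather than the verbatim comparison shown here.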

The post-blackout content took on a distinctly geopolitical tone, openly promoting Iranian interests and attacking Western entities. One account shared a cartoon depicting Israelis as rats fleeing an Iranian eagle, linking Iranian “national unity” to the pursuit of Scottish independence from the “outdated British monarchy.” Another post urged Scotland to emulate Iran’s supposed triumph over “two nuclear superpowers” to achieve independence. A third post featured an inflammatory image mocking Israel’s Iron Dome defense system. These blatant displays of pro-Iranian sentiment contrasted sharply with the network’s previous attempts to blend into British online discourse.

Military officials have suggested that the bot operation might be part of a broader collaborative effort involving Russia, a nation well-versed in digital influence warfare. Colonel Philip Ingram, a former British military intelligence officer, noted similarities between the Iranian network’s tactics and those typically associated with Russian disinformation campaigns. He warned of a “huge” threat posed by such joint operations, highlighting potential connections to other geopolitical events like the Hamas attack on Israel. This theory aligns with previous instances of cooperation between Iran and Russia in the information sphere, notably in the aftermath of the Hamas attack.

The discovery of the Iranian bot network highlights the growing threat of state-sponsored online manipulation. Cyabra, a for-profit company with connections to prominent figures like Mike Pompeo and Elon Musk, has played a significant role in exposing such operations. Their analysis of social media activity following the Hamas attack revealed that a substantial portion of pro-Hamas and anti-Israel content originated from fake accounts, demonstrating the widespread use of disinformation tactics to shape public perception and exacerbate geopolitical tensions. This incident underscores the importance of robust disinformation detection and mitigation efforts to safeguard the integrity of online information and democratic processes.
