Iranian Influence Campaign Resurfaces on Israeli Social Media with Enhanced Tactics
A sophisticated foreign network, suspected of operating on behalf of Iran, has reemerged on Israeli social media platforms after a brief hiatus, employing more advanced tactics to sow discord and manipulate public opinion. Initially exposed by Ynet and flagged by the disinformation watchdog FakeReporter, the network uses meticulously crafted fake profiles designed to evade detection, amplifying divisive narratives, inciting hatred, and spreading demoralization within Israeli society. Its activities range from fueling anti-ultra-Orthodox sentiment to encouraging violent street protests and even promoting refusal to enlist in the Israel Defense Forces (IDF).
This resurgence marks a concerning escalation in the network’s operations. Previously, FakeReporter identified approximately 60 profiles, primarily on Facebook, responsible for over 18,000 posts. These accounts, operating as coordinated bots, employed fabricated identities to blend seamlessly into online communities. After a period of inactivity, during which some profiles disappeared, they have returned under new aliases and refined personas, further complicating efforts to identify and counter their influence. This tactic of shedding compromised identities and adopting new ones allows the network to maintain its established connections and continue spreading its message while avoiding the repercussions of exposure.
The network’s deceptive practices involve the use of stolen and fabricated information to create convincing backstories for these fake profiles. Examples include “Daniel Oz,” who reappeared as “David Abraham,” and “Aliza Ariel,” previously noted for her suspicious tagline, who returned under the same name. Another profile, “Shira Levi,” now claims to be a doctor. Other suspect accounts include “Avraham Moshe,” “Keren Ovadya,” and “Maya Lee,” who changed her name to “Maya Lipschitz” after being linked to other fake profiles; her profile picture was found to belong to a Canadian high school student. The constantly shifting identities and fabricated details make it increasingly difficult for users to distinguish genuine accounts from malicious actors.
One particularly alarming tactic involves directing unsuspecting users to external platforms rife with propaganda. The profile “Rebecca Elia,” which frequently changes its associated photographs, directs users to a Telegram group called “Patriotic Israelis.” This group, created by a fake profile, disseminates Iranian propaganda and inflammatory content alongside authentic posts from unwitting Israelis who have been lured into it. By mixing genuine discourse with malicious propaganda, this strategy blurs the lines and increases the likelihood that the propaganda will be accepted as legitimate viewpoints.
The operators of this network demonstrate a clear understanding of social media dynamics and employ sophisticated techniques to maximize their impact. According to FakeReporter, the temporary deactivation or locking of profiles, coupled with the deletion of identifying photos and their replacement with new identities, serves a strategic purpose. These profiles, having established a fabricated history and amassed connections with real Israeli users, represent valuable assets for the Iranian operators. By retaining previous posts and connections while changing profile details, the network preserves its established reach and can continue seamlessly disseminating hostile and inflammatory messages that advance its agenda.
Ahiya Schatz, CEO of FakeReporter, emphasizes the gravity of this situation, highlighting the successful infiltration of Israeli online communities by foreign elements engaged in espionage, incitement, and propaganda dissemination. He criticizes Meta, Facebook’s parent company, for its inadequate response to this issue, noting the lack of user alerts regarding these significant profile changes and the absence of tools to protect against such manipulation. Schatz contrasts this with the stricter regulations in the European Union, where Facebook would be obligated to inform the public about such activities. He argues that both the Israeli government and Meta have the capacity to combat this phenomenon but have chosen not to, leaving the Israeli public vulnerable to online manipulation.
The use of artificial intelligence (AI) further enhances the effectiveness of these fake profiles. AI enables the bots to communicate in fluent Hebrew, process profile pictures to obscure their origins, and create compelling graphics and distorted images that amplify the impact of their posts. These accounts also disseminate incitement and demoralization videos produced by organizations like Hamas and Hezbollah, often timed to coincide with sensitive events such as attacks attributed to Iran or internal political crises, exacerbating existing tensions within Israeli society. The combination of AI-generated content, targeted messaging, and strategic timing amplifies the network’s potential to disrupt and manipulate public discourse. Meta had not responded as of the time of publication, further underscoring the challenges in addressing this issue. The growing sophistication of these influence campaigns demands a proactive, coordinated response from both social media platforms and government entities to protect online communities and safeguard democratic processes.