Social Media Hijackings Fuel AI Photo Editor Scam: Threat Actors Steal Credentials and Deploy Malware
A sophisticated malvertising campaign is exploiting the growing popularity of AI photo editing tools to steal credentials and distribute malware. Threat actors are hijacking social media pages, primarily those related to photography, renaming them to mimic legitimate AI photo editor brands, and then using paid advertisements to boost malicious posts containing links to fake websites. These websites, designed to mirror the authentic platforms, trick unsuspecting users into downloading what they believe is a photo editor but is, in fact, an endpoint management utility that grants the attackers remote control of their devices.
The attack begins with the compromise of social media accounts, typically achieved through phishing. Threat actors send spam messages containing malicious links, often disguised as personalized link pages or routed through Facebook’s open-redirect URL so they appear legitimate. These links lead to fraudulent account-protection pages that prompt users to enter their login credentials, including phone numbers, email addresses, birthdays, and passwords. Once the attackers gain access to an account, they quickly rename the page to resemble a popular AI photo editor (Evoto, in the observed cases) and begin posting malicious advertisements.
These ads, promoting the fake AI photo editor, redirect users to convincingly designed websites that closely mimic the legitimate editor’s site and lure victims into downloading a malicious installer package. The installer is actually ITarian, a legitimate endpoint management utility that the attackers have configured for their own purposes. Because the installer file contains no inherently malicious components, this abuse of a legitimate tool lets the attackers bypass initial security scans; the malicious configuration is retrieved only upon execution, further obscuring the attack.
Once installed, ITarian gives the attackers remote control of the victim’s device. They then create scheduled tasks that download and execute additional payloads, primarily the Lumma Stealer malware. This stealer exfiltrates a wide range of sensitive data, including cryptocurrency wallet files, browser data, password-manager databases, and other valuable information. The attackers also deploy a script that disables Microsoft Defender’s scanning of the C: drive, further weakening the victim’s defenses and helping the malware persist.
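On Windows, blinding Defender to an entire drive typically means a drive-root exclusion, and defenders can hunt for exactly that. The sketch below assumes the exclusion list has already been exported from an endpoint (how it is collected is outside the sketch) and simply flags entries that cover a whole drive; the `OVERLY_BROAD` set is an illustrative assumption.

```python
from pathlib import PureWindowsPath

# Exclusions this broad effectively blind the scanner to the whole
# system; any of them in a Defender exclusion list is a red flag.
OVERLY_BROAD = {"c:\\", "c:", "\\", "%systemdrive%"}

def audit_exclusions(exclusion_paths: list[str]) -> list[str]:
    """Return exclusion entries that cover an entire drive root.

    `exclusion_paths` would come from an endpoint's exported Defender
    configuration; collecting that list is outside this sketch.
    """
    flagged = []
    for raw in exclusion_paths:
        normalized = raw.strip().lower()
        path = PureWindowsPath(normalized)
        # A path equal to its own anchor (e.g. "c:\\") is a drive root.
        if normalized in OVERLY_BROAD or (path.anchor and str(path) == path.anchor):
            flagged.append(raw)
    return flagged
```

Narrow, application-specific exclusions are normal; a bare drive root almost never is, which makes this a cheap, high-signal check in fleet telemetry.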
The Lumma Stealer operation follows a distinctive communication pattern with its command-and-control server: two consecutive POST requests, the second of which returns a Base64-encoded configuration file. Once decrypted, this configuration reveals the stealer’s comprehensive list of targeted data. The campaign’s scale is significant: download statistics embedded in the malicious JavaScript reveal thousands of downloads across both Windows and macOS, although the macOS download currently appears to be a harmless redirect to apple.com.
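For an analyst, recovering that configuration starts with peeling off the outer Base64 layer. The sketch below covers only that first pass; real samples may layer additional encryption underneath (not detailed here), and the assumption that the decoded blob is JSON is illustrative, not a claim about Lumma's actual format.

```python
import base64
import json

def decode_c2_config(response_body: bytes):
    """First-pass decode of a Base64-encoded C2 configuration blob.

    Handles only the outer Base64 encoding; any further encryption
    layered beneath it would need sample-specific handling.
    """
    decoded = base64.b64decode(response_body, validate=True)
    try:
        # Many stealer configs are JSON once decoded (an assumption here).
        return json.loads(decoded)
    except (json.JSONDecodeError, UnicodeDecodeError):
        # Not JSON: return the raw bytes for further analysis.
        return decoded
```

The same two-POST pattern and the Base64-encoded response are also useful as network-detection signatures, independent of decoding the payload.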
This sophisticated campaign underscores the increasing threat posed by social media-based attacks and the ingenuity of cybercriminals in exploiting trending technologies like AI. To protect themselves, users are strongly advised to enable multi-factor authentication (MFA) on all social media accounts and employ strong, unique passwords. Regularly updating software and exercising caution when clicking on links, especially those requesting personal information or login credentials, are crucial. Monitoring social media accounts for unusual activity, such as unexpected login attempts or changes to account information, can also help detect potential compromises.
For organizations, educating employees about phishing tactics and investing in robust security solutions is essential. Endpoint protection platforms that offer multi-layered defense and behavior detection can help identify and block the malicious use of otherwise legitimate tools like ITarian before damage is done. In the context of the broader threat landscape surrounding AI, tools like deepfake detectors can provide added protection against AI-powered scams during video calls, further bolstering security against the evolving tactics employed by cybercriminals. The ongoing abuse of legitimate tools and platforms highlights the need for constant vigilance and proactive security measures. By understanding the techniques used in these attacks, both individuals and organizations can better protect themselves from falling victim to these increasingly sophisticated schemes.