In early 2025, cybersecurity researchers uncovered a sophisticated campaign in which threat actors created counterfeit versions of Kling AI, a popular AI image and video generation platform, to distribute malware to unsuspecting users. Since its launch in June 2024, Kling AI has attracted over 6 million users, making it a prime target for cybercriminals seeking to exploit the growing interest in AI-powered media creation tools.
The Attack Methodology
The attackers meticulously replicated Kling AI’s legitimate service, creating nearly indistinguishable user experiences that concealed their malicious intent. They established approximately 70 fake Facebook pages and promoted posts directing users to convincing spoof websites with domains like klingaimedia.com and klingaistudio.com. These advertisements appeared legitimate, employing graphics and messaging consistent with genuine AI generation services.
Upon accessing these counterfeit pages, visitors were prompted to upload images or text prompts for AI enhancement or transformation, mimicking the standard workflow of legitimate generative AI platforms. However, instead of receiving the promised AI-generated media, victims downloaded files containing sophisticated malware.
Technical Sophistication
The infection chain employed several deceptive techniques to bypass user awareness, chief among them filename masquerading. Downloaded files appeared to be standard media outputs (e.g., Generated_Image_2025_97607092.jpg) but were in fact executables. The attackers padded filenames with Hangul Filler characters (U+3164, encoded in UTF-8 as 0xE3 0x85 0xA4) to stretch them to 292 bytes, pushing the real .exe extension far enough to the right that it is typically invisible in standard file dialogs.
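To make the trick concrete, here is a minimal Python sketch that flags downloads using this padding technique. The scan directory and the set of risky extensions are illustrative assumptions, not details from the original campaign analysis.

```python
import pathlib

# U+3164 HANGUL FILLER renders as blank space; in UTF-8 it is 0xE3 0x85 0xA4.
HANGUL_FILLER = "\u3164"

# Extensions treated as executable (an illustrative assumption).
RISKY_EXTENSIONS = {".exe", ".scr", ".com", ".pif"}

def looks_masqueraded(path: pathlib.Path) -> bool:
    """Flag names that hide a risky extension behind filler padding."""
    return HANGUL_FILLER in path.name and path.suffix.lower() in RISKY_EXTENSIONS

downloads = pathlib.Path.home() / "Downloads"  # assumed scan location
for f in downloads.iterdir():
    if f.is_file() and looks_masqueraded(f):
        size = len(f.name.encode("utf-8"))
        print(f"Suspicious: {f.name!r} ({size} bytes of filename)")
```

A name padded this way can easily exceed the visible width of a file dialog, which is exactly why checking the byte length and character content of a filename, rather than its visible portion, catches the trick.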
This technique proved particularly effective because Windows Explorer displayed the files with image icons even though their type was listed as Application. Once executed, the malware performed environment checks to evade analysis tools, and one variant was built with .NET Native AOT compilation to further complicate detection.
Global Reach and Impact
Check Point researchers identified this campaign after tracking unusual malware delivery patterns beginning in early 2025. Their analysis revealed victims across multiple regions, with particularly high concentrations in Asia, pointing to a well-resourced threat actor with infrastructure capable of targeting users across diverse geographic regions.
Broader Implications
This incident underscores a growing trend in which cybercriminals exploit the popularity of AI tools to distribute malware, and similar tactics have been observed around other AI platforms. In April 2025, a Facebook malvertising campaign targeted users searching for AI image editing tools, directing them to malicious websites that closely resembled legitimate services and tricking them into installing the Lumma Stealer malware, which exfiltrated sensitive information such as credentials, cryptocurrency wallet files, browser data, and password manager databases. ([bleepingcomputer.com](https://www.bleepingcomputer.com/news/security/fake-ai-editor-ads-on-facebook-push-password-stealing-malware/))
Additionally, in May 2023, Meta's security team reported that hackers were exploiting interest in ChatGPT by distributing browser extensions and apps in online stores that claimed to offer ChatGPT-based tools but contained malware designed to give the attackers access to victims' devices. ([cnn.com](https://www.cnn.com/2023/05/03/tech/chatgpt-hackers-meta/index.html))
Protective Measures
To safeguard against such sophisticated attacks, users should adopt the following practices:
1. Verify Authenticity: Always access AI tools and services through their official websites. Be cautious of links provided in advertisements or social media posts.
2. Scrutinize Downloads: Before opening any downloaded file, check that its extension matches the expected format, and be wary of files with unusually long names or unfamiliar characters (see the sketch after this list).
3. Maintain Updated Security Software: Regularly update antivirus and anti-malware programs to detect and prevent the latest threats.
4. Educate Yourself and Others: Stay informed about common cyberattack methods and share this knowledge with peers to foster a more secure online environment.
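As a concrete companion to point 2, the following is a minimal Python sketch that compares a file's leading magic bytes against the format its name claims. The signature table is a simplified assumption covering only JPEG and PNG, and the sample filename reuses the example from the campaign.

```python
import pathlib

PE_MAGIC = b"MZ"  # Windows executables (PE files) begin with these two bytes
IMAGE_MAGIC = {   # simplified signature table (assumption: JPEG and PNG only)
    ".jpg": b"\xff\xd8\xff",
    ".jpeg": b"\xff\xd8\xff",
    ".png": b"\x89PNG\r\n\x1a\n",
}

def verify_download(path: pathlib.Path) -> str:
    """Compare a file's header bytes with the format its name claims."""
    with path.open("rb") as fh:
        header = fh.read(8)
    if header.startswith(PE_MAGIC):
        return "DANGER: this is a Windows executable, whatever its name says"
    expected = IMAGE_MAGIC.get(path.suffix.lower())
    if expected and not header.startswith(expected):
        return "WARNING: extension does not match the file's contents"
    return "OK: header matches the claimed format"

sample = pathlib.Path("Generated_Image_2025_97607092.jpg")  # hypothetical path
if sample.exists():
    print(verify_download(sample))
```

Because a masqueraded file from this campaign would begin with the PE header rather than a JPEG signature, a check like this catches the deception even when the filename and icon look entirely benign.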
By remaining vigilant and adopting these protective measures, users can better defend themselves against the evolving tactics of cybercriminals exploiting the popularity of AI tools.