Cybercriminals Exploit ChatGPT to Evade Antivirus Defenses

In July 2025, a sophisticated cyberattack campaign emerged, showcasing the alarming fusion of generative artificial intelligence (AI) with traditional evasion techniques. Cybercriminals leveraged OpenAI’s ChatGPT to create deepfake images of government identification cards, embedding them within spear-phishing emails designed to bypass conventional antivirus defenses.

The Attack Mechanism

The attackers, identified as the Kimsuky group, initiated the campaign by sending emails that impersonated military and security institutions. These emails contained AI-generated ID card images and urged recipients to review draft versions of these cards. Upon clicking the provided link, victims downloaded a compressed archive named `Government_ID_Draft.zip`, which contained a shortcut file (`Government_ID_Draft.lnk`) masquerading as a legitimate document.

When executed, this shortcut invoked the command prompt (`cmd.exe`) to assemble and execute malicious instructions stored within environment variables. Because the full command string never appeared as a literal in the file, this method effectively concealed the malicious intent from signature-based antivirus engines. The script then issued a series of HTTP requests to retrieve a deepfake PNG image and a batch script, the latter of which executed immediately upon download.
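To see why splitting a command across environment variables defeats signature scanning, consider this minimal, benign Python sketch (the variable names and command are hypothetical, not taken from the actual malware): no single string in the script matches the final command, which only exists assembled in memory at execution time.

```python
import os
import subprocess

# Hypothetical, benign illustration: a scanner searching the file for the
# literal string "echo staged payload" finds nothing, because the command
# is split across two environment variables.
os.environ["PART_A"] = "echo staged"
os.environ["PART_B"] = " payload"

# The complete command only exists once it is concatenated at runtime.
assembled = os.environ["PART_A"] + os.environ["PART_B"]
result = subprocess.run(assembled, shell=True, capture_output=True, text=True)
print(result.stdout.strip())  # staged payload
```

The same principle applies to the attackers' `cmd.exe` usage: static inspection of the shortcut or script reveals only fragments, forcing defenders to rely on runtime behavior rather than file content.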

Advanced Obfuscation Techniques

The batch script employed environment-variable slicing—a technique that extracts characters one at a time using expressions like `%ab901ab:~7,1%`. This approach dynamically reconstructed the commands required for payload deployment, effectively evading heuristic detection by delaying visible actions until the full command string was built.
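In `cmd.exe`, the expansion `%var:~offset,length%` extracts a substring of a variable; chaining many one-character slices rebuilds a command piecemeal. The Python sketch below replicates those semantics and adds a hypothetical detection heuristic (the variable contents, regex, and threshold are illustrative assumptions, not the campaign's actual values): a batch file containing an unusually high count of one-character slice expressions is a strong obfuscation signal.

```python
import re

def cmd_slice(value: str, offset: int, length: int) -> str:
    """Replicate cmd.exe %var:~offset,length% substring expansion."""
    return value[offset:offset + length]

# Hypothetical variable contents: %ab901ab:~7,1% picks the character at offset 7.
alphabet = "xecute.e"
print(cmd_slice(alphabet, 7, 1))  # e

# Hypothetical triage heuristic: count one-character slice expressions.
SLICE_RE = re.compile(r"%\w+:~\d+,1%")

def looks_obfuscated(batch_text: str, threshold: int = 10) -> bool:
    return len(SLICE_RE.findall(batch_text)) >= threshold

sample = "%a:~3,1%" * 12  # a script body dominated by slicing
print(looks_obfuscated(sample))  # True
```

A rule like this is cheap to run over mail-gateway attachments, and legitimate batch scripts rarely slice variables one character at a time.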

Metadata analysis of the downloaded image confirmed its AI-generated origin, identifying it as a deepfake with 98% probability. Despite its use of cutting-edge AI, the campaign also relied on classic persistence strategies: victims’ machines registered scheduled tasks under the guise of legitimate software updates, ensuring the payload ran at regular intervals.

Implications for Cybersecurity

This campaign underscores a troubling evolution in cybercriminal tactics, blending artificial intelligence with legacy evasion techniques to infiltrate sensitive networks. The combined use of generative AI assets and automated scripting created a hybrid threat that challenges conventional antivirus products.

Security teams must augment their defenses with behavioral analysis and endpoint detection and response (EDR) solutions capable of monitoring script activity and scheduled-task creation in real time.
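As a concrete sketch of what such monitoring can look for, the rule below flags process command lines that create scheduled tasks via `schtasks /create`—the persistence mechanism described above. The log lines, task name, and regex are hypothetical assumptions for illustration; a production EDR rule would also correlate the parent process and task trigger.

```python
import re

# Hypothetical triage rule: scheduled-task creation from a command line
# is a common persistence pattern worth alerting on.
TASK_CREATE = re.compile(r"\bschtasks(?:\.exe)?\b.*\s/create\b", re.IGNORECASE)

def flag_persistence(cmdline: str) -> bool:
    """Return True if the command line registers a scheduled task."""
    return bool(TASK_CREATE.search(cmdline))

# Hypothetical process-creation log entries.
events = [
    'cmd.exe /c schtasks /create /tn "SoftwareUpdate" /sc minute /mo 30 /tr payload.bat',
    "notepad.exe report.txt",
]
for line in events:
    if flag_persistence(line):
        print("ALERT:", line)
```

In practice this string match would be one signal among many; benign installers also register tasks, so context such as the parent process (`cmd.exe` spawned from a `.lnk` file) is what turns it into a high-confidence detection.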

Recommendations

To combat the misuse of AI in cyberattacks, organizations should consider the following measures:

– AI-Powered Security Solutions: Implement advanced machine learning models capable of detecting patterns indicative of malicious activity, even when traditional methods fail.

– Ethical Guidelines in AI Development: Developers must implement robust safeguards to prevent the misuse of AI technologies.

– Legislative Action: Governments should regulate the distribution and use of generative AI tools while holding developers accountable for misuse.

– Cybersecurity Awareness: Organizations must educate employees about identifying phishing attempts and other cyber threats.

The emergence of AI-driven cyberattacks like this one exemplifies how advancements in AI can be exploited for malicious purposes when ethical boundaries are removed. As cybercriminals increasingly adopt such tools, the cybersecurity community must innovate equally sophisticated defenses. The battle between malicious and defensive uses of AI will likely define the future landscape of cybersecurity.