PROMPTFLUX Malware Uses Google’s Gemini AI to Evade Detection, Rewriting Its Code Hourly

Google’s Threat Intelligence Group (GTIG) has identified a novel malware strain named PROMPTFLUX. This Visual Basic Script (VBScript) malware leverages Google’s Gemini artificial intelligence (AI) model to dynamically rewrite its own code, making it harder to detect.

PROMPTFLUX operates by interacting with the Gemini API, requesting specific obfuscation and evasion techniques to modify its source code in real time. This just-in-time self-modification strategy is designed to circumvent traditional static, signature-based detection methods employed by antivirus software.

A key component of PROMPTFLUX, a function GTIG calls "Thinking Robot," periodically queries the Gemini large language model (LLM), specifically the Gemini 1.5 Flash model, to obtain new code that helps it remain undetected. It uses a hard-coded API key to send precise, machine-readable prompts to the Gemini API endpoint, instructing the model to generate VBScript code optimized for antivirus evasion.
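From a defender's standpoint, the hard-coded API key and fixed endpoint described above are themselves static indicators. A minimal sketch of scanning script files for such strings follows; the endpoint host is the real Gemini API domain, while the key-format pattern and the "obfuscat" keyword are illustrative assumptions, not indicators published by GTIG:

```python
import re
from pathlib import Path

# Illustrative indicator patterns. The first is the real Gemini API host;
# the second matches the common "AIza..." Google API key format; the third
# is a hypothetical keyword that evasion-oriented prompts might contain.
INDICATORS = [
    re.compile(rb"generativelanguage\.googleapis\.com", re.I),
    re.compile(rb"AIza[0-9A-Za-z_\-]{35}"),
    re.compile(rb"obfuscat", re.I),
]

def scan_script(path: Path) -> list[str]:
    """Return the indicator patterns that match the file's raw contents."""
    data = path.read_bytes()
    return [p.pattern.decode() for p in INDICATORS if p.search(data)]
```

Matching raw bytes rather than decoded text sidesteps encoding tricks, though real tooling would also need to handle the layered string obfuscation GTIG describes.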

Once the new, obfuscated code is generated, PROMPTFLUX saves it to the Windows Startup folder, ensuring persistence on the infected system. Additionally, the malware attempts to propagate by copying itself to removable drives and mapped network shares, increasing its reach and potential impact.
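The Startup-folder persistence is another observable artifact: a script file appearing there is unusual enough to be worth flagging. A minimal sketch, with an illustrative (not exhaustive) list of script extensions a defender might treat as suspicious:

```python
from pathlib import Path

# Illustrative set of script extensions rarely legitimate in Startup.
SCRIPT_EXTS = {".vbs", ".vbe", ".js", ".jse", ".ps1", ".bat", ".cmd"}

def suspicious_startup_entries(entries):
    """Filter Startup-folder filenames down to script files."""
    return [e for e in entries if Path(e).suffix.lower() in SCRIPT_EXTS]

# On a live Windows host the entries would come from the real folder, e.g.:
#   import os
#   startup = Path(os.path.expandvars(
#       r"%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup"))
#   suspicious_startup_entries(p.name for p in startup.iterdir())
```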

Although the self-modification function, labeled as AttemptToUpdateSelf, is currently commented out in the code, its presence, along with active logging of AI responses to a temporary log file, indicates the developer’s intention to create a metamorphic script capable of evolving over time.

GTIG has observed multiple variations of PROMPTFLUX incorporating LLM-driven code regeneration. One notable version instructs the LLM to act as an expert VBScript obfuscator, rewriting the malware’s entire source code every hour.

At present, PROMPTFLUX appears to be in the development or testing phase, lacking mechanisms to compromise victim networks or devices directly. The identity of the threat actor behind this malware remains unknown. However, indications suggest a financially motivated entity employing a broad, indiscriminate approach to target a wide range of users across various geographies and industries.

Security researcher Marcus Hutchins has expressed skepticism regarding the effectiveness of such AI-driven malware. He points out that the embedded prompts assume the AI model inherently knows how to evade antivirus detection, which may not be the case. Additionally, the lack of mechanisms to ensure the self-modifying code differs from previous versions or functions correctly raises questions about the malware’s practical efficacy.

This discovery underscores a concerning trend: adversaries are increasingly utilizing AI not just for productivity gains but to develop tools capable of adjusting their behavior during execution. Furthermore, these purpose-built tools are being sold on underground forums for financial gain.

Other instances of LLM-powered malware identified by Google include:

– FRUITSHELL: A reverse shell written in PowerShell that incorporates hard-coded prompts to bypass detection or analysis by LLM-powered security systems.

– PROMPTLOCK: A cross-platform ransomware written in Go, utilizing an LLM to dynamically generate and execute malicious Lua scripts at runtime. This has been identified as a proof-of-concept.

– PROMPTSTEAL (aka LAMEHUG): A data miner employed by the Russian state-sponsored actor APT28 in attacks against Ukraine, which queries an LLM at runtime to generate the commands used to harvest documents and system information.

The emergence of AI-driven malware like PROMPTFLUX highlights the evolving landscape of cybersecurity threats. As malicious actors continue to innovate, leveraging advanced technologies to enhance their capabilities, it becomes imperative for cybersecurity professionals to stay ahead by developing robust detection and mitigation strategies.