OpenAI has unveiled GPT-5.4-Cyber, a specialized variant of its flagship GPT-5.4 model built for defensive cybersecurity operations. The model gives verified security professionals enhanced capabilities, including binary reverse engineering, vulnerability scanning, and malware analysis, with fewer restrictions than standard models.
GPT-5.4-Cyber is specifically trained to lower the refusal boundary for legitimate cybersecurity tasks. It lets security experts examine compiled software for malware, identify vulnerabilities, and assess security robustness without access to a target's source code. This binary reverse-engineering capability is a significant advance, giving defenders a tool to inspect software at the machine-code level, a task previously limited to specialized analysts and threat hunters.
OpenAI has classified GPT-5.4 as possessing High cyber capability under its Preparedness Framework, reflecting the model's elevated potential for dual-use risk. The GPT-5.4-Cyber variant relaxes these guardrails for authenticated defenders operating in controlled, verified environments.
Expanding the Trusted Access for Cyber Program
In conjunction with the model’s launch, OpenAI is significantly scaling its Trusted Access for Cyber (TAC) program. This expansion aims to include thousands of verified individual defenders and hundreds of teams responsible for safeguarding critical software.
Introduced in February 2026, the TAC program now features additional tiers of access, with higher levels of verification unlocking progressively more powerful capabilities. Customers approved for the highest TAC tier gain access to GPT-5.4-Cyber, supporting advanced defensive workflows such as vulnerability research, exploit analysis, and agentic security automation.
Individual users can verify their identity at chatgpt.com/cyber, while enterprise teams can request access through their OpenAI representative. Due to its more permissive design, initial deployment is deliberately limited to vetted security vendors, organizations, and researchers. OpenAI notes that access to permissive models may come with restrictions, particularly in Zero-Data Retention (ZDR) environments where the company has less direct visibility into user intent.
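OpenAI has not published how TAC tiers map to capabilities, but the gating pattern the article describes, where higher verification levels unlock progressively more powerful features, can be sketched as follows. The tier names and capability strings below are illustrative assumptions, not OpenAI's actual scheme.

```python
from enum import IntEnum

class TacTier(IntEnum):
    """Illustrative TAC verification tiers (names are assumptions)."""
    UNVERIFIED = 0
    INDIVIDUAL = 1      # identity-verified individual defender
    ORGANIZATION = 2    # vetted security vendor, org, or research team

# Hypothetical capability unlocks per tier; the real mapping is not public.
TIER_CAPABILITIES = {
    TacTier.UNVERIFIED: {"standard_models"},
    TacTier.INDIVIDUAL: {"standard_models", "vulnerability_scanning"},
    TacTier.ORGANIZATION: {"standard_models", "vulnerability_scanning",
                           "gpt-5.4-cyber", "binary_reverse_engineering"},
}

def allowed(tier: TacTier, capability: str) -> bool:
    """Return True if the given verification tier unlocks the capability."""
    return capability in TIER_CAPABILITIES[tier]
```

The point of the pattern is that authorization is a pure function of verified identity signals, which is what allows access decisions to be automated rather than manually gated.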
Codex Security and the Broader Defensive Ecosystem
The launch of GPT-5.4-Cyber is part of a broader cybersecurity strategy that OpenAI describes as scaling cyber defense in tandem with increasing model capabilities.
A core pillar of this strategy is Codex Security, which automatically monitors codebases, validates issues, and proposes fixes. Since its recent research preview launch, Codex Security has contributed to fixing over 3,000 critical and high-severity vulnerabilities across the ecosystem, in addition to numerous lower-severity findings.
OpenAI also highlighted that capture-the-flag (CTF) benchmark performance across its models improved from 27% on GPT-5 in August 2025 to significantly higher scores with current-generation models, demonstrating rapid capability growth in offensive and defensive cyber tasks.
This move comes one week after rival Anthropic released Claude Mythos to the cybersecurity industry, signaling an escalating AI arms race focused on security-specific model variants.
OpenAI’s TAC program distinguishes itself by emphasizing democratized access, using robust KYC and automated identity verification to expand access based on objective trust signals rather than manual gating decisions.
The company maintains that its safeguards—including account-level monitoring, asynchronous content classifiers, and tiered verification—are sufficient to reduce cyber misuse risk while enabling legitimate defenders to operate at scale.
OpenAI also cautioned that future, more capable models may require additional safeguards and stricter verification to ensure responsible use.