OpenAI has launched a new AI model called GPT-5.4-Cyber.
It’s aimed at helping cybersecurity experts handle real-world threats more effectively.
Along with this, the company is expanding its Trusted Access for Cyber (TAC) programme, allowing more verified users to get involved.
The model focuses on practical security tasks. It can help analyse compiled software to find bugs, weaknesses, or hidden malware, even when the original source code isn’t available. For many security teams, that’s a big advantage.
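To make that workflow concrete, here is a minimal sketch of how a defender with access might run a basic binary-triage pass: disassemble a compiled file locally, then ask the model to review the output. This is illustrative only, not an official OpenAI workflow; the API identifier `gpt-5.4-cyber` is an assumption (OpenAI has not published the model’s API name), and access in practice is gated through the TAC programme.

```python
import subprocess
from openai import OpenAI  # official OpenAI Python SDK

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Disassemble a compiled binary with objdump; no source code required.
disassembly = subprocess.run(
    ["objdump", "-d", "suspect_binary"],
    capture_output=True, text=True, check=True,
).stdout[:20_000]  # truncate so the prompt stays within context limits

# NOTE: "gpt-5.4-cyber" is a hypothetical API identifier used for
# illustration; real access is gated via Trusted Access for Cyber (TAC).
response = client.chat.completions.create(
    model="gpt-5.4-cyber",
    messages=[
        {
            "role": "system",
            "content": "You assist a defensive security team with binary triage.",
        },
        {
            "role": "user",
            "content": (
                "Review this disassembly for signs of malware, "
                "memory-safety bugs, or suspicious behaviour:\n\n" + disassembly
            ),
        },
    ],
)
print(response.choices[0].message.content)
```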
OpenAI says it has made the system easier to use for genuine cybersecurity work by reducing some restrictions. At the same time, access is still controlled and given only to trusted users.
Through the expanded TAC programme, thousands of individual defenders and hundreds of teams working on critical systems are expected to gain access. Entry depends on identity checks and trust levels.
Those in higher access tiers will be able to use GPT-5.4-Cyber, which is described as a more advanced and less restricted version of its base model, tailored for cybersecurity needs.
The company says its approach is simple: expand access carefully while keeping safeguards in place. Individuals can verify themselves directly, while organisations can apply through official channels. Once approved, users can work with fewer limitations on sensitive security tasks.
For now, the model will only be available to a limited group, including trusted organisations, researchers, and vendors. OpenAI also noted that some limits may still apply, especially in cases where system usage is harder to track.
This move builds on earlier updates in GPT-5.2, GPT-5.3-Codex, and GPT-5.4, where the company added more security-focused improvements. It has also backed a $10 million cybersecurity grant programme and introduced tools like Codex Security to help find and fix vulnerabilities.
According to OpenAI, these tools have already helped resolve thousands of serious issues.
The company also pointed out that cyber risks are already growing fast. As AI systems become more powerful, the need for stronger safeguards will only increase.
The timing is notable. Just recently, Anthropic introduced its Claude Mythos model, which drew attention for its ability to find, and even exploit, software vulnerabilities.