As AI capabilities in cybersecurity accelerate, OpenAI is expanding its Trusted Access for Cyber (TAC) program, opening advanced defensive tools to a broader pool of verified security professionals while introducing a more permissive model tailored for cyber use cases.
At the center of the update is GPT-5.4-Cyber, a specialized version of its latest model designed to assist with advanced defensive workflows such as vulnerability analysis and binary reverse engineering. For vetted users, the model relaxes certain safety restrictions, enabling deeper investigation of software security risks while keeping access controlled.
Scaling Access Without Losing Control
The TAC program, first introduced earlier this year, is now being expanded to include thousands of individual defenders and hundreds of organizations responsible for securing critical infrastructure. Access is tiered, with higher levels granted to users who undergo stricter identity verification and demonstrate legitimate cybersecurity use cases.
Rather than broadly releasing its most capable systems, OpenAI is taking a structured approach, balancing accessibility with safeguards. The company emphasizes automated verification, trust signals, and usage monitoring to determine who can access more powerful capabilities, avoiding fully centralized or arbitrary gatekeeping.
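The tiered model described above can be pictured as a simple gating function that maps trust signals to an access level. The sketch below is purely illustrative; the tier names, signals, and thresholds are hypothetical assumptions, not OpenAI's actual verification logic.

```python
from dataclasses import dataclass
from enum import IntEnum

class AccessTier(IntEnum):
    NONE = 0       # no access to the permissive model
    STANDARD = 1   # baseline access after identity verification
    ELEVATED = 2   # full defensive toolset for demonstrated use cases

@dataclass
class Applicant:
    identity_verified: bool       # passed identity verification
    use_case_demonstrated: bool   # legitimate cybersecurity use case on file
    usage_flags: int = 0          # anomalies raised by usage monitoring

def assign_tier(applicant: Applicant) -> AccessTier:
    """Stricter checks unlock more capability; monitoring can hold a user back."""
    if not applicant.identity_verified:
        return AccessTier.NONE
    if applicant.use_case_demonstrated and applicant.usage_flags == 0:
        return AccessTier.ELEVATED
    return AccessTier.STANDARD
```

For example, a verified researcher with a demonstrated use case and a clean monitoring record would land in the top tier, while a verified user with flagged activity would be held at the baseline tier until review.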
This reflects a broader industry trend. Competitors like Anthropic have similarly restricted access to advanced models such as Claude Mythos, highlighting growing concerns over the dual-use nature of AI systems that can both defend and attack digital infrastructure.
AI Is Accelerating Both Sides of Cybersecurity
OpenAI’s strategy acknowledges a core reality: AI is already being used by both defenders and attackers. Existing models can analyze codebases, identify vulnerabilities, and assist in exploit development, lowering the barrier to entry for cyber operations.
To counterbalance this, OpenAI has been investing in defensive tooling, including Codex Security, which has already contributed to fixing thousands of vulnerabilities across software ecosystems. The company is also backing its efforts with initiatives like a $10 million cybersecurity grant program and support for open-source projects.
The introduction of GPT-5.4-Cyber builds on this trajectory, aiming to enhance defenders' ability to detect and remediate risks faster, especially as agentic coding systems and AI-assisted development expand the overall attack surface.
A Shift Toward Continuous Security
A key theme in OpenAI’s approach is moving away from periodic security audits toward continuous, AI-assisted defense. By embedding advanced models directly into development workflows, the goal is to identify and fix vulnerabilities in real time as software is written.
This shift mirrors broader changes across the industry, where AI is increasingly integrated into coding, infrastructure management, and enterprise automation, often creating new vulnerabilities alongside productivity gains.
Controlled Deployment for High-Risk Capabilities
Because GPT-5.4-Cyber is more permissive than standard models, its rollout is intentionally limited. Initial access is restricted to vetted security vendors, researchers, and organizations, with additional controls applied in environments where OpenAI has less visibility into usage.
The company argues that this iterative deployment model, gradually expanding access while refining safeguards, is essential as AI capabilities continue to evolve rapidly.
The Bigger Picture
OpenAI’s expansion of TAC signals a shift in how frontier AI systems are being deployed. Rather than mass releases, the most powerful capabilities are increasingly distributed through controlled programs aimed at trusted users.
As cybersecurity risks grow alongside AI capabilities, the balance between openness and restriction is becoming a defining challenge for the industry. OpenAI’s approach suggests that future models, especially those with strong cyber capabilities, will be rolled out not as general-purpose tools, but as specialized systems governed by access, verification, and ongoing oversight.