OpenAI Introduces GPT-5.4-Cyber, Expands Trusted Access Program

OpenAI is scaling its Trusted Access for Cyber program and introducing GPT-5.4-Cyber to support vetted defenders as AI-driven security risks accelerate.

By Marcus Lee, edited by Maria Konash
OpenAI expands Trusted Access and launches GPT-5.4-Cyber to support defenders against rising threats. Image: Dima Solomin / Unsplash

As AI capabilities in cybersecurity accelerate, OpenAI is expanding its Trusted Access for Cyber (TAC) program, opening advanced defensive tools to a broader pool of verified security professionals while introducing a more permissive model tailored for cyber use cases.

At the center of the update is GPT-5.4-Cyber, a specialized version of its latest model designed to assist with advanced defensive workflows such as vulnerability analysis and binary reverse engineering. The model lowers certain safety restrictions for vetted users, enabling deeper investigation into software security risks while maintaining controlled access.

Scaling Access Without Losing Control

The TAC program, first introduced earlier this year, is now being expanded to include thousands of individual defenders and hundreds of organizations responsible for securing critical infrastructure. Access is tiered, with higher levels granted to users who undergo stricter identity verification and demonstrate legitimate cybersecurity use cases.

Rather than broadly releasing its most capable systems, OpenAI is taking a structured approach, balancing accessibility with safeguards. The company emphasizes automated verification, trust signals, and usage monitoring to determine who can access more powerful capabilities, avoiding fully centralized or arbitrary gatekeeping.

This reflects a broader industry trend. Competitors like Anthropic have similarly restricted access to advanced models such as Claude Mythos, highlighting growing concerns over the dual-use nature of AI systems that can both defend and attack digital infrastructure.

AI Is Accelerating Both Sides of Cybersecurity

OpenAI’s strategy acknowledges a core reality: AI is already being used by both defenders and attackers. Existing models can analyze codebases, identify vulnerabilities, and assist in exploit development, lowering the barrier to entry for cyber operations.

To counterbalance this, OpenAI has been investing in defensive tooling, including Codex Security, which has already contributed to fixing thousands of vulnerabilities across software ecosystems. The company is also backing its efforts with initiatives like a $10 million cybersecurity grant program and support for open-source projects.

The introduction of GPT-5.4-Cyber builds on this trajectory, aiming to enhance defenders’ ability to detect and remediate risks faster—especially as agentic coding systems and AI-assisted development expand the overall attack surface.

A Shift Toward Continuous Security

A key theme in OpenAI’s approach is moving away from periodic security audits toward continuous, AI-assisted defense. By embedding advanced models directly into development workflows, the goal is to identify and fix vulnerabilities in real time as software is written.
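To make the "continuous, AI-assisted defense" idea concrete, here is a minimal sketch of the kind of hook such a workflow might run on every commit. Everything here is illustrative: a real deployment would send the diff to a hosted model, whereas this stand-in flags risky additions with local pattern matching, and the `RISKY_PATTERNS` list and `review_diff` helper are hypothetical names, not part of any OpenAI product.

```python
import re

# Hypothetical patterns a reviewer model might flag; a real deployment
# would call a hosted model instead of matching regexes locally.
RISKY_PATTERNS = {
    r"\beval\(": "dynamic code execution",
    r"verify\s*=\s*False": "TLS verification disabled",
    r"(?i)password\s*=\s*['\"]": "hard-coded credential",
}

def review_diff(diff: str) -> list[dict]:
    """Scan only the added lines of a unified diff and flag risky code."""
    findings = []
    for lineno, line in enumerate(diff.splitlines(), start=1):
        if not line.startswith("+") or line.startswith("+++"):
            continue  # only newly added code is reviewed
        for pattern, reason in RISKY_PATTERNS.items():
            if re.search(pattern, line):
                findings.append({"line": lineno, "reason": reason})
    return findings

diff = """\
+++ b/app.py
+password = "hunter2"
+resp = requests.get(url, verify=False)
"""
for finding in review_diff(diff):
    print(finding["reason"])
```

The point of the sketch is the placement, not the scanner: running the check at commit time, on only the lines being added, is what turns a periodic audit into a continuous one.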

This shift mirrors broader changes across the industry, where AI is increasingly integrated into coding, infrastructure management, and enterprise automation, often creating new vulnerabilities alongside productivity gains.

Controlled Deployment for High-Risk Capabilities

Because GPT-5.4-Cyber is more permissive than standard models, its rollout is intentionally limited. Initial access is restricted to vetted security vendors, researchers, and organizations, with additional controls applied in environments where OpenAI has less visibility into usage.

The company argues that this iterative deployment model – gradually expanding access while refining safeguards – is essential as AI capabilities continue to evolve rapidly.

The Bigger Picture

OpenAI’s expansion of TAC signals a shift in how frontier AI systems are being deployed. Rather than mass releases, the most powerful capabilities are increasingly distributed through controlled programs aimed at trusted users.

As cybersecurity risks grow alongside AI capabilities, the balance between openness and restriction is becoming a defining challenge for the industry. OpenAI’s approach suggests that future models, especially those with strong cyber capabilities, will be rolled out not as general-purpose tools, but as specialized systems governed by access, verification, and ongoing oversight.

Nvidia Launches First Open-Source AI Models for Quantum Computing

Nvidia unveils Ising, a new open-source AI model family designed to tackle quantum computing’s biggest bottlenecks: calibration and error correction.

By Daniel Mercer, edited by Maria Konash
Nvidia launches Ising, open-source AI models to improve quantum computing calibration and error correction. Image: Nvidia

Nvidia is pushing deeper into the future of computing with the launch of Ising, a new family of open-source AI models designed to solve some of quantum computing’s hardest problems: calibration and error correction.

The models aim to bridge a critical gap. Today’s quantum systems are powerful but fragile, and scaling them into reliable, real-world machines depends on overcoming persistent noise, instability, and error rates. Nvidia is betting that AI, not just physics, will be the key unlock.

Turning AI Into Quantum Infrastructure

Named after the Ising model, the system provides tools that act almost like an operating layer for quantum machines. According to Nvidia, Ising models can deliver up to 2.5x faster performance and 3x greater accuracy in quantum error correction compared to traditional approaches.

The family includes two core components:

  • Ising Calibration: A vision language model that interprets quantum processor signals and automates calibration, reducing processes that once took days down to hours.
  • Ising Decoding: Neural network models that handle real-time error correction, a fundamental requirement for scaling quantum systems.

Together, they move AI closer to being the “control plane” for quantum hardware. CEO Jensen Huang described this as essential to making quantum computing practical.

From Fragile Qubits to Scalable Systems

Quantum computers rely on qubits, which are notoriously sensitive to environmental noise. Even small disturbances can introduce errors, making large-scale, reliable computation extremely difficult.

Ising directly targets this bottleneck by automating both calibration and error correction. These are processes that traditionally require intensive manual tuning and specialized expertise.
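For readers unfamiliar with what "decoding" means here, the textbook 3-qubit bit-flip repetition code shows the core loop in miniature: measure parity checks (the syndrome), then infer and undo the most likely error. This toy lookup table is the classical baseline, not Nvidia's implementation; neural decoders like Ising Decoding learn the same syndrome-to-correction mapping for far larger codes where lookup tables become intractable.

```python
# Parity checks for the 3-qubit bit-flip code: syndrome bit i
# compares qubits i and i+1 without reading the qubits directly.
def measure_syndrome(qubits: list[int]) -> tuple[int, int]:
    return (qubits[0] ^ qubits[1], qubits[1] ^ qubits[2])

# Each syndrome points to the single most likely flipped qubit.
CORRECTION = {
    (0, 0): None,  # no error detected
    (1, 0): 0,     # qubit 0 flipped
    (1, 1): 1,     # qubit 1 flipped
    (0, 1): 2,     # qubit 2 flipped
}

def decode(qubits: list[int]) -> list[int]:
    """Apply the correction implied by the measured syndrome."""
    flip = CORRECTION[measure_syndrome(qubits)]
    if flip is not None:
        qubits = qubits.copy()
        qubits[flip] ^= 1
    return qubits

print(decode([0, 1, 0]))  # single flip on qubit 1 is repaired -> [0, 0, 0]
```

The "real-time" requirement the article mentions comes from this loop having to run continuously, faster than errors accumulate, which is why decoder speed matters as much as accuracy.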

The models are also designed to integrate with Nvidia’s broader ecosystem, including CUDA-Q software and NVQLink hardware. This enables hybrid systems where classical GPUs and quantum processors work together in real time.

Open Source as a Strategic Move

Unlike many frontier AI systems, Nvidia is releasing Ising as open source. The models can run locally, allowing researchers and enterprises to maintain full control over sensitive data and customize them for specific quantum architectures.

This approach reflects a broader shift in AI infrastructure. Open models are increasingly used to accelerate adoption in specialized domains where customization and data privacy are critical.

A Bigger Bet on AI-Driven Science

Ising is part of Nvidia’s expanding portfolio of domain-specific AI models, joining systems like Nemotron for agents, BioNeMo for biotech, and Isaac GR00T for robotics.

The broader strategy is clear: apply AI not just to software, but to foundational scientific and industrial challenges, from biology to robotics to quantum computing.

With the quantum computing market projected to exceed $11 billion by 2030, tools like Ising could play a critical role in determining whether the technology transitions from experimental promise to real-world utility.


AI Is Silently Making Cybersecurity Talent More Valuable Than Ever

New AI cybersecurity systems like Anthropic’s Project Glasswing could increase demand for security professionals as threats and vulnerabilities scale faster.

By Marcus Lee, edited by Maria Konash
AI cybersecurity tools like Glasswing drive demand for security talent as threats grow. Image: Philipp Katzenberger / Unsplash

Anthropic’s recent launch of Project Glasswing and its experimental Claude Mythos model is triggering debate across the cybersecurity industry, with some experts warning it could significantly reshape both threat detection and workforce demand. Tal Hoffman, founder of EnclaveAI, said the shift could sharply increase demand for cybersecurity professionals as AI-driven tools expose more vulnerabilities at scale. While specific claims about the model’s capabilities remain under scrutiny, the broader direction points to a move toward AI systems capable of autonomously identifying and validating software weaknesses.

Project Glasswing brings together major industry players, including Amazon Web Services, Google, Microsoft, and CrowdStrike, to apply advanced AI models to defensive cybersecurity. Early reports suggest such systems can uncover high-severity vulnerabilities in mature codebases and, crucially, demonstrate whether those flaws are exploitable. This ability to move from detection to validation marks a significant shift in how security risks are assessed.

From Discovery to Exploitation

Traditionally, vulnerability scanning has been plagued by false positives, requiring extensive manual triage by security teams. AI-driven systems promise to improve signal quality by surfacing fewer but more meaningful findings. More importantly, they can potentially automate exploit validation, reducing the gap between identifying a flaw and proving it can be used in an attack.

This shift could dramatically increase the volume of actionable vulnerabilities. While automation may reduce some manual work, it also creates a new bottleneck: remediation. Security teams may face a surge in verified issues that require prioritization, architectural decisions, and fixes—tasks that still depend heavily on human expertise.

Growing Asymmetry in Cyber Defense

The controlled rollout of these tools highlights another emerging challenge: uneven access. Anthropic has restricted availability of its most advanced models to a limited group of organizations, citing safety concerns. While this approach aligns with responsible deployment practices, it creates a gap between companies with access to advanced AI defenses and those without.

At the same time, the attack surface is expanding rapidly. AI agents, internal automation tools, and integrations across business functions are introducing new vulnerabilities, often without formal security review. These developments are creating new categories of risk, including prompt injection attacks and insecure AI-driven workflows.

Rising Demand for Security Talent

Despite advances in automation, experts argue that demand for cybersecurity professionals is likely to increase rather than decline. As AI systems surface more vulnerabilities and accelerate workflows, organizations will need more specialists to interpret findings, implement fixes, and manage evolving threat models.

The role of security teams is shifting from finding vulnerabilities to managing and mitigating them at scale. This transition could elevate cybersecurity from a back-office function to a strategic priority, particularly as AI-driven risks begin to impact critical infrastructure and enterprise systems.

In the longer term, AI-powered tools may strengthen defenders by enabling continuous monitoring and faster response times. However, in the near term, the gap between emerging capabilities and widespread access remains a key challenge. Organizations that invest early in AI-driven security tools and talent may be better positioned to navigate this transition as the cybersecurity landscape evolves.


Claude Code Gets ‘Routines’ to Enable Autonomous AI Tasks

Anthropic has introduced “Routines” in Claude Code, enabling autonomous AI workflows triggered by schedules, APIs, or GitHub events.

By Daniel Mercer, edited by Maria Konash
Anthropic launches Claude Code Routines for trigger-based, autonomous AI workflows. Image: Claude Code

Anthropic has introduced a new feature called “Routines” to its Claude Code platform, marking a step toward fully autonomous AI workflows that can operate continuously in the background. The feature, currently in research preview, allows users to define tasks that run automatically based on triggers such as schedules, API calls, or GitHub events.

Routines effectively transform Claude Code from an interactive coding assistant into a persistent, cloud-based agent. Once configured, a routine can execute tasks without user intervention, even when a device is offline. Each routine combines a prompt, connected repositories, and external integrations into a reusable workflow that can be triggered repeatedly.

The system supports multiple trigger types. Scheduled routines can run at regular intervals such as hourly or weekly, while API-triggered routines can be activated programmatically via HTTP requests. GitHub-based triggers allow the system to respond automatically to events like pull requests or releases. These triggers can be combined, enabling more complex automation scenarios.
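The trigger model described above (an event arrives, and a routine runs only if that trigger type was registered) can be sketched in a few lines. To be clear, this is a conceptual illustration: the `Routine` class, its fields, and the event names are invented for this sketch and are not Claude Code's actual API or configuration format.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of trigger-gated routines; names are illustrative.
@dataclass
class Routine:
    name: str
    prompt: str
    triggers: set[str] = field(default_factory=set)  # e.g. {"schedule", "github"}
    runs: list[str] = field(default_factory=list)

    def fire(self, event: str, payload: str) -> bool:
        """Run the routine only if the incoming event matches a registered trigger."""
        if event not in self.triggers:
            return False
        self.runs.append(f"{event}: {payload}")
        return True

review = Routine(
    name="pr-review",
    prompt="Review the diff and leave comments",
    triggers={"github", "api"},  # two trigger types combined on one routine
)
review.fire("github", "pull_request opened")  # registered trigger -> runs
review.fire("schedule", "hourly tick")        # not registered -> skipped
print(review.runs)
```

Combining several trigger types on one routine, as in the sketch, is what enables the "more complex automation scenarios" the article refers to: the same prompt and repository bindings can be reached by a webhook, a cron-style schedule, or a repository event.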

From Assistant to Autonomous Agent

The introduction of Routines reflects a broader shift in AI tools from reactive assistants to proactive agents. Instead of responding to prompts, Claude Code can now initiate actions such as reviewing code, triaging alerts, updating documentation, or managing workflows across tools like Slack and GitHub.

Example use cases include automated code reviews, incident response workflows, backlog management, and deployment verification. In these scenarios, Claude can analyze data, generate outputs such as pull requests, and communicate results without human input, leaving users to review outcomes rather than perform repetitive tasks.

Routines run as full cloud sessions with access to selected repositories, environments, and connectors. This allows them to execute shell commands, interact with external services, and modify codebases, depending on permissions. However, the feature also introduces new considerations around security and governance, as actions are performed under the user’s identity and can affect production systems.

Expanding the Agent Ecosystem

The launch comes as Anthropic continues to expand its developer-focused offerings and compete with other AI platforms in building agent-based systems. Routines are available across Pro, Team, and Enterprise plans, signaling a push toward enterprise adoption where automation and integration are key requirements.

The update also arrives alongside expectations of a new model release, widely anticipated to further enhance Claude’s reasoning and coding capabilities. While details remain limited, the combination of more capable models and autonomous execution features points to a future where AI systems handle increasingly complex workflows end-to-end.

With Routines, Anthropic is positioning Claude Code not just as a tool for developers, but as an infrastructure layer for automated work. As organizations experiment with these capabilities, the balance between efficiency gains and operational risk will likely shape how quickly such systems are adopted at scale.

Half of Americans Now Use AI Weekly, ChatGPT Leads

A new national poll shows 50% of Americans used AI in the past week, with ChatGPT leading adoption across work, learning, and creative tasks.

By Samantha Reed, edited by Maria Konash
Poll finds half of Americans use AI weekly, with ChatGPT leading across work and creativity. Image: Eyestetix Studio / Unsplash

A new national poll from Epoch AI and Ipsos finds that artificial intelligence has reached mainstream adoption in the United States, with 50% of adults reporting they used an AI service in the past week. The data highlights how quickly AI tools have moved from niche technology to everyday utility, with ChatGPT emerging as the most widely used platform.

According to the survey, 31% of Americans reported using ChatGPT in the past week, ahead of competitors such as Google Gemini (21%), Microsoft Copilot (11%), and Meta AI (8%). Usage is also frequent, with 65% of AI users engaging with these tools multiple days per week and 16% using them nearly every day.
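Because the frequency figures are shares of AI users rather than of all adults, combining them with the 50% headline number gives a rough sense of absolute reach. A quick check, treating the reported percentages as exact:

```python
adults_using_ai = 0.50  # share of all U.S. adults who used AI in the past week
multi_day = 0.65        # share of those users active multiple days per week
near_daily = 0.16       # share of those users active nearly every day

# Convert user-base shares into shares of ALL adults:
print(round(adults_using_ai * multi_day, 3))   # 0.325 -> about a third of adults
print(round(adults_using_ai * near_daily, 2))  # 0.08  -> roughly 1 in 12 adults
```

So by these numbers, roughly a third of all American adults are using AI tools multiple days per week.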

The findings show that AI is being applied across a wide range of tasks. Around 80% of users rely on AI for information lookup or recommendations, while 59% use it for writing and editing, 55% for learning or advice, and 53% for brainstorming ideas. More advanced use cases are also gaining traction, including image generation (44%) and data analysis or programming (37%).

AI Becomes a Daily Tool

The poll suggests AI is becoming embedded in everyday digital workflows. Most users interact with AI by typing prompts directly, but many also rely on integrated experiences such as AI-powered search summaries or built-in assistants within productivity software. For example, a majority of Copilot users access AI within tools like Word, Excel, or Teams, while nearly half of Gemini users encounter AI-generated summaries in search results.

This integration is accelerating adoption by reducing friction. Rather than seeking out standalone tools, users increasingly encounter AI as part of the platforms they already use, making it a default layer in digital interactions.

Impact on Work and Productivity

The survey also highlights AI’s growing role in the workplace. Among employed respondents who use AI, 51% report using it for work-related tasks. Within that group, 26% primarily use AI for work, while another 25% split usage evenly between professional and personal purposes.

AI is already reshaping job responsibilities. One in five workers said AI now performs tasks they previously handled themselves, while 15% reported taking on new responsibilities enabled by AI tools. Despite this, access remains uneven: half of workers using AI rely on personal accounts or free versions, while only one-third use tools provided by their employer.

The results point to a transitional phase in enterprise adoption. While individuals are rapidly integrating AI into their workflows, many organizations are still formalizing policies, infrastructure, and access. As companies catch up, AI usage is likely to become more standardized across workplaces.

Overall, the data underscores a shift from experimentation to routine use. With half of Americans already engaging with AI weekly, tools like ChatGPT and its competitors are becoming a foundational layer of modern work and everyday decision-making.


AWS Launches Amazon Bio Discovery to Accelerate Drug Design

AWS has launched Amazon Bio Discovery, an AI-powered platform that helps scientists design, test, and refine drugs faster using integrated models and lab workflows.

By Laura Bennett, edited by Maria Konash

Amazon Web Services has launched Amazon Bio Discovery, a new AI-powered application designed to help scientists accelerate drug discovery by combining machine learning models with real-world lab testing. The platform introduces a “lab-in-the-loop” workflow, where AI-generated drug candidates are tested experimentally and fed back into the system to improve future results.

The application provides access to a broad catalog of biological foundation models, or bioFMs, trained on large biological datasets. These models can generate and evaluate potential drug candidates, particularly antibodies, during early-stage research. Scientists interact with the system through an AI agent that helps design experiments, select appropriate models, and optimize inputs using natural language rather than code.

Amazon Bio Discovery is designed to lower barriers to AI adoption in life sciences. Traditionally, using advanced models required specialized computational expertise and infrastructure. The new platform simplifies this process by offering pre-benchmarked models, automated workflows, and integrated tools for comparing performance. Researchers can also fine-tune models using their own experimental data without building custom pipelines, keeping proprietary data secure within their organization.

Closing the Loop Between AI and the Lab

A key feature of the platform is its integration with laboratory partners, including Twist Bioscience and Ginkgo Bioworks. Scientists can send AI-generated candidates directly for synthesis and testing, with results automatically routed back into the system. This creates a continuous feedback loop, allowing each experiment to improve the next iteration.
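The feedback loop described above (design candidates, test them, and let the results seed the next round) is essentially iterative optimization with the lab as the scoring function. The toy below illustrates only that loop structure; the sequences, the mutation step, and the scoring function are invented stand-ins, not anything from Amazon's bioFMs or its lab partners.

```python
import random

random.seed(0)
TARGET = "AVGH"  # pretend binding motif that the stand-in "assay" rewards

def propose(seed: str, n: int = 8) -> list[str]:
    """Design step: mutate one random position of the seed sequence n times."""
    out = []
    for _ in range(n):
        pos = random.randrange(len(seed))
        out.append(seed[:pos] + random.choice("ACDEFGHVW") + seed[pos + 1:])
    return out

def assay(candidate: str) -> int:
    """Stand-in for wet-lab testing: count positions matching the target."""
    return sum(a == b for a, b in zip(candidate, TARGET))

best = "WWWW"
for _ in range(20):  # each pass = design -> synthesize -> test -> feed back
    best = max(propose(best) + [best], key=assay)  # results seed the next round

print(best, assay(best))
```

The design choice worth noting is that each round's winner is carried forward, so the experimental results genuinely steer later designs; in the real platform, that role is played by assay data flowing back from partners like Twist Bioscience and Ginkgo Bioworks.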

The approach has already shown early results. In collaboration with Memorial Sloan Kettering Cancer Center, researchers used the platform to design hundreds of thousands of antibody candidates for pediatric cancer therapies. What traditionally takes months or even a year was reduced to a matter of weeks, from initial design to lab testing.

Democratizing AI in Life Sciences

Amazon Bio Discovery reflects a broader push to make advanced AI tools accessible to a wider range of scientists, not just those with machine learning expertise. By combining model access, experiment design, and lab coordination into a single platform, AWS aims to streamline workflows that are often fragmented across multiple systems and teams.

The platform is built on infrastructure already widely used in the pharmaceutical industry, with AWS noting that 19 of the top 20 global drugmakers rely on its cloud services. Early adopters include Bayer, the Broad Institute, and Fred Hutch Cancer Center. The launch also aligns with a wider wave of AI-driven partnerships across the sector, such as Novo Nordisk teaming up with OpenAI to accelerate drug discovery for obesity and diabetes treatments.

As AI becomes more embedded in drug development, platforms like Amazon Bio Discovery highlight a shift toward integrated systems that connect computational design with real-world experimentation. This convergence could significantly shorten development timelines and expand access to advanced research tools across the life sciences ecosystem.
