U.S. Banks Rush To Fix Vulnerabilities Found By Anthropic Mythos

Major U.S. banks are rapidly patching software vulnerabilities uncovered by Anthropic’s Mythos AI model as concerns grow over AI-driven cybersecurity risks. The system is reportedly identifying weaknesses and attack chains at speeds beyond traditional security workflows.

By Maria Konash
U.S. banks speed up software patching after Anthropic’s Mythos AI uncovers widespread cybersecurity vulnerabilities. Image: David Vincent / Unsplash

Major U.S. banks are racing to patch IT system vulnerabilities identified by Anthropic’s powerful Mythos AI model, triggering urgent software upgrades and faster cybersecurity remediation processes across the banking sector.

According to sources familiar with the matter, several of the country’s largest financial institutions currently have access to Claude Mythos Preview through Anthropic’s Project Glasswing initiative. As banks analyze the findings, they are reportedly uncovering large numbers of previously low- or moderate-priority weaknesses that the AI system can chain together into higher-risk attack paths.

The vulnerabilities span both proprietary and open-source software, with legacy systems drawing particular scrutiny because of lapsed vendor support and slower patching cycles. Multiple sources said banks are now fixing within days vulnerabilities that previously might have remained unresolved for weeks.

The accelerated remediation effort is also creating operational pressure inside financial institutions. Sources said some banks may need to temporarily take systems offline more frequently to implement updates and security fixes, though institutions are attempting to minimize disruption for customers.

“This is a wake-up call because cyber risk is moving to machine speed, while much of bank defense still operates at human speed,” said Nitin Seth, co-founder and CEO of data and AI services firm Incedo.

Mythos has reportedly proven especially effective at identifying complex attack chains by linking together multiple seemingly minor weaknesses into broader exploitable vulnerabilities. One banking source described the system as forcing institutions into remediation timelines “never previously contemplated.”

Access to Mythos remains limited because of both safety concerns and infrastructure costs. Anthropic initially restricted availability to Project Glasswing partners and a small group of additional organizations. Banks reportedly using the system include JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley.

AI-Driven Cybersecurity Changes Banking Operations

The rapid adoption of Mythos highlights how advanced AI systems are beginning to reshape cybersecurity operations inside highly regulated industries.

Unlike conventional vulnerability scanners, Mythos reportedly demonstrates stronger reasoning, connecting isolated weaknesses into realistic attack scenarios. Regulators and cybersecurity experts have increasingly warned that frontier AI systems could dramatically accelerate both cyber defense and cyber offense.

A senior banking regulatory official told Reuters the model had proven “as powerful as anticipated,” particularly in its ability to connect vulnerabilities that human analysts might take far longer to identify.

The pressure is especially acute for banks because financial systems often rely on decades-old infrastructure, proprietary software stacks, and interconnected legacy environments that are difficult to modernize quickly without operational risk.

High Costs Create Uneven Access To Frontier Cyber AI

One major challenge for smaller banks is the cost and infrastructure required to use frontier cybersecurity models effectively.

Anthropic prices Mythos at $25 per million input tokens and $125 per million output tokens, making it significantly more expensive than its widely available Claude Opus 4.7 model. Anthropic has said it will provide $100 million in credits to Project Glasswing participants and Mythos customers to support research-preview usage.
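At those rates, per-job costs are straightforward to estimate. Below is a minimal sketch using only the Mythos list prices quoted above; the token counts in the example are illustrative, not figures reported by any bank.

```python
# Illustrative cost math for the Mythos rates quoted above:
# $25 per million input tokens, $125 per million output tokens.
# The workload sizes below are hypothetical.

MYTHOS_INPUT_PER_M = 25.0    # USD per 1M input tokens
MYTHOS_OUTPUT_PER_M = 125.0  # USD per 1M output tokens

def mythos_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of one Mythos job at the quoted list rates."""
    return (input_tokens / 1_000_000) * MYTHOS_INPUT_PER_M + \
           (output_tokens / 1_000_000) * MYTHOS_OUTPUT_PER_M

# e.g. feeding a 10M-token codebase and receiving a 1M-token report back:
print(mythos_cost(10_000_000, 1_000_000))  # 375.0
```

Even a single large-repository scan therefore runs into hundreds of dollars, which helps explain why access is concentrated among the largest institutions.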

Cybersecurity firms involved in Project Glasswing said the model requires entirely new workflows and methodologies to operate effectively. Adam Meyers of CrowdStrike said his team spent an entire weekend developing processes for using Mythos before actively searching for vulnerabilities.

Anthropic has separately attempted to broaden defensive access through Claude Security and has published recommendations for organizations without direct Mythos access. The company has also expanded its enterprise cybersecurity offerings through a recently announced financial services AI platform and a separate $1.5 billion AI deployment venture, backed by firms including Blackstone and Goldman Sachs, aimed at helping organizations operationalize Claude-based systems.


Google Explores SpaceX Deal For Orbital Data Centers

Google is reportedly in talks with SpaceX and other launch providers as it explores deploying orbital data centers under its Project Suncatcher initiative. The discussions reflect growing interest in space-based AI infrastructure and computing capacity.

By Olivia Grant, edited by Maria Konash
Google explores SpaceX launch deal for orbital AI data centers as Project Suncatcher targets 2027 prototypes. Image: ActionVance / Unsplash

Google is reportedly in talks with SpaceX over a potential rocket launch agreement tied to the company’s efforts to develop orbital data centers, according to a Wall Street Journal report citing people familiar with the discussions.

The report said Google is also holding conversations with other rocket-launch providers as it evaluates infrastructure options for deploying computing systems in space. The initiative is connected to Google’s previously disclosed Project Suncatcher program, which aims to research space-based data center technology and launch two prototype satellites by early 2027.

Project Suncatcher was first revealed in November as part of Google’s long-term exploration of alternative AI infrastructure systems. The project focuses on whether orbital computing platforms could eventually help address growing energy, cooling, and land constraints associated with terrestrial AI data centers.

A partnership with SpaceX would mark another instance of Elon Musk cooperating commercially with AI rivals he has publicly criticized in the past. Musk has repeatedly attacked Google’s AI strategy while simultaneously expanding his own AI infrastructure ambitions through xAI and SpaceX.

Space-Based Computing Gains Attention In AI Industry

The idea of orbital data centers has shifted from theoretical research toward early-stage infrastructure planning as AI companies search for ways to overcome physical limitations facing existing compute expansion.

Space-based infrastructure offers several potential advantages, including access to uninterrupted solar energy, reduced land and cooling constraints, and theoretically massive long-term compute scalability if launch costs continue declining.

However, major technical challenges remain, including radiation exposure, hardware reliability, maintenance logistics, latency management, and the economics of deploying large-scale compute systems into orbit.

AI Infrastructure Race Expands Beyond Earth

Competition in artificial intelligence is expanding into infrastructure ownership and compute deployment strategy rather than focusing solely on model development.

Last week, Anthropic signed an agreement to access the full compute capacity of SpaceXAI’s Colossus 1 supercomputer facility in Memphis, adding more than 220,000 NVIDIA GPUs to support Claude training and inference workloads. The partnership also included discussions around developing multiple gigawatts of orbital compute infrastructure.

The move followed Musk’s decision to merge xAI directly into SpaceX under a new SpaceXAI structure combining AI models, compute infrastructure, and aerospace operations into a single organization. Analysts said the consolidation could give SpaceXAI a unique advantage if orbital AI infrastructure becomes commercially feasible in the coming years.


OpenAI Introduces Daybreak in Response to Anthropic’s Mythos Push

OpenAI has introduced Daybreak, a cybersecurity initiative designed to integrate AI-driven defense directly into software development workflows. The platform combines GPT-5.5 models, Codex Security, and partnerships with major security firms to automate vulnerability analysis and remediation.

By Marcus Lee, edited by Maria Konash
OpenAI launches Daybreak with GPT-5.5 and Codex Security to automate vulnerability detection and patching. Image: OpenAI

OpenAI has launched Daybreak, a cybersecurity initiative aimed at embedding AI-driven defense directly into software development and security operations workflows. The company said the platform combines its GPT-5.5 models, the Codex Security agent framework, and partnerships with major cybersecurity firms to help organizations identify, validate, and remediate vulnerabilities faster.

OpenAI described Daybreak as a system designed to move cybersecurity “from discovery to remediation” while integrating defensive intelligence into the software development process itself. Rather than focusing solely on finding vulnerabilities after deployment, the initiative aims to make software “resilient by design.”

The platform uses multiple AI models depending on workflow sensitivity. GPT-5.5 will support general development and analysis tasks, while GPT-5.5 with Trusted Access for Cyber is intended for verified defensive security operations such as secure code review, malware analysis, vulnerability triage, patch validation, and detection engineering.

OpenAI also introduced GPT-5.5-Cyber, a more permissive version intended for specialized authorized workflows including penetration testing, controlled validation, and red teaming activities under stricter verification and account-level controls.

At the center of the initiative is Codex Security, an agentic cybersecurity system capable of scanning repositories, building editable threat models, identifying realistic attack paths, validating high-risk findings, generating patches, and testing fixes directly inside codebases.

In one demonstration, OpenAI showed Codex Security scanning a software repository, prioritizing exploitable vulnerabilities, generating remediation patches, and returning audit-ready evidence documenting the fixes.

The company said Daybreak is designed to reduce vulnerability analysis workflows from hours to minutes while improving prioritization of high-impact security issues and lowering token usage costs during large-scale code analysis.

OpenAI Expands Its Cybersecurity Push

The launch positions OpenAI more directly against Anthropic in the growing market for AI-driven cybersecurity systems.

Anthropic’s Claude Mythos Preview model previously drew attention after reportedly helping identify and patch 271 vulnerabilities in the Firefox browser alone. That announcement intensified concerns in Washington and across the cybersecurity industry about increasingly capable AI systems discovering exploitable software weaknesses faster than organizations can fix them.

Unlike some AI-assisted security tools focused primarily on vulnerability detection, OpenAI said Daybreak is intended to integrate remediation directly into development pipelines through continuous patch validation, secure code review, and automated remediation workflows.

The company emphasized that stronger cyber capabilities also require stricter safeguards. OpenAI said Daybreak combines expanded defensive capabilities with verification systems, monitoring controls, proportional safeguards, and accountability mechanisms intended to limit misuse.

Security Firms And Governments Prepare For AI-Native Defense

OpenAI is launching Daybreak alongside partnerships with several major cybersecurity and infrastructure companies, including Cloudflare, Cisco, CrowdStrike, Palo Alto Networks, Oracle, Akamai Technologies, Fortinet, and Zscaler.

“We’re excited about the potential of OpenAI’s cyber capabilities to bring stronger reasoning and more agentic execution into security workflows,” said Cloudflare CTO Dane Knecht. “It’s a big step forward for teams to be able to leverage frontier models not only to accelerate velocity, but also to improve their security posture.”

The initiative also comes as governments and regulators increasingly focus on AI-powered cyber capabilities following warnings around advanced systems such as Anthropic’s Mythos. Earlier this year, OpenAI separately announced plans to provide European institutions with access to GPT-5.5-Cyber under its broader EU Cyber Action Plan as policymakers intensify oversight of frontier AI security models.

SoftBank Injects $457 Million Into British AI Chipmaker

SoftBank has invested more than $450 million into Graphcore as the Japanese technology group expands its AI infrastructure and semiconductor ambitions. The funding follows SoftBank’s acquisition of the British AI chip company in 2024.

By Olivia Grant, edited by Maria Konash
SoftBank invests $457M in Graphcore to expand AI chip and infrastructure efforts. Image: Vishnu Mohanan / Unsplash

SoftBank Group has injected more than $450 million into British AI chip company Graphcore as the Japanese technology conglomerate accelerates investments in artificial intelligence infrastructure and semiconductor development.

According to a filing with the UK’s Companies House, Graphcore issued a single share valued at approximately $457 million on April 10. A Graphcore spokesperson confirmed the funding came from SoftBank. Sources familiar with the arrangement told CNBC the investment represents only part of the capital Graphcore is expected to receive from SoftBank this year.

SoftBank acquired Graphcore in 2024 after the UK startup struggled to compete commercially against dominant AI chip suppliers such as Nvidia. Before the acquisition, Graphcore had raised hundreds of millions of dollars and was once positioned as a potential challenger in the rapidly expanding AI accelerator market.

At the time of the acquisition, SoftBank said Graphcore would help support its broader ambitions around artificial general intelligence development. The company has since become part of SoftBank’s growing portfolio of AI infrastructure and semiconductor assets.

The new funding comes as SoftBank sharply increases spending across AI hardware, compute infrastructure, and data center projects. The company is involved in the $500 billion Stargate AI infrastructure initiative alongside OpenAI and Oracle, while also pursuing additional semiconductor and robotics investments globally.

SoftBank founder and CEO Masayoshi Son previously described Graphcore as “a company with deep expertise in chip design,” adding that the acquisition strengthened SoftBank’s semiconductor strategy alongside chip architecture company Arm Holdings.

Graphcore has also expanded internationally since the acquisition. In October, the company announced plans to invest up to £1 billion into a new AI campus in Bengaluru, India, focused on AI, silicon engineering, software, and systems development.

SoftBank Expands Its AI Infrastructure Strategy

The Graphcore funding highlights SoftBank’s broader effort to build an integrated AI infrastructure ecosystem spanning semiconductors, compute, robotics, and large-scale data centers.

Over the past two years, SoftBank has aggressively repositioned itself around AI after previously focusing heavily on venture capital investments through the Vision Fund. The company has since shifted toward owning strategic infrastructure assets directly involved in AI model training and deployment.

In addition to Graphcore and Arm, SoftBank also acquired silicon design company Ampere Computing in 2025. Reports have further indicated the company is exploring major AI data center projects in Europe, including a potential $100 billion investment in AI infrastructure in France following discussions with Emmanuel Macron, and is separately considering listing a standalone AI and robotics business in the United States.

Competition For AI Chips Intensifies

The investment also reflects increasing competition in AI semiconductors as companies seek alternatives to Nvidia’s dominant position in the market for AI accelerators.

While Graphcore struggled to achieve broad commercial adoption independently, SoftBank appears to view the company’s chip architecture and engineering expertise as strategically valuable for future AI systems and infrastructure deployments.

Demand for AI compute hardware has surged globally alongside the rapid expansion of generative AI models and large-scale enterprise AI workloads. That growth has pushed technology companies and investors to secure access not only to chips, but also to energy, networking infrastructure, manufacturing capacity, and advanced semiconductor design talent.

For SoftBank, strengthening Graphcore may provide another pathway to participate directly in the long-term buildout of AI infrastructure rather than relying solely on minority investments in external AI companies.


Thinking Machines Introduces AI Models for Live Multimodal Collaboration

Thinking Machines Labs introduced a research preview of “interaction models” designed for continuous real-time collaboration across audio, video, and text. The system combines live multimodal interaction with asynchronous reasoning and tool use.

By Daniel Mercer, edited by Maria Konash

Thinking Machines Labs introduced a research preview of what it calls “interaction models,” a new class of AI systems designed to collaborate with users continuously across audio, video, and text rather than through traditional turn-based prompts.

The company said the models are trained from scratch to support real-time interaction, allowing users and AI systems to speak, interrupt, observe, respond, and work simultaneously. The architecture is built around “micro-turns” that process roughly 200 milliseconds of input and output at a time, enabling continuous two-way interaction instead of waiting for users to finish speaking or typing before responding.
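The micro-turn scheduling described above can be sketched as a simple loop in which input and output slices interleave each tick. Everything in the sketch below (the function names, the grammar-correcting "model") is a hypothetical illustration of the idea, not Thinking Machines' implementation.

```python
# Toy illustration of the "micro-turn" idea: the model consumes and
# produces ~200 ms slices in the same tick, rather than waiting for
# the user to finish a full turn. All names here are hypothetical.

MICRO_TURN_MS = 200

def run_micro_turns(input_stream, respond):
    """Interleave input and output slices, one micro-turn at a time.

    input_stream: iterable of ~200 ms input chunks (None = silence).
    respond: function mapping everything heard so far to this tick's
             output chunk, or None if the model stays quiet this tick.
    """
    heard, spoken = [], []
    for chunk in input_stream:
        if chunk is not None:
            heard.append(chunk)   # ingest this slice of input
        out = respond(heard)      # model may speak in the same tick
        if out is not None:
            spoken.append(out)
    return spoken

# Example: a "model" that interjects the moment it hears a mistake,
# without waiting for the speaker to stop talking.
def correct_grammar(heard):
    return "you mean 'went'" if heard and heard[-1] == "goed" else None

print(run_micro_turns(["I", "goed", "home", None], correct_grammar))
```

The key property the sketch captures is that output can be emitted mid-stream: the interjection lands on the tick the mistake is heard, while input keeps flowing.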

According to Thinking Machines, the system combines a real-time interaction model with a separate asynchronous background model responsible for longer reasoning tasks, tool use, browsing, and workflow execution. The interaction layer remains active throughout the process while integrating results from the background model as they arrive.

The company argued that current AI systems create a “collaboration bottleneck” because most models operate through rigid turn-taking interfaces that limit human involvement during reasoning and execution. Thinking Machines said its approach aims to make AI collaboration function more like natural human conversation.

The research preview demonstrates several capabilities that are difficult to achieve in standard voice assistants or multimodal chat systems. These include simultaneous speech between user and model, proactive verbal and visual interjections, continuous visual monitoring, real-time translation, concurrent tool use during conversations, and direct awareness of elapsed time.

For example, the company showed scenarios where the model corrected spoken language mistakes while users continued speaking, counted physical exercises through live video streams, reacted to coding errors as they appeared onscreen, and performed live multilingual translation without pausing conversations.

Interaction Becomes A Core AI Capability

The announcement reflects a broader shift in AI development toward systems optimized for continuous collaboration rather than isolated prompt-response exchanges.

Most current real-time AI products rely on external orchestration layers such as voice activity detection systems and separate dialogue managers to simulate interactivity. Thinking Machines argues those approaches create limitations because the intelligence governing interruptions, timing, and conversational flow exists outside the model itself.

Instead, the company embedded interaction directly into model training and architecture. That allows responsiveness, interruption handling, simultaneous speaking, and multimodal awareness to improve alongside overall model capability as systems scale.

The architecture also differs from many multimodal systems by minimizing reliance on large standalone audio or video encoders. Audio, video, and text are processed together through shared transformer infrastructure using lightweight embedding layers and early fusion techniques.

Benchmarks Highlight Speed And Responsiveness

Thinking Machines said its TML-Interaction-Small model achieved stronger combined responsiveness and interaction quality than several existing commercial real-time AI systems across internal and public benchmarks.

The company highlighted improvements in latency, interruption handling, simultaneous conversation, proactive responses, and continuous multimodal awareness. Internal evaluations also tested capabilities that many current voice models cannot reliably perform, including reacting to visual changes without explicit prompts and speaking concurrently with users during live tasks.

The released model is currently a 276-billion-parameter mixture-of-experts system with 12 billion active parameters at runtime. Thinking Machines said larger interaction models are already pretrained but remain too computationally expensive for low-latency deployment today.

The company added that future work will focus on longer session memory management, infrastructure optimization, safety research for real-time multimodal interaction, and deeper coordination between interactive and background reasoning systems.

The announcement also follows a recently expanded partnership between NVIDIA and Thinking Machines Labs to deploy next-generation Vera Rubin AI systems for frontier model training.


OpenAI Co-Founder Says Sam Altman Showed ‘Pattern of Lying’

Former OpenAI chief scientist Ilya Sutskever testified that he spent about a year collecting evidence that Sam Altman displayed a “consistent pattern of lying.” The testimony came during the ongoing OpenAI and Elon Musk trial in California.

By Samantha Reed, edited by Maria Konash
Ilya Sutskever says Sam Altman showed a “consistent pattern of lying” during the OpenAI leadership dispute and Musk trial. Image: Wesley Tingey / Unsplash

Ilya Sutskever testified in court that he spent roughly a year gathering evidence that Sam Altman displayed a “consistent pattern of lying” before voting to remove him as OpenAI CEO in November 2023.

The testimony came during the third week of the high-profile legal battle between Elon Musk and OpenAI in California federal court. Sutskever confirmed that he had been considering action against Altman for at least a year prior to the board’s decision to temporarily oust him.

According to Sutskever, OpenAI’s board asked him to prepare a document detailing concerns about Altman’s conduct. He testified that the material eventually reached 52 pages and included examples of dishonesty as well as behavior that allegedly involved “undermining and pitting executives against one another.”

Sutskever said he had discussed the possibility of removing Altman with former OpenAI chief technology officer Mira Murati after the two spoke extensively about Altman’s leadership style and internal management.

“His conduct was not conducive to any grand goal,” Sutskever said in court, referring specifically to OpenAI’s mission around safe artificial general intelligence.

Sutskever played a central role in Altman’s brief removal from OpenAI in 2023 while serving on the board. However, he later reversed course and supported Altman’s reinstatement after concerns emerged that the company could fracture or collapse during the leadership crisis.

The testimony also revealed new details about OpenAI’s internal turmoil during that period. Sutskever confirmed that remaining board members discussed a potential merger with rival AI company Anthropic after Altman’s removal. Under the proposal, Anthropic leadership would reportedly have taken control of OpenAI. Sutskever said he was “not excited” about the idea.

He additionally disclosed that his personal stake in OpenAI was valued at approximately $5 billion in November 2025 and around $7 billion currently.

Trial Exposes Internal OpenAI Power Struggles

The testimony provides the clearest public account so far of the internal breakdown that led to Altman’s temporary firing and rapid reinstatement. While the board initially cited communication concerns at the time, Sutskever’s statements suggest the conflict involved longer-running disputes over management style, executive relationships, and governance.

The case has also exposed tensions between OpenAI’s nonprofit governance structure and the enormous commercial value generated by its AI business. OpenAI has raised tens of billions of dollars in investment while simultaneously operating under a nonprofit-controlled structure originally designed to prioritize AI safety and public benefit.

Musk, who co-founded OpenAI before leaving in 2018, argues the company abandoned those principles as it evolved into a highly commercial AI organization closely aligned with Microsoft.

OpenAI Leadership And Governance Face Renewed Scrutiny

The trial has become one of the most consequential legal disputes in the AI industry because it could reshape OpenAI’s governance, ownership structure, and leadership.

Musk is seeking $150 billion in damages to be directed to OpenAI’s nonprofit entity and has asked the court to remove Altman and OpenAI president Greg Brockman from leadership roles.

Earlier in the proceedings, Microsoft CEO Satya Nadella described Microsoft’s investment in OpenAI as a “calculated risk,” emphasizing that the partnership delivered major strategic and marketing advantages.

Sutskever, who left OpenAI in 2024 and later founded Safe Superintelligence, is expected to remain a key figure in the case as the court examines whether OpenAI’s transformation into a commercial AI powerhouse violated commitments made during its founding.
