OpenAI Co-Founder Says Sam Altman Showed ‘Pattern of Lying’

Former OpenAI chief scientist Ilya Sutskever testified that he spent about a year collecting evidence that Sam Altman displayed a “consistent pattern of lying.” The testimony came during the ongoing OpenAI and Elon Musk trial in California.

By Samantha Reed Edited by Maria Konash Published:
Ilya Sutskever says Sam Altman showed a “consistent pattern of lying” during the OpenAI leadership dispute and Musk trial. Image: Wesley Tingey / Unsplash

Ilya Sutskever testified in court that he spent roughly a year gathering evidence that Sam Altman displayed a “consistent pattern of lying” before voting to remove him as OpenAI CEO in November 2023.

The testimony came during the third week of the high-profile legal battle between Elon Musk and OpenAI in California federal court. Sutskever confirmed that he had been considering action against Altman for at least a year prior to the board's decision to oust him.

According to Sutskever, OpenAI’s board asked him to prepare a document detailing concerns about Altman’s conduct. He testified that the material eventually reached 52 pages and included examples of dishonesty as well as behavior that allegedly involved “undermining and pitting executives against one another.”

Sutskever said he had discussed the possibility of removing Altman with former OpenAI chief technology officer Mira Murati after the two spoke extensively about Altman’s leadership style and internal management.

“His conduct was not conducive to any grand goal,” Sutskever said in court, referring specifically to OpenAI’s mission around safe artificial general intelligence.

Sutskever played a central role in Altman’s brief removal from OpenAI in 2023 while serving on the board. However, he later reversed course and supported Altman’s reinstatement after concerns emerged that the company could fracture or collapse during the leadership crisis.

The testimony also revealed new details about OpenAI’s internal turmoil during that period. Sutskever confirmed that remaining board members discussed a potential merger with rival AI company Anthropic after Altman’s removal. Under the proposal, Anthropic leadership would reportedly have taken control of OpenAI. Sutskever said he was “not excited” about the idea.

He additionally disclosed that his personal stake in OpenAI was valued at approximately $5 billion in November 2025 and around $7 billion currently.

Trial Exposes Internal OpenAI Power Struggles

The testimony provides the clearest public account so far of the internal breakdown that led to Altman’s temporary firing and rapid reinstatement. While the board initially cited communication concerns at the time, Sutskever’s statements suggest the conflict involved longer-running disputes over management style, executive relationships, and governance.

The case has also exposed tensions between OpenAI’s nonprofit governance structure and the enormous commercial value generated by its AI business. OpenAI has raised tens of billions of dollars in investment while simultaneously operating under a nonprofit-controlled structure originally designed to prioritize AI safety and public benefit.

Musk, who co-founded OpenAI before leaving in 2018, argues the company abandoned those principles as it evolved into a highly commercial AI organization closely aligned with Microsoft.

OpenAI Leadership And Governance Face Renewed Scrutiny

The trial has become one of the most consequential legal disputes in the AI industry because it could reshape OpenAI’s governance, ownership structure, and leadership.

Musk is seeking $150 billion in damages to be directed to OpenAI’s nonprofit entity and has asked the court to remove Altman and OpenAI president Greg Brockman from leadership roles.

Earlier in the proceedings, Microsoft CEO Satya Nadella described Microsoft’s investment in OpenAI as a “calculated risk,” emphasizing that the partnership delivered major strategic and marketing advantages.

Sutskever, who left OpenAI in 2024 and later founded Safe Superintelligence, is expected to remain a key figure in the case as the court examines whether OpenAI’s transformation into a commercial AI powerhouse violated commitments made during its founding.


OpenAI Introduces Daybreak in Response to Anthropic’s Mythos Push

OpenAI has introduced Daybreak, a cybersecurity initiative designed to integrate AI-driven defense directly into software development workflows. The platform combines GPT-5.5 models, Codex Security, and partnerships with major security firms to automate vulnerability analysis and remediation.

By Marcus Lee Edited by Maria Konash Published:
OpenAI launches Daybreak with GPT-5.5 and Codex Security to automate vulnerability detection and patching. Image: OpenAI

OpenAI has launched Daybreak, a cybersecurity initiative aimed at embedding AI-driven defense directly into software development and security operations workflows. The company said the platform combines its GPT-5.5 models, the Codex Security agent framework, and partnerships with major cybersecurity firms to help organizations identify, validate, and remediate vulnerabilities faster.

OpenAI described Daybreak as a system designed to move cybersecurity “from discovery to remediation” while integrating defensive intelligence into the software development process itself. Rather than focusing solely on finding vulnerabilities after deployment, the initiative aims to make software “resilient by design.”

The platform uses multiple AI models depending on workflow sensitivity. GPT-5.5 will support general development and analysis tasks, while GPT-5.5 with Trusted Access for Cyber is intended for verified defensive security operations such as secure code review, malware analysis, vulnerability triage, patch validation, and detection engineering.

OpenAI also introduced GPT-5.5-Cyber, a more permissive version intended for specialized authorized workflows including penetration testing, controlled validation, and red teaming activities under stricter verification and account-level controls.

At the center of the initiative is Codex Security, an agentic cybersecurity system capable of scanning repositories, building editable threat models, identifying realistic attack paths, validating high-risk findings, generating patches, and testing fixes directly inside codebases.

In one demonstration, OpenAI showed Codex Security scanning a software repository, prioritizing exploitable vulnerabilities, generating remediation patches, and returning audit-ready evidence documenting the fixes.
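The scan → prioritize → patch → verify loop described above can be sketched in heavily simplified form. Codex Security's actual interfaces are not public, so every name here (`Finding`, `scan_repository`, the toy `eval()` check) is a hypothetical illustration of the workflow's shape, not OpenAI's API.

```python
import re
from dataclasses import dataclass

# Hypothetical sketch of an agentic scan -> prioritize -> patch -> verify
# loop like the one described for Codex Security. The "scanner" here only
# flags eval() calls as a stand-in for real vulnerability analysis.

@dataclass
class Finding:
    file: str
    issue: str
    severity: int          # higher = more exploitable
    patched: bool = False  # set after the fix is re-verified

def scan_repository(files: dict) -> list:
    """Toy scanner: flags bare eval() calls as a code-injection risk."""
    findings = []
    for path, source in files.items():
        if re.search(r"\beval\(", source):  # \b avoids matching literal_eval(
            findings.append(Finding(path, "use of eval()", severity=9))
    return findings

def prioritize(findings: list) -> list:
    """Handle the most exploitable findings first."""
    return sorted(findings, key=lambda f: f.severity, reverse=True)

def generate_patch(source: str) -> str:
    """Toy remediation: textually swap eval() for ast.literal_eval()."""
    return source.replace("eval(", "ast.literal_eval(")

def remediate(files: dict) -> tuple:
    """Run the full loop; return patched files plus an audit trail."""
    findings = prioritize(scan_repository(files))
    patched = dict(files)
    for f in findings:
        patched[f.file] = generate_patch(patched[f.file])
        # Re-scan the patched file to verify the finding no longer triggers.
        f.patched = not scan_repository({f.file: patched[f.file]})
    return patched, findings

repo = {"app.py": "result = eval(user_input)", "ok.py": "print('hello')"}
fixed, audit = remediate(repo)
```

The re-scan step mirrors the "audit-ready evidence" idea in the demonstration: each finding carries a verified `patched` flag rather than assuming the generated fix worked.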

The company said Daybreak is designed to reduce vulnerability analysis workflows from hours to minutes while improving prioritization of high-impact security issues and lowering token usage costs during large-scale code analysis.

OpenAI Expands Its Cybersecurity Push

The launch positions OpenAI more directly against Anthropic in the growing market for AI-driven cybersecurity systems.

Anthropic’s Claude Mythos Preview model previously drew attention after reportedly helping identify and patch 271 vulnerabilities in the Firefox browser alone. That announcement intensified concerns in Washington and across the cybersecurity industry about increasingly capable AI systems discovering exploitable software weaknesses faster than organizations can fix them.

Unlike some AI-assisted security tools focused primarily on vulnerability detection, OpenAI said Daybreak is intended to integrate remediation directly into development pipelines through continuous patch validation, secure code review, and automated remediation workflows.

The company emphasized that stronger cyber capabilities also require stricter safeguards. OpenAI said Daybreak combines expanded defensive capabilities with verification systems, monitoring controls, proportional safeguards, and accountability mechanisms intended to limit misuse.

Security Firms And Governments Prepare For AI-Native Defense

OpenAI is launching Daybreak alongside partnerships with several major cybersecurity and infrastructure companies, including Cloudflare, Cisco, CrowdStrike, Palo Alto Networks, Oracle, Akamai Technologies, Fortinet, and Zscaler.

“We’re excited about the potential of OpenAI’s cyber capabilities to bring stronger reasoning and more agentic execution into security workflows,” said Cloudflare CTO Dane Knecht. “It’s a big step forward for teams to be able to leverage frontier models not only to accelerate velocity, but also to improve their security posture.”

The initiative also comes as governments and regulators increasingly focus on AI-powered cyber capabilities following warnings around advanced systems such as Anthropic’s Mythos. Earlier this year, OpenAI separately announced plans to provide European institutions with access to GPT-5.5-Cyber under its broader EU Cyber Action Plan as policymakers intensify oversight of frontier AI security models.

SoftBank Injects $457 Million Into British AI Chipmaker

SoftBank has invested more than $450 million into Graphcore as the Japanese technology group expands its AI infrastructure and semiconductor ambitions. The funding follows SoftBank’s acquisition of the British AI chip company in 2024.

By Olivia Grant Edited by Maria Konash Published:
SoftBank invests $457M in Graphcore to expand AI chip and infrastructure efforts. Image: Vishnu Mohanan / Unsplash

SoftBank Group has injected more than $450 million into British AI chip company Graphcore as the Japanese technology conglomerate accelerates investments in artificial intelligence infrastructure and semiconductor development.

According to a filing with the UK’s Companies House, Graphcore issued a single share valued at approximately $457 million on April 10. A Graphcore spokesperson confirmed the funding came from SoftBank. Sources familiar with the arrangement told CNBC the investment represents only part of the capital Graphcore is expected to receive from SoftBank this year.

SoftBank acquired Graphcore in 2024 after the UK startup struggled to compete commercially against dominant AI chip suppliers such as Nvidia. Before the acquisition, Graphcore had raised hundreds of millions of dollars and was once positioned as a potential challenger in the rapidly expanding AI accelerator market.

At the time of the acquisition, SoftBank said Graphcore would help support its broader ambitions around artificial general intelligence development. The company has since become part of SoftBank’s growing portfolio of AI infrastructure and semiconductor assets.

The new funding comes as SoftBank sharply increases spending across AI hardware, compute infrastructure, and data center projects. The company is involved in the $500 billion Stargate AI infrastructure initiative alongside OpenAI and Oracle, while also pursuing additional semiconductor and robotics investments globally.

SoftBank founder and CEO Masayoshi Son previously described Graphcore as “a company with deep expertise in chip design,” adding that the acquisition strengthened SoftBank’s semiconductor strategy alongside chip architecture company Arm Holdings.

Graphcore has also expanded internationally since the acquisition. In October, the company announced plans to invest up to £1 billion into a new AI campus in Bengaluru, India, focused on AI, silicon engineering, software, and systems development.

SoftBank Expands Its AI Infrastructure Strategy

The Graphcore funding highlights SoftBank’s broader effort to build an integrated AI infrastructure ecosystem spanning semiconductors, compute, robotics, and large-scale data centers.

Over the past two years, SoftBank has aggressively repositioned itself around AI after previously focusing heavily on venture capital investments through the Vision Fund. The company has since shifted toward owning strategic infrastructure assets directly involved in AI model training and deployment.

In addition to Graphcore and Arm, SoftBank also acquired silicon design company Ampere Computing in 2025. Reports have additionally indicated the company is exploring major AI data center projects in Europe, including a potential $100 billion investment in AI infrastructure in France following discussions with Emmanuel Macron, while also considering a standalone AI and robotics business listing in the United States.

Competition For AI Chips Intensifies

The investment also reflects increasing competition in AI semiconductors as companies seek alternatives to Nvidia’s dominant position in the market for AI accelerators.

While Graphcore struggled to achieve broad commercial adoption independently, SoftBank appears to view the company’s chip architecture and engineering expertise as strategically valuable for future AI systems and infrastructure deployments.

Demand for AI compute hardware has surged globally alongside the rapid expansion of generative AI models and large-scale enterprise AI workloads. That growth has pushed technology companies and investors to secure access not only to chips, but also to energy, networking infrastructure, manufacturing capacity, and advanced semiconductor design talent.

For SoftBank, strengthening Graphcore may provide another pathway to participate directly in the long-term buildout of AI infrastructure rather than relying solely on minority investments in external AI companies.


Thinking Machines Introduces AI Models for Live Multimodal Collaboration

Thinking Machines Labs introduced a research preview of “interaction models” designed for continuous real-time collaboration across audio, video, and text. The system combines live multimodal interaction with asynchronous reasoning and tool use.

By Daniel Mercer Edited by Maria Konash Published:

Thinking Machines Labs introduced a research preview of what it calls “interaction models,” a new class of AI systems designed to collaborate with users continuously across audio, video, and text rather than through traditional turn-based prompts.

The company said the models are trained from scratch to support real-time interaction, allowing users and AI systems to speak, interrupt, observe, respond, and work simultaneously. The architecture is built around “micro-turns” that process roughly 200 milliseconds of input and output at a time, enabling continuous two-way interaction instead of waiting for users to finish speaking or typing before responding.

According to Thinking Machines, the system combines a real-time interaction model with a separate asynchronous background model responsible for longer reasoning tasks, tool use, browsing, and workflow execution. The interaction layer remains active throughout the process while integrating results from the background model as they arrive.
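The two-model arrangement can be illustrated with a minimal sketch: a fast loop that consumes input in ~200 ms micro-turns while a slower background worker posts results asynchronously, which the loop weaves in as they arrive. Thinking Machines has not published this interface, so all names and timings below are illustrative assumptions only.

```python
import queue
import threading
import time

# Hypothetical sketch of the micro-turn architecture described above:
# the interaction loop never blocks on the background model; it keeps
# processing input chunks and integrates results whenever they land.

MICRO_TURN_S = 0.2  # ~200 ms of input/output per micro-turn, per the article

def background_model(task: str, results: queue.Queue) -> None:
    """Stand-in for the asynchronous reasoning / tool-use model."""
    time.sleep(0.5)  # pretend to browse, reason, call tools...
    results.put(f"[background result for: {task}]")

def interaction_loop(chunks: list, task: str) -> list:
    """Process input micro-turn by micro-turn, weaving in background output."""
    results = queue.Queue()
    worker = threading.Thread(target=background_model, args=(task, results))
    worker.start()
    transcript = []
    for chunk in chunks:           # one micro-turn per chunk of audio/video/text
        transcript.append(f"heard: {chunk}")
        try:                       # integrate background output the moment it lands
            transcript.append(results.get_nowait())
        except queue.Empty:
            pass                   # keep interacting; never wait on the worker
        time.sleep(MICRO_TURN_S)
    worker.join()
    while not results.empty():     # drain anything that arrived after the last turn
        transcript.append(results.get())
    return transcript

log = interaction_loop(["hi", "can you", "look this up", "thanks"], "look up X")
```

The non-blocking `get_nowait()` is the key design point: the interaction layer stays responsive on every micro-turn regardless of how long the background model takes.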

The company argued that current AI systems create a “collaboration bottleneck” because most models operate through rigid turn-taking interfaces that limit human involvement during reasoning and execution. Thinking Machines said its approach aims to make AI collaboration function more like natural human conversation.

The research preview demonstrates several capabilities that are difficult to achieve in standard voice assistants or multimodal chat systems. These include simultaneous speech between user and model, proactive verbal and visual interjections, continuous visual monitoring, real-time translation, concurrent tool use during conversations, and direct awareness of elapsed time.

For example, the company showed scenarios where the model corrected spoken language mistakes while users continued speaking, counted physical exercises through live video streams, reacted to coding errors as they appeared onscreen, and performed live multilingual translation without pausing conversations.

Interaction Becomes A Core AI Capability

The announcement reflects a broader shift in AI development toward systems optimized for continuous collaboration rather than isolated prompt-response exchanges.

Most current real-time AI products rely on external orchestration layers such as voice activity detection systems and separate dialogue managers to simulate interactivity. Thinking Machines argues those approaches create limitations because the intelligence governing interruptions, timing, and conversational flow exists outside the model itself.

Instead, the company embedded interaction directly into model training and architecture. That allows responsiveness, interruption handling, simultaneous speaking, and multimodal awareness to improve alongside overall model capability as systems scale.

The architecture also differs from many multimodal systems by minimizing reliance on large standalone audio or video encoders. Audio, video, and text are processed together through shared transformer infrastructure using lightweight embedding layers and early fusion techniques.

Benchmarks Highlight Speed And Responsiveness

Thinking Machines said its TML-Interaction-Small model achieved stronger combined responsiveness and interaction quality than several existing commercial real-time AI systems across internal and public benchmarks.

The company highlighted improvements in latency, interruption handling, simultaneous conversation, proactive responses, and continuous multimodal awareness. Internal evaluations also tested capabilities that many current voice models cannot reliably perform, including reacting to visual changes without explicit prompts and speaking concurrently with users during live tasks.

The released model is currently a 276-billion-parameter mixture-of-experts system with 12 billion active parameters at runtime. Thinking Machines said larger interaction models are already pretrained but remain too computationally expensive for low-latency deployment today.

The company added that future work will focus on longer session memory management, infrastructure optimization, safety research for real-time multimodal interaction, and deeper coordination between interactive and background reasoning systems.

The announcement also follows a recently expanded partnership between NVIDIA and Thinking Machines Labs to deploy next-generation Vera Rubin AI systems for frontier model training.


Google Prepares New Gemini Omni AI Video Generation Model

Google appears to be preparing a new AI video generation model called Gemini Omni inside Gemini. Early tests show improved text rendering, conversational editing, and more realistic generated scenes.

By Daniel Mercer Edited by Maria Konash Published:
Google prepares Gemini Omni video model with conversational editing and improved scene realism inside Gemini. Image: BoliviaInteligente / Unsplash

Google appears to be preparing a new AI video generation model called Gemini Omni, according to early user reports and interface screenshots shared online. The feature surfaced inside Gemini with prompts inviting users to “Create with Gemini Omni,” suggesting Google may unveil the system more broadly at Google I/O 2026.

Google reportedly describes Omni as a “new video generation model” that supports video remixing, conversational editing, templates, and direct scene generation through chat prompts. While the company has not officially announced the model, metadata reportedly suggests Omni is connected to Google’s existing Veo video generation technology.

Early demonstrations indicate the system focuses on improving realism and consistency in generated video. One test generated a scene of a professor explaining trigonometric equations on a chalkboard while maintaining relatively coherent mathematical notation throughout the sequence. Text rendering remains one of the more difficult challenges for AI video systems because letters and equations often distort across moving frames.

Another example recreated the widely used “spaghetti test” benchmark, which many AI developers informally use to evaluate hand movement, object interaction, and eating realism in generated video. The generated clip showed two men seated at an outdoor restaurant eating spaghetti and holding a natural conversation, with fewer visual inconsistencies than earlier-generation AI video models.

The leaked interface also included a dedicated usage tracker for video generation. One tester said two complex prompts consumed roughly 86% of a daily AI Pro usage allowance, indicating that video generation workloads may remain heavily restricted because of their high compute requirements.

Google Pushes Gemini Deeper Into Video Creation

The apparent Omni integration suggests Google is moving video generation directly into Gemini instead of keeping Veo as a separate experimental product. The addition of conversational editing tools points toward a workflow where users can iteratively modify videos through chat rather than repeatedly generating clips from scratch.

That approach could make Gemini more competitive as an end-to-end creative platform combining text, image, audio, and video generation in a single interface. The reported template support also suggests Google may target marketing, education, and content production workflows rather than only experimental consumer use cases.

The stronger handling of written text and scene continuity is particularly notable because those remain major weaknesses across many current AI video systems.

Video AI Competition Accelerates Ahead Of I/O

The leaks arrive as AI companies increasingly compete in generative video infrastructure and creative tools. Video generation has become one of the fastest-growing areas of multimodal AI, though it also remains one of the most computationally expensive.

Google has continued investing heavily in video generation technology through Veo while expanding Gemini into a broader multimodal platform. The timing of the Omni leak, shortly before Google I/O 2026, suggests the company may be preparing a larger announcement around integrated AI media creation.

If launched publicly, Gemini Omni would place Google in more direct competition with other AI video platforms targeting professional content generation, conversational editing, and multimodal creative workflows.

OpenAI Launches $4 Billion Enterprise AI Deployment Venture

OpenAI is creating a new enterprise deployment company backed by more than $4 billion in initial funding and is acquiring AI consulting firm Tomoro.

By Maria Konash Published:
OpenAI forms a $4B AI deployment firm and acquires Tomoro to expand enterprise implementation services. Image: Levart_Photographer / Unsplash

OpenAI is creating a new enterprise-focused business called OpenAI Deployment Company with more than $4 billion in initial committed investment. The company also announced the acquisition of Tomoro, a consulting firm specializing in enterprise AI implementation, as it accelerates efforts to expand adoption of ChatGPT and other OpenAI systems inside large organizations.

According to OpenAI, the new unit will help companies build and deploy AI systems by embedding specialized engineers and deployment teams directly within customer organizations. These teams will work alongside corporate departments to identify operational areas where AI systems can automate workflows, improve productivity, or support decision-making.

The acquisition of Tomoro will immediately add around 150 AI engineers and deployment specialists to the business. Tomoro was established in 2023 through a partnership aligned with OpenAI and has worked with companies including Mattel, Red Bull, Tesco, and Virgin Atlantic.

OpenAI said the deployment venture is structured as a multi-year partnership between OpenAI and 19 investment firms. The initiative is led by TPG, with Advent International, Bain Capital, and Brookfield Asset Management acting as co-lead founding partners.

The launch comes as OpenAI intensifies its enterprise expansion efforts following widespread consumer adoption of ChatGPT. The company has increasingly focused on securing long-term corporate contracts and integrating AI systems into business operations at scale.

OpenAI Moves Beyond Software Licensing

The creation of OpenAI Deployment Company signals a broader shift in how frontier AI firms are approaching enterprise adoption. Instead of only selling access to AI models through APIs or subscriptions, OpenAI is building a dedicated implementation business designed to help customers operationalize AI across complex organizations.

The strategy reflects a growing reality in enterprise AI: deploying advanced models often requires extensive customization, workflow integration, governance planning, and technical support. Many companies lack internal expertise to manage those processes independently.

By embedding engineers directly inside client organizations, OpenAI is adopting an approach closer to enterprise consulting and systems integration firms than conventional software vendors. The model resembles how companies such as Palantir Technologies work with customers to integrate AI and data systems into operational workflows.

Enterprise AI Services Become A Competitive Battleground

The announcement also highlights increasing competition between leading AI companies for enterprise market share. OpenAI’s expansion comes as Anthropic continues gaining traction with its Claude models across corporate customers.

As previously reported, both OpenAI and Anthropic were exploring acquisitions of AI deployment and consulting firms as part of broader enterprise strategies. The sector has become strategically important because large organizations often require ongoing implementation support rather than standalone model access.

The new venture also gives OpenAI a larger operational footprint inside customer environments, potentially strengthening long-term relationships and increasing dependence on its infrastructure and models.