OpenAI Launches Codex for Windows

OpenAI has released the Windows version of its Codex agentic coding application following strong adoption on Mac. The tool brings AI-driven coding agents and native Windows workflows to more developers.

By Daniel Mercer. Edited by Maria Konash.
OpenAI launches Codex for Windows. Photo: OpenAI

OpenAI has released a Windows version of its Codex agentic coding application, expanding the reach of the AI-powered development tool beyond its initial Mac launch earlier this year. The company said the Windows release was designed specifically for native Windows developer environments rather than as a simple adaptation of the Mac version.

Codex has seen strong adoption since its debut. According to OpenAI, the Mac version surpassed one million downloads within its first week and now has about 1.6 million weekly active users. Interest in the Windows release also appears significant, with more than 500,000 developers joining the waitlist ahead of the launch.

The application acts as an AI-driven development assistant that manages coding agents capable of performing tasks such as generating code, modifying repositories, and automating workflows. OpenAI describes the interface as a command center for agents rather than a traditional coding environment. Developers can monitor agent activity, review code changes, and switch to external development tools when needed.

Designed for Native Windows Workflows

OpenAI said the Windows version was built to integrate with existing Windows developer workflows. The app includes native sandboxing capabilities that allow agents to operate securely within Windows environments while limiting system access.

By default, Codex uses the built-in Windows sandbox and applies operating-system-level security controls such as restricted tokens and file-system access management. These safeguards allow AI agents to run commands within environments like PowerShell, Microsoft’s default command shell.
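The general principle behind this kind of sandboxing can be sketched in a few lines: run each agent command as a child process that only sees an explicitly allow-listed environment and working directory. The sketch below is a simplified, hypothetical cross-platform analogy, not Codex’s actual restricted-token mechanism; the function name and allow-list are invented for illustration.

```python
import subprocess

# Hypothetical illustration of OS-level sandboxing: the child process
# only receives an allow-listed environment, so secrets (API keys,
# tokens) in the parent's environment never leak into agent commands.
# This is a simplified analogy, not Codex's Windows restricted-token
# implementation.
ALLOWED_ENV = {"PATH": "/usr/bin:/bin", "LANG": "C.UTF-8"}

def run_sandboxed(cmd, workdir="/tmp"):
    """Run cmd with a minimal environment and a confined working directory."""
    return subprocess.run(
        cmd,
        env=ALLOWED_ENV,      # replace, don't inherit, the parent environment
        cwd=workdir,          # confine the command's working directory
        capture_output=True,
        text=True,
        timeout=30,
    )

result = run_sandboxed(["printenv", "PATH"])
print(result.stdout.strip())  # only the allow-listed PATH is visible
```

A real sandbox layers on much more (restricted tokens, file-system ACLs, network policy), but the core design choice is the same: deny by default and grant the child process only what it is explicitly allowed.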

Developers can also configure Codex to run within the Windows Subsystem for Linux (WSL), enabling access to Linux-based developer tools while maintaining the application’s AI agent capabilities.

The Windows release largely mirrors the Mac version in terms of features. It supports automated coding skills, repository worktrees, and the ability to manage multiple tasks through AI agents. Some Windows-specific capabilities are included as well, such as support for WinUI development workflows used in Windows application development.

AI Models and Developer Access

Codex for Windows runs primarily on OpenAI’s GPT-5.3-Codex model, which is optimized for software development tasks. Developers can switch between other models depending on the requirements of a task, including GPT-5.2-Codex, GPT-5.1-Codex-Max, and GPT-5.1-Codex-Mini for faster operations. Users can also adjust the reasoning level applied by the model.

The Windows version is available to all ChatGPT users across Free, Go, Plus, Pro, Business, Enterprise, and Education tiers.


Google Expands AI Mode Canvas to All US Search Users

Google has rolled out Canvas in AI Mode to all U.S. users in English, enabling people to create documents, dashboards, and interactive tools directly within Search. The feature provides a dynamic workspace for planning projects and building simple applications.

By Daniel Mercer. Edited by Maria Konash.
Google launches Canvas in AI Mode for U.S. Search, enabling interactive documents, dashboards, and coding tools directly in AI results. Photo: Google

Google has expanded its Canvas feature within AI Mode in Search to all users in the United States using English. The feature introduces a dedicated workspace that allows users to create documents, build tools, and organize projects directly inside Google’s AI-powered search interface.

Canvas functions as a dynamic side panel where users can draft content, develop interactive tools, and manage ongoing projects. The workspace integrates live information from the web and Google’s Knowledge Graph to populate projects with updated data.

The rollout marks another step in Google’s effort to transform search from a traditional query interface into a broader productivity environment powered by generative AI.

Users can access Canvas through the AI Mode tool menu and describe the project or tool they want to create. The system then generates a working prototype inside the Canvas panel, which can be edited and refined through conversational prompts.

Interactive Tools and Coding Capabilities

The updated Canvas environment includes expanded capabilities for both creative writing and coding tasks. Users can draft documents, create dashboards, or build lightweight applications without leaving the Search interface.

One example shared by Google involved an academic scholarship dashboard that aggregates application requirements, deadlines, and award amounts into a single interactive tool. Similar use cases could include study planners, travel itineraries, or data tracking dashboards.

For more advanced users, Canvas also allows access to the underlying code powering generated tools. Developers can view and modify the code directly, enabling customization or further refinement of generated applications.

Google said the feature was designed to support iterative development. After generating an initial prototype, users can test functionality and request changes through follow-up prompts until the tool or document meets their needs.

AI Search as a Productivity Platform

The introduction of Canvas reflects a broader shift in how technology companies are positioning AI assisted search as a platform for creation rather than just information retrieval.

By integrating writing, coding, and project management capabilities into the search experience, Google is competing more directly with AI assistants that function as productivity tools.

The move also aligns with the industry trend toward embedding generative AI into everyday workflows. Rather than requiring separate applications for coding, note-taking, or research, platforms like Canvas aim to consolidate those tasks within a single interface powered by AI models and real-time web data.


Anthropic CEO Criticizes OpenAI Defense Deal as “Safety Theater”

Anthropic CEO Dario Amodei criticized OpenAI’s Pentagon agreement in a memo to staff, calling the company’s safety commitments “safety theater.” The remarks highlight deepening tensions between AI firms over military use of their technology.

By Samantha Reed. Edited by Maria Konash.
Anthropic CEO Dario Amodei calls OpenAI’s Pentagon deal “safety theater,” intensifying the debate over military AI safeguards. Photo: Rob Laughter / Unsplash

Anthropic Chief Executive Dario Amodei has criticized OpenAI’s agreement with the U.S. Department of Defense, describing the company’s safety commitments as “safety theater” in a memo to employees. The comments, reported by The Information, reflect escalating tensions between leading AI developers over how their technology should be used in military contexts.

The dispute follows failed negotiations between Anthropic and the Pentagon over the military’s access to the company’s AI systems. Anthropic had previously secured a $200 million contract with the Department of Defense but declined to expand the partnership after the agency requested broader access to its technology.

According to people familiar with the talks, Anthropic asked the government to formally confirm that its models would not be used to enable mass domestic surveillance of U.S. citizens or to power fully autonomous weapons systems.

When the companies could not reach an agreement, the Pentagon instead signed a deal with OpenAI to deploy its AI models across defense infrastructure.

Debate Over Safeguards and “Lawful Use”

OpenAI said its contract with the Defense Department includes safeguards that align with similar restrictions proposed by Anthropic. The company stated that its systems would not be intentionally used for domestic surveillance of U.S. persons and that the agreement explicitly acknowledges those limitations.

However, Amodei argued in his memo that OpenAI’s messaging misrepresents the situation. He wrote that OpenAI accepted the agreement largely to avoid internal employee pushback rather than to enforce meaningful safeguards.

Amodei also criticized the contract language allowing AI systems to be used for “all lawful purposes,” a phrase Anthropic had rejected during negotiations. Critics have pointed out that legal frameworks can evolve, meaning activities considered unlawful today could become permissible in the future.

OpenAI responded in a blog post that the Defense Department has stated it does not intend to deploy AI for mass surveillance of Americans or for fully autonomous weapons systems.

Industry and Public Reaction

The dispute has become one of the most visible public disagreements among leading AI companies over military deployments of generative AI technologies.

Amodei suggested in his internal message that public sentiment may be shifting in Anthropic’s favor. Data from market intelligence firms showed a sharp increase in ChatGPT app uninstallations after OpenAI announced its Pentagon agreement, while Anthropic’s Claude application climbed in download rankings.

The controversy also unfolds as OpenAI explores further defense partnerships beyond the United States. The company is considering a deal to deploy its AI models across NATO’s unclassified networks following its Pentagon agreement, highlighting how Western defense organizations are increasingly integrating generative AI systems even as debates intensify within the industry over safeguards and governance.


Nvidia Says $100B Investment Into OpenAI Is Likely Off the Table

Nvidia CEO Jensen Huang said the company’s $30 billion investment in OpenAI could be its last before the AI startup pursues an initial public offering. The chipmaker also indicated its $10 billion investment in Anthropic may mark the end of its funding commitments to major AI model developers.

By Samantha Reed. Edited by Maria Konash.
Nvidia CEO says the company’s latest OpenAI investment may be its last before a potential IPO. Photo: BoliviaInteligente / Unsplash

Nvidia may be nearing the end of its large-scale investments in leading artificial intelligence startups. Chief Executive Jensen Huang said the company’s recent $30 billion investment in OpenAI could be its final funding commitment to the ChatGPT developer before a possible public listing.

Speaking at the Morgan Stanley Technology, Media and Telecom Conference, Huang said OpenAI is expected to pursue an initial public offering toward the end of the year. That prospect could limit further private investments from partners such as Nvidia.

“The reason for that is because they’re going to go public,” Huang said, referring to OpenAI’s future funding plans.

Huang also indicated that Nvidia’s $10 billion investment in OpenAI rival Anthropic will likely be its last funding commitment to that company as well. Nvidia previously announced plans to invest in Anthropic alongside Microsoft in late 2025.

The comments highlight Nvidia’s central role in the rapidly expanding artificial intelligence industry, where the company’s graphics processing units power many of the most advanced AI models. By investing directly in leading AI developers, Nvidia has sought to strengthen partnerships with companies that rely heavily on its chips to train and run large language models.

OpenAI Funding and Partnership Dynamics

Nvidia’s $30 billion investment in OpenAI was disclosed as part of a $110 billion funding round announced by OpenAI last week. The round included a $50 billion commitment from Amazon and a $30 billion investment from SoftBank, marking one of the largest private funding rounds in the technology sector.

The financing places OpenAI among the most valuable private technology companies in the world and reflects continued investor enthusiasm for artificial intelligence infrastructure and applications.

Earlier discussions between Nvidia and OpenAI had suggested a much larger investment framework. In September, the two companies referenced a potential $100 billion investment tied to a broader AI infrastructure initiative. However, Huang said that scale of investment is unlikely to move forward.

“The opportunity to invest $100 billion is probably not in the cards,” he said during the conference appearance.

Nvidia had previously signaled uncertainty about the agreement. In a quarterly filing released in November, the company noted that the proposed investment and partnership structure with OpenAI might not ultimately be completed. Similar language appeared in its February filing, stating there was no assurance that a final transaction would take place.

Strategic Implications for the AI Ecosystem

The comments illustrate how Nvidia is balancing its position as both a supplier of critical AI hardware and a strategic investor in companies building large-scale AI systems.

While investments in OpenAI and Anthropic have helped strengthen Nvidia’s ties with leading AI developers, the company’s long term strategy appears increasingly focused on expanding its hardware and infrastructure platform rather than continuing large equity investments in model developers.

OpenAI Developing GitHub Rival for Code Hosting

OpenAI is reportedly developing a new code hosting platform that could compete with Microsoft-owned GitHub. The project follows service disruptions affecting GitHub and reflects OpenAI’s expanding developer platform strategy.

By Daniel Mercer. Edited by Maria Konash.
OpenAI is reportedly developing a GitHub rival as it expands its developer tools and AI infrastructure beyond ChatGPT. Photo: Rubaitul Azad / Unsplash

OpenAI is developing a new code hosting platform that could compete directly with GitHub, according to a report by The Information citing a person familiar with the project. The initiative is in early development, and a working product may still be several months away.

The proposed platform would allow developers to host and manage software repositories, a service currently dominated by Microsoft-owned GitHub. Engineers at OpenAI reportedly began exploring the project after experiencing repeated service disruptions that temporarily made GitHub unavailable in recent months.

The new system could eventually be offered to OpenAI’s existing developer and enterprise customer base, according to the report. Such a move would extend the company’s reach beyond AI models and APIs into core software development infrastructure.

Competition With a Major Partner

If the platform is commercialized, it would represent a notable competitive step by OpenAI against Microsoft, one of its largest investors and strategic partners. Microsoft currently owns GitHub and integrates OpenAI models across several of its software products, including developer tools and enterprise platforms.

The move would highlight the evolving relationship between AI model providers and traditional software platforms. As AI systems increasingly generate, analyze, and modify code, companies are beginning to build integrated ecosystems that combine model access with developer infrastructure.

OpenAI has already expanded its presence in the developer ecosystem through tools such as its API platform and coding-focused AI assistants. A dedicated repository platform could allow the company to more closely integrate code generation, version control, and AI-assisted development workflows.

The reported project comes amid continued growth and investment in OpenAI. The company’s latest funding round reportedly valued it at about $8730 billion after raising roughly $110 billion from investors including SoftBank and major technology firms.

The expansion also reflects intensifying competition across the AI industry as companies seek to control the platforms developers use to build applications. OpenAI has recently deepened its involvement with government technology infrastructure, including agreements to deploy its models within U.S. defense networks and ongoing discussions about deploying AI systems across NATO’s unclassified networks.


Anthropic Nears $20B Revenue Even as US Flags Supply Chain Risks

Anthropic’s annualized revenue has surged to nearly $20 billion even as the U.S. government classifies the company as a supply chain risk. The AI firm plans to challenge the designation while demand for its Claude models continues to grow.

By Daniel Mercer. Edited by Maria Konash.
Anthropic’s revenue nears a $20B run rate as Claude adoption surges, despite a U.S. supply-chain risk designation. Photo: Anthropic

Anthropic is experiencing rapid revenue expansion even as it faces a political and regulatory dispute with the U.S. government. The artificial intelligence company has increased its annualized revenue run rate to more than $19 billion, more than doubling from roughly $9 billion at the end of 2025.

The growth marks a sharp rise from around $14 billion reported only weeks earlier. The surge has been driven by strong adoption of Anthropic’s AI models and developer tools, particularly the programming-focused product Claude Code.

Anthropic, which was recently valued at about $380 billion, has also gained traction among individual users. Its Claude mobile app recently climbed to the top of Apple’s U.S. App Store rankings as debate intensified over AI partnerships with the Pentagon. The ranking shift came as some users reacted to rival OpenAI’s defense agreement by uninstalling ChatGPT and switching platforms.

The company has also expanded its product portfolio with tools such as Claude Cowork, which integrates AI into collaborative software workflows. The launch of these products has disrupted segments of the software-as-a-service market, contributing to volatility among some SaaS company stocks.

Pentagon Conflict and Supply Chain Classification

Despite strong commercial momentum, Anthropic is facing growing pressure from the U.S. Department of Defense. Defense Secretary Pete Hegseth recently designated the company as a supply chain risk, a classification typically reserved for firms linked to geopolitical adversaries.

The designation followed months of negotiations between Anthropic and the Pentagon over how its AI systems could be used by military and intelligence agencies.

Anthropic insisted on maintaining safeguards preventing two specific applications: mass domestic surveillance of U.S. citizens and the use of AI systems in fully autonomous weapons. The company has argued that current AI models are not reliable enough to safely operate without human oversight and that large-scale surveillance of Americans would violate fundamental rights.

U.S. defense officials have previously said the military has no intention of deploying AI for mass surveillance or autonomous weapons but have argued that lawful uses of AI should remain unrestricted.

Legal Challenge and Industry Implications

Anthropic has described the supply chain risk designation as legally unsound and signaled it will challenge the move in court if necessary. The company argues the authority cited by the Defense Department applies only to contracts directly involving the Pentagon and should not restrict broader commercial use of its technology.

Under the company’s interpretation, the designation would not affect private customers or contractors using Claude outside Department of Defense agreements. Anthropic also argues that the designation should not restrict how defense contractors deploy the model in non-Pentagon projects.

Industry observers say the dispute highlights tensions between AI safety policies and national security priorities as governments accelerate adoption of generative AI tools.

Despite the political standoff, Anthropic’s commercial business continues to expand rapidly. Strong enterprise demand for coding and productivity tools built on Claude has driven revenue growth while consumer adoption has surged, reflected in the app’s recent rise to the No. 1 position on the U.S. App Store amid backlash over competing defense AI partnerships. Meanwhile, OpenAI is considering a deal to deploy its AI models across NATO’s unclassified networks following its Pentagon agreement, underscoring how defense alliances are rapidly integrating generative AI technologies even as debates over safeguards and governance continue.
