Alphabet CEO Eyes Bigger AI Investments as Opportunities Grow

Alphabet is ramping up large-scale AI investments as its early bet on SpaceX approaches a potential $100 billion return. CEO Sundar Pichai says the AI boom is creating new opportunities.

By Samantha Reed, edited by Maria Konash
Alphabet CEO eyes bigger AI investments as valuations surge, with Pichai signaling new growth opportunities. Image: Stripe

Alphabet is preparing to expand its direct investments in artificial intelligence startups, buoyed by massive gains from earlier bets such as SpaceX. CEO Sundar Pichai said the company sees a growing number of opportunities to deploy capital as AI reshapes the technology landscape. His comments come as Alphabet’s 2015 investment in SpaceX could be worth around $100 billion, depending on future valuation milestones tied to a potential IPO.

Speaking in a conversation published Tuesday, Pichai pointed to SpaceX and Anthropic as examples of how early investments can scale alongside major technology shifts. Alphabet initially invested $900 million in SpaceX at a $12 billion valuation. Following a merger between SpaceX and xAI earlier this year, the combined entity has been valued as high as $1.25 trillion, with reports suggesting a future IPO could target $1.75 trillion. If Alphabet has maintained its stake, the return would rank among the most successful venture-style investments in the company’s history.

The company is now adapting its investment strategy to match the scale of the AI boom. Rather than relying solely on its venture arms GV and CapitalG, Alphabet is increasingly deploying capital directly from its balance sheet. This approach mirrors moves by other major technology firms, including Nvidia, Microsoft, and Amazon, as AI startups require significantly larger funding rounds than traditional venture deals.

Anthropic illustrates this shift. Alphabet invested $300 million in the AI startup in 2023, followed by an additional $2 billion later that year. Its total investment now exceeds $3 billion, with a reported ownership stake of about 14%. Over the same period, Anthropic’s valuation has surged to roughly $380 billion, reflecting rapid growth in demand for generative AI systems. The partnership also has strategic value, as Anthropic relies on Google’s cloud infrastructure and tensor processing units to run its models.

From Venture Bets to Strategic Capital

Pichai’s comments suggest Alphabet is moving beyond passive venture investing toward a more strategic model tied closely to its core business. Large AI investments can drive demand for its cloud services, custom chips, and infrastructure, creating a feedback loop between capital deployment and product growth.

This shift also reflects lessons from past investments. Pichai noted that Alphabet could have invested more heavily in its own autonomous vehicle unit, Waymo, at earlier stages. Waymo has since raised significant external funding, including a $16 billion round this year that valued the company at $126 billion.

A New Era of Mega Investments

The scale of current AI funding rounds is reshaping how tech giants allocate capital. Companies are increasingly making multi-billion-dollar investments to secure strategic partnerships and infrastructure demand, rather than pursuing smaller, diversified venture portfolios.

Alphabet’s experience with companies like Stripe, where early investments have grown significantly in value, reinforces the potential upside of this approach. But the AI era is raising the stakes, with fewer deals and much larger check sizes.

As competition intensifies, Alphabet’s willingness to invest aggressively could determine its position in the next phase of AI development, where capital, infrastructure, and partnerships are becoming as critical as the technology itself.

OpenAI Launches $100 ChatGPT Pro Tier to Boost Codex Usage

OpenAI has introduced a $100 ChatGPT Pro tier offering 5x higher Codex usage limits, targeting developers amid rising competition in AI coding tools.

By Daniel Mercer, edited by Maria Konash
OpenAI unveils $100 ChatGPT Pro tier, expanding Codex access to boost developer adoption. Image: Emiliano Vittoriosi / Unsplash

OpenAI has introduced a new $100-per-month ChatGPT Pro subscription tier aimed at developers, offering significantly expanded usage limits for its Codex coding tools. The move adds a mid-range option between its $20 Plus plan and $200 Pro tier, as the company looks to attract more “vibe coders” and professional developers amid intensifying competition in AI-powered software development.

The primary appeal of the new plan is a fivefold increase in Codex usage compared to the Plus tier. Codex, OpenAI’s agentic coding system, allows users to generate, review, and execute code using natural language. Under the new pricing structure, Pro users receive substantially higher limits across both local and cloud-based tasks within rolling five-hour windows. For example, GPT-5.3-Codex usage increases from 30–150 local messages and 10–60 cloud tasks on Plus to 300–1,500 local messages and 100–600 cloud tasks on the $100 plan.
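Rolling-window limits like the ones described above can be modeled as a simple in-memory rate limiter. The sketch below is illustrative only, not OpenAI's implementation; the class name and the cap of three events are hypothetical:

```python
import time
from collections import deque


class RollingWindowLimiter:
    """Allow at most `limit` events per rolling `window_s` seconds."""

    def __init__(self, limit: int, window_s: float = 5 * 3600):
        self.limit = limit
        self.window_s = window_s
        self.events: deque = deque()  # timestamps of accepted events

    def allow(self, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop events that have aged out of the rolling window.
        while self.events and now - self.events[0] >= self.window_s:
            self.events.popleft()
        if len(self.events) < self.limit:
            self.events.append(now)
            return True
        return False


# Example with explicit timestamps: a cap of 3 tasks per 5-hour window.
lim = RollingWindowLimiter(limit=3, window_s=5 * 3600)
assert all(lim.allow(now=t) for t in (0, 1, 2))  # first three pass
assert not lim.allow(now=3)                      # fourth is blocked
assert lim.allow(now=5 * 3600)                   # oldest event aged out
```

Unlike a fixed daily quota that resets at midnight, a rolling window frees up capacity continuously as old events expire, which matches the "rolling five-hour windows" the article describes.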

OpenAI said the new tier reflects strong demand from developers who need higher throughput for longer or more complex coding sessions. CEO Sam Altman noted the plan was introduced in response to user feedback. At the same time, the company is adjusting usage limits on the Plus tier, shifting toward more evenly distributed access throughout the week rather than extended daily sessions. This effectively reduces peak usage flexibility for lower-tier subscribers while encouraging upgrades.

Competing for Developers

The launch comes as competition in AI coding tools intensifies, particularly with Anthropic. Anthropic’s Claude-based coding products have gained traction in enterprise environments, contributing to rapid revenue growth and setting new benchmarks for autonomous coding systems.

OpenAI’s strategy appears designed to counter that momentum. By offering higher usage limits at a mid-tier price point, the company is targeting developers who require more capacity than casual users but may not need the full $200 plan. The move also follows OpenAI’s hiring of Peter Steinberger, creator of the OpenClaw agent framework, signaling a broader push into agent-driven development workflows.

Shifting Economics of AI Coding

The new pricing structure reflects the changing economics of AI-powered development. High-usage customers, particularly those running automated agents or working with large codebases, can quickly exceed the cost assumptions of lower-tier subscriptions.

Anthropic recently tightened restrictions on how its subscription plans can be used with third-party tools, pushing developers toward API-based pricing. OpenAI, by contrast, is positioning its plans to accommodate heavier usage directly within its subscription model.

The introduction of the $100 tier suggests a broader segmentation strategy, with pricing aligned more closely to usage intensity. As AI coding tools become central to software development workflows, companies are increasingly competing not just on model capability, but on pricing flexibility and developer experience.

The move highlights how the AI coding market is evolving into a high-stakes battleground, where access, limits, and economics may be as important as the underlying technology itself.


Will You Be Able to Invest in OpenAI? IPO Plans Suggest Yes

OpenAI to include retail investors in IPO as demand surges, highlighting shift toward broader ownership and funding for massive AI infrastructure plans.

By Samantha Reed, edited by Maria Konash
OpenAI plans to reserve IPO shares for retail investors, widening access. Image: Aditya Vyas / Unsplash

OpenAI is planning to allocate a portion of shares to individual investors in its expected initial public offering, marking a notable shift in how major AI companies approach public markets. Chief Financial Officer Sarah Friar said the company saw strong demand from retail participants during its latest private funding round and intends to include them in a future listing. The move comes as OpenAI prepares for what could be one of the largest IPOs in the technology sector.

The company has already tested retail appetite through private placements facilitated by banks including JPMorgan Chase, Morgan Stanley, and Goldman Sachs. OpenAI initially aimed to raise $1 billion from individual investors but ultimately secured roughly three times that amount, underscoring intense interest in the company’s growth. According to Friar, demand was so high that one bank’s systems briefly failed after opening investor access to financial materials.

OpenAI’s valuation has surged alongside this demand. The company was recently valued at $852 billion following a record-breaking funding round, up sharply from earlier estimates. While Friar declined to confirm a specific IPO timeline, she indicated that the company is preparing operationally to function like a public entity. Reports suggest a potential listing could occur as early as the fourth quarter.

Broadening Ownership in AI

The decision to include retail investors reflects a broader effort to democratize access to high-growth AI companies, which have historically been dominated by institutional capital. Friar said widespread ownership is important for building trust, particularly as AI becomes more deeply integrated into everyday life.

The approach echoes strategies used by other high-profile companies. Friar pointed to her experience at Block, as well as examples from Tesla and SpaceX, where retail participation played a role in shaping investor engagement. SpaceX is also expected to reserve a significant portion of shares for individual investors in its anticipated IPO.

Funding the Compute Race

OpenAI’s push toward public markets is closely tied to its capital needs. The company plans to spend as much as $600 billion over the next five years on semiconductors and data centers, reflecting the growing importance of compute infrastructure in AI competition. Friar described compute as the company’s most critical asset, directly tied to product performance and revenue growth.

Enterprise adoption is also accelerating. According to company executives, enterprise customers currently account for about 40% of OpenAI’s revenue and are expected to reach parity with consumer revenue by 2026. The shift is driven by businesses moving beyond basic productivity use cases to deploying AI systems that manage complex workflows and teams of autonomous agents.

The scale of OpenAI’s ambitions highlights a broader trend across the industry, where access to capital and infrastructure is becoming a decisive factor. By opening its IPO to retail investors, the company is not only tapping a new funding source but also signaling a more inclusive approach to ownership in the AI era.


OpenAI Limits Access to Cybersecurity AI Model Over Misuse Risks

OpenAI plans to restrict access to a powerful new cybersecurity-focused AI model, reflecting growing concern over misuse as capabilities approach real-world attack potential.

By Marcus Lee, edited by Maria Konash
OpenAI limits release of advanced cybersecurity model, citing rising AI attack risks. Image: Adi Goldstein / Unsplash

OpenAI is preparing to limit access to a new artificial intelligence model with advanced cybersecurity capabilities, signaling rising concern among AI developers about the risks of misuse. The model, still in development, is expected to be released only to a small group of vetted organizations, according to reports. The approach mirrors recent moves by Anthropic, which restricted access to its Mythos Preview model due to similar concerns about its ability to identify and exploit software vulnerabilities.

The shift reflects a broader turning point in AI development. Models are increasingly capable of autonomously analyzing code, discovering weaknesses, and even generating exploits. OpenAI has already begun testing controlled access through its “Trusted Access for Cyber” program, launched earlier this year alongside its GPT-5.3-Codex model. The initiative provides selected organizations with access to more advanced and less restricted systems for defensive cybersecurity work, backed by $10 million in API credits.

Security experts say the capabilities now emerging represent a fundamental change in the threat landscape. AI tools that were once limited to assisting developers are now approaching the level of skilled human hackers. This raises the risk that such systems could be used to target critical infrastructure, including energy grids, water systems, and financial networks. Industry leaders warn that the timeline for widespread availability of these capabilities may be measured in months rather than years.

A Shift Toward Controlled Deployment

The decision to restrict access highlights a growing tension between innovation and safety. AI companies are under pressure to advance model capabilities while preventing misuse. Limiting access to trusted partners allows developers to study risks and refine safeguards before broader release.

This approach resembles established practices in cybersecurity, where vulnerabilities are disclosed gradually to allow time for patches before public exposure. Some experts argue that staggered deployment of powerful AI models may become standard as capabilities continue to advance.

At the same time, there are limits to how much control companies can maintain. Researchers note that existing publicly available models are already capable of identifying certain vulnerabilities, suggesting that the underlying capabilities are spreading across the industry.

An Irreversible Turning Point

The move by OpenAI and Anthropic underscores a growing consensus that AI has crossed a critical threshold in cybersecurity. Once these capabilities exist, they cannot easily be contained. Even if leading companies restrict access, similar models are likely to emerge elsewhere.

For enterprises and governments, the implication is clear: defenses must evolve quickly. Organizations may need to adopt AI-driven security tools at scale to keep pace with increasingly automated threats.

While it remains unclear whether OpenAI will eventually release the model more broadly, the current strategy reflects a cautious approach to a rapidly changing risk environment. The balance between openness and control is likely to remain a defining issue as AI systems become more powerful and more widely deployed.

OpenAI Pauses UK Stargate Project Over Energy, Regulation

OpenAI has paused its Stargate AI infrastructure project in the UK, citing high energy costs and regulatory uncertainty. The move raises questions about the country’s AI ambitions.

By Olivia Grant, edited by Maria Konash
OpenAI halts UK Stargate project over energy and regulatory hurdles, clouding AI infrastructure plans. Image: Aron Van de Pol / Unsplash

OpenAI has paused its planned Stargate artificial intelligence infrastructure project in the United Kingdom, pointing to high energy costs and regulatory uncertainty as key obstacles. The project, announced in 2025 as part of a broader push to expand AI compute capacity in Europe, was expected to deploy thousands of GPUs in partnership with Nscale and NVIDIA. The decision underscores the growing importance of energy pricing and policy clarity in determining where large-scale AI infrastructure is built.

Stargate UK was initially positioned as a cornerstone of the country’s AI strategy. OpenAI had planned to deploy up to 8,000 GPUs in the first phase, with the potential to scale to 31,000 over time. The infrastructure was intended to support advanced AI workloads locally, including applications in public services, finance, and national security. Sites under consideration included locations such as Cobalt Park in northeast England, part of a designated AI growth zone.

However, the economics of the project have become increasingly challenging. Industrial electricity prices in the UK are among the highest globally, and access to grid capacity has been a persistent bottleneck for large data center developments. These factors have made it difficult to justify long-term investment in energy-intensive AI infrastructure. OpenAI said it would revisit the project when conditions improve, suggesting the pause may not be permanent.

Regulation Adds Uncertainty

In addition to cost pressures, regulatory developments have added complexity. UK policymakers are currently debating new rules governing how AI models can use copyrighted content. Proposals to allow broader use of such material have faced strong opposition from the creative industries, leading to delays and reconsideration of the framework.

The uncertainty around future rules creates additional risk for companies planning large infrastructure investments, particularly those tied to training and deploying generative AI systems. OpenAI indicated that clearer regulatory conditions would be necessary before moving forward with Stargate UK.

Implications for UK AI Strategy

The pause raises questions about the UK’s ability to compete in the global race for AI infrastructure. While the government has positioned the country as a potential leader in AI, the combination of high energy costs and evolving regulation may push companies to invest elsewhere.

Despite the setback, OpenAI said it remains committed to the UK market. The company continues to invest in local talent and maintain its research presence in London, while working with the government under a previously signed agreement to support AI adoption in public services.


Anthropic Introduces Managed Agents to Scale Long-Running AI Tasks

Anthropic has launched Managed Agents, a new system designed to run long-horizon AI tasks by separating reasoning, execution, and memory layers. The approach aims to improve reliability and scalability.

By Daniel Mercer, edited by Maria Konash
Anthropic launches Managed Agents, separating reasoning from execution for scalable, long-running AI tasks. Image: Anthropic

Anthropic has introduced Managed Agents, a new system architecture designed to support long-running AI tasks by separating core components of agent behavior.

The approach decouples what the company describes as the “brain” of an AI system from its execution environment and memory, allowing each layer to operate independently. The system is now available as part of Anthropic’s Claude platform and is aimed at developers building complex, multi-step AI workflows.

The architecture breaks AI agents into three main components: the session, which stores a durable log of events; the harness, which orchestrates model calls and tool usage; and the sandbox, where code execution and external actions take place. By separating these layers, Anthropic aims to avoid a common problem in AI systems where tightly coupled infrastructure becomes fragile as models evolve. Earlier designs placed all components in a single environment, making failures harder to diagnose and recover from.
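The three-layer split can be sketched roughly as follows. The component names (session, harness, sandbox) come from Anthropic's description, but the interfaces and event format below are hypothetical illustrations, not the actual Claude platform API:

```python
from dataclasses import dataclass, field
from typing import Any


@dataclass
class Session:
    """Durable, append-only log of everything that happened in a task."""
    events: list = field(default_factory=list)

    def append(self, event: dict) -> None:
        self.events.append(event)


class Sandbox:
    """Isolated environment where code execution and tool calls run."""

    def run(self, code: str) -> str:
        # Placeholder: a real sandbox would execute this in isolation.
        return f"executed: {code!r}"


class Harness:
    """Orchestrates model calls and tool usage, logging every step."""

    def __init__(self, session: Session, sandbox: Sandbox):
        self.session = session
        self.sandbox = sandbox

    def step(self, code: str) -> str:
        result = self.sandbox.run(code)
        # Record the outcome in the durable log before returning it.
        self.session.append(
            {"type": "tool_result", "code": code, "result": result}
        )
        return result


session = Session()
harness = Harness(session, Sandbox())
harness.step("print('hello')")
assert len(session.events) == 1  # the step is durably logged
```

Because the harness only ever touches the sandbox through a narrow interface and records everything in the session, either of the other two layers can be replaced without invalidating the log.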

One key motivation behind the redesign is the rapid improvement of AI models themselves. Anthropic noted that assumptions embedded in earlier systems, such as workarounds for model limitations, can quickly become outdated. For example, previous models required interventions to prevent premature task completion due to context limits, but newer models no longer exhibit the same behavior. Managed Agents is designed to remain stable even as such capabilities change.

Decoupling for Reliability and Scale

The new system treats execution environments as interchangeable resources rather than fixed components. If a sandbox fails, the system can spin up a new one without disrupting the overall task. Similarly, the orchestration layer can restart independently by reconnecting to the session log, which acts as a persistent source of truth. This design reduces downtime and simplifies debugging, particularly for long-running or complex processes.
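Using the session log as the source of truth means a restarted orchestrator can reconstruct task state by replaying events, resuming an interrupted step rather than repeating completed work. A minimal sketch, with a hypothetical event format:

```python
from typing import Any


def replay(session_log: list) -> dict:
    """Rebuild in-memory task state by replaying the durable event log.

    A restarted orchestrator calls this instead of trusting any
    volatile state that was lost with the old process.
    """
    state: dict = {"completed_steps": [], "pending": None}
    for event in session_log:
        if event["type"] == "step_started":
            state["pending"] = event["step"]
        elif event["type"] == "step_done":
            state["completed_steps"].append(event["step"])
    # A step that both started and finished is not pending anymore.
    if state["pending"] in state["completed_steps"]:
        state["pending"] = None
    return state


log = [
    {"type": "step_started", "step": "fetch_data"},
    {"type": "step_done", "step": "fetch_data"},
    {"type": "step_started", "step": "run_tests"},  # crashed mid-step
]
state = replay(log)
assert state["completed_steps"] == ["fetch_data"]
assert state["pending"] == "run_tests"  # resume here, don't redo fetch
```

This is the same event-sourcing idea used in durable workflow engines: as long as the log survives, every other component is disposable.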

The decoupling also improves security. In earlier setups, sensitive credentials could be exposed within the same environment where AI-generated code was executed. Managed Agents separates these concerns, storing credentials in secure vaults and limiting direct access from execution environments. This reduces the risk of misuse, including potential prompt injection attacks.
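One way to realize that separation, sketched here with hypothetical names (a "github" credential, HMAC-derived tokens), is for the orchestration layer to hand the sandbox only a derived, scope-bound token while the raw secret never leaves the vault:

```python
import hashlib
import hmac
import secrets


class CredentialVault:
    """Holds long-lived secrets; only the orchestration layer talks to it.

    The sandbox receives a derived, single-purpose token instead of
    the underlying credential.
    """

    def __init__(self):
        self._secrets = {"github": secrets.token_hex(32)}

    def scoped_token(self, name: str, scope: str) -> str:
        # Derive a scope-bound token via HMAC; the raw secret is never
        # returned to the caller.
        raw = self._secrets[name]
        return hmac.new(
            raw.encode(), scope.encode(), hashlib.sha256
        ).hexdigest()


vault = CredentialVault()
token = vault.scoped_token("github", scope="read:repo")
# The sandbox only ever sees `token`, so prompt-injected code running
# inside it cannot exfiltrate the underlying secret.
assert token != vault._secrets["github"]
assert len(token) == 64  # hex-encoded SHA-256 digest
```

In a production system the derived token would also carry an expiry and be validated server-side, but the key property is the same: compromising the execution environment yields only a narrow, revocable capability.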

Toward Flexible AI Infrastructure

Anthropic’s design draws inspiration from operating systems, which abstract hardware into stable interfaces that remain consistent even as underlying technology changes. Similarly, Managed Agents introduces standardized interfaces that allow different components to evolve independently.

This flexibility extends to performance. By separating reasoning from execution, the system can start generating responses without waiting for full environment setup, reducing latency. Anthropic said this approach has significantly improved time-to-first-response metrics in internal testing.

The system also supports more complex configurations, including multiple AI agents working in parallel and interacting with multiple execution environments. This allows developers to build more sophisticated workflows without being constrained by a single runtime environment.
