OpenAI Raises $110 Billion at $730 Billion Valuation

OpenAI secured $110 billion in new funding at a $730 billion pre-money valuation, backed by SoftBank, NVIDIA, and Amazon to expand AI infrastructure and global reach.

By Maria Konash
OpenAI raises $110B from SoftBank, NVIDIA, and Amazon. Photo: Zac Wolff / Unsplash

OpenAI announced $110 billion in new investment at a $730 billion pre-money valuation, marking one of the largest private funding rounds in technology history. The round includes $30 billion each from SoftBank Group Corp and NVIDIA, and $50 billion from Amazon. Additional financial investors are expected to join as the round progresses.

The company said the funding will support rising global demand for artificial intelligence products across consumers, developers, and enterprises. OpenAI identified compute, distribution, and capital as the core requirements to scale access to its AI systems worldwide.

As part of the announcement, OpenAI signed a multi-year strategic partnership with Amazon and expanded its collaboration with NVIDIA to secure next-generation inference and training infrastructure.

Infrastructure Expansion and Strategic Partnerships

Under the NVIDIA agreement, OpenAI will use 3 gigawatts of dedicated inference capacity and 2 gigawatts of training capacity on Vera Rubin systems. This builds on Hopper and Blackwell systems already deployed across partners including Microsoft, Oracle Cloud Infrastructure, and CoreWeave. The expanded compute footprint is designed to accelerate both model training and real-time deployment at global scale.

The Amazon partnership focuses on accelerating AI adoption among enterprises, startups, and consumers. OpenAI said the collaboration strengthens its distribution channels and infrastructure capabilities while deepening integration across cloud environments.

Product Growth Across Consumer and Enterprise

The funding follows significant growth across OpenAI’s product portfolio. Codex, the company’s AI coding system, has seen weekly users more than triple since the start of the year to 1.6 million. The tool enables individuals to build and deploy software workflows that previously required larger engineering teams.

ChatGPT remains the company’s largest consumer-facing product, with more than 900 million weekly active users and over 50 million subscribers. OpenAI reported that January and February are on track to be the strongest months for new subscriber additions in its history. The company said product performance continues to improve with faster responses, greater reliability, and stronger safety systems as usage scales.

In the enterprise segment, more than nine million paying business users rely on ChatGPT for workplace applications. Organizations across sectors are deploying AI systems in engineering, support, finance, sales, and operations. OpenAI’s Frontier platform supports enterprise customers in building and managing AI-powered workflows.

Foundation Impact

The new valuation increases the value of the OpenAI Foundation’s stake in OpenAI Group to more than $180 billion. The company said the strengthened balance sheet will expand philanthropic capacity in areas including health research and AI resilience.

Chief Executive Sam Altman said the partnerships reflect a shared ambition to scale reliable and broadly useful AI systems globally. The funding positions OpenAI to expand infrastructure capacity and accelerate deployment as frontier AI moves into daily use.

Alibaba’s Open-Source HappyHorse Model Tops Global AI Video Leaderboard

HappyHorse-1.0, an open-source AI video model, has topped global benchmarks, outperforming leading proprietary systems and signaling a shift in the video generation market.

By Samantha Reed Edited by Maria Konash
HappyHorse-1.0 tops benchmarks, intensifying competition between open and proprietary AI video models. Image: Detail.co / Unsplash

Alibaba’s open-source AI video model, HappyHorse-1.0, has surged to the top of global performance rankings, outperforming leading proprietary systems and shaking up the rapidly evolving video generation market. The model now leads the Artificial Analysis Video Arena leaderboard in multiple categories, surpassing ByteDance’s Seedance 2.0 by a significant margin in blind user evaluations.

HappyHorse-1.0 achieved between 1333 and 1357 Elo points in text-to-video generation, beating its closest competitor by nearly 60 points. It also set a new record in image-to-video tasks with scores exceeding 1390 Elo, while ranking second in more complex audio-inclusive benchmarks. The results are notable not only for performance, but because the model is fully open source with commercial licensing, making its capabilities broadly accessible.
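The Arena scores cited above are Elo-style ratings derived from blind pairwise comparisons. As a rough guide to what the margins mean, here is a minimal Python sketch of the standard Elo formulas (the generic textbook update, not Artificial Analysis’s exact methodology):

```python
def elo_expected(r_a: float, r_b: float) -> float:
    """Expected win probability for a model rated r_a against one rated r_b."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def elo_update(r_a: float, r_b: float, score_a: float, k: float = 32.0):
    """Updated ratings after one blind comparison.

    score_a is 1.0 if A wins, 0.0 if A loses, 0.5 for a tie.
    """
    e_a = elo_expected(r_a, r_b)
    new_a = r_a + k * (score_a - e_a)
    new_b = r_b + k * ((1.0 - score_a) - (1.0 - e_a))
    return new_a, new_b

# A ~60-point lead (e.g. 1357 vs. 1297) implies roughly a 58% win rate
# in head-to-head evaluations.
p = elo_expected(1357, 1297)
```

Under this formula, a 60-point gap corresponds to the leader winning just under six of every ten blind matchups, which is why it counts as a significant margin on a crowded leaderboard.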

The system uses a 15-billion-parameter Transformer architecture designed to generate synchronized audio and video in a single pass. It supports features such as native lip-sync across multiple languages, including Mandarin, English, and Japanese, and can produce 1080p video in under a minute using a single NVIDIA H100 GPU. The full model weights, along with distilled versions and supporting tools, have been released publicly, allowing developers to run the system locally.

HappyHorse-1.0 was developed by an independent research team with roots in Alibaba Group’s former Taotian research unit and led by Zhang Di, previously a senior executive at Kuaishou. The team emphasized a focus on real-world user preference in evaluation, rather than traditional benchmark optimization.

Open Source Gains Ground

The model’s success highlights a broader shift in the AI industry, where open-source systems are increasingly competitive with proprietary offerings. Historically, leading performance in areas like video generation has been dominated by closed models developed by large technology companies. HappyHorse-1.0 suggests that smaller, independent teams can now rival or exceed those capabilities.

This dynamic mirrors trends seen in other areas of AI, including language models and image generation, where open ecosystems have accelerated innovation and lowered barriers to entry. By releasing full model weights and tools, the developers are enabling rapid experimentation and customization across industries.

Implications for the AI Video Market

The emergence of a high-performing open-source video model could intensify competition among AI providers, particularly in creative and media applications. Lower-cost access to advanced video generation may benefit startups and developers, while putting pressure on proprietary platforms to differentiate through features, integration, or performance.

At the same time, the availability of powerful video generation tools raises questions around misuse, content authenticity, and regulation. As capabilities improve, ensuring responsible deployment will remain a key challenge for both developers and policymakers.

HappyHorse-1.0’s rapid rise signals that the balance of power in AI video may be shifting, with open-source innovation playing an increasingly central role in shaping the next phase of the market.

AI & Machine Learning, News

A Zuckerberg AI Avatar Could Soon Talk to Meta Employees

Meta is reportedly building an AI-powered version of Mark Zuckerberg to interact with employees, as part of its broader push into advanced AI systems.

By Samantha Reed Edited by Maria Konash
Meta builds AI avatar of Zuckerberg, underscoring its shift from metaverse to AI. Image: Farhat Altaf / Unsplash

Meta Platforms is developing an AI-powered digital version of CEO Mark Zuckerberg that could interact with employees when he is unavailable, according to reports. The avatar is expected to be trained on Zuckerberg’s voice, appearance, and communication style, marking a new step in Meta’s broader effort to integrate artificial intelligence into both internal operations and consumer-facing products.

The project comes as Meta accelerates its investment in AI, with plans to spend between $115 billion and $135 billion on AI-related infrastructure in 2026 alone. The company has also been aggressively hiring talent for its newly formed Superintelligence Labs, led by Alexandr Wang. The division recently launched Muse Spark, a multimodal AI model designed to compete with leading systems in reasoning and agent-based tasks, with additional models expected later this year.

Meta’s ambitions extend beyond language models. The company is developing photorealistic 3D avatars capable of natural conversation, with the Zuckerberg replica serving as an early test case. If successful, the technology could expand to allow creators and public figures to build AI versions of themselves, potentially opening new forms of digital interaction and content creation.

AI Meets Leadership and Identity

The concept of AI-generated executive avatars reflects a broader trend among technology leaders experimenting with digital replicas. Executives such as Sebastian Siemiatkowski and Eric Yuan have explored similar ideas, using AI versions of themselves for tasks like earnings presentations or meetings. Investor Ray Dalio has also deployed a digital avatar to share his views online.

Meta is reportedly also working on an AI agent tailored for executive use, capable of helping Zuckerberg manage daily tasks such as retrieving information and coordinating decisions. Together, these efforts point to a future where AI tools augment leadership roles rather than simply supporting general productivity.

From Metaverse to AI

The initiative highlights Meta’s strategic pivot away from its earlier focus on the metaverse toward artificial intelligence as its core priority. Avatars were once central to Zuckerberg’s vision for virtual worlds, but early efforts drew criticism for limited realism and failed to gain widespread traction.

Now, advances in AI are enabling more sophisticated and lifelike digital representations, potentially reviving aspects of that vision in a new form. Meta has previously experimented with personality-driven AI chatbots modeled on celebrities, though the company faced backlash over safety concerns, particularly for younger users.

The development of an AI version of Zuckerberg underscores how far the company is willing to push the boundaries of identity and interaction in the AI era. Whether the concept gains traction internally or expands into consumer products may depend on how effectively Meta can balance innovation with user trust and ethical considerations.

AI & Machine Learning, Enterprise Tech, News

Anthropic Adds Ultraplan to Claude Code, Moving Planning to the Cloud

Anthropic has introduced Ultraplan for Claude Code, a new feature that shifts software planning tasks from the terminal to the cloud so developers can review, revise, and execute agent-generated plans more flexibly.

By Laura Bennett Edited by AIstify Team
Anthropic has added Ultraplan to Claude Code, moving planning workflows into the cloud so developers can keep their terminal free while reviewing and refining agent-generated plans. Photo: Anthropic

Anthropic has introduced Ultraplan for Claude Code, a new feature that moves the planning phase of software work out of the terminal and into Anthropic’s cloud. The feature is designed to let developers start a task locally, have Claude draft the plan remotely, and then review or revise it in a browser before deciding where execution should happen.

Ultraplan is currently in research preview and requires Claude Code version 2.1.91 or later. To use it, developers need a Claude Code account on the web and a GitHub repository. Because the feature depends entirely on Anthropic’s cloud infrastructure, it is not available through Amazon Bedrock, Google Cloud Vertex AI, or Microsoft Foundry.

The new workflow reflects a broader shift in how AI coding tools are being used. As models become better at handling long-running and multi-step development work, the challenge is no longer just code generation. It is also about how developers supervise, comment on, and iterate with an agent when the work spans multiple stages. Ultraplan is Anthropic’s answer to that problem, offering a more structured planning surface than a local terminal window.

From the command line, users can launch Ultraplan in several ways, including through the /ultraplan command, by mentioning the keyword in a prompt, or by choosing to refine an existing local plan through the web. Once the request is sent, Claude begins researching the codebase and drafting the plan remotely while the local terminal remains available for other work. Developers can monitor status from the CLI and open the linked session when Claude needs clarification or has finished a draft.

In the browser, the generated plan appears in a dedicated review interface. Users can leave inline comments on specific passages, react with emoji, and jump through sections using an outline sidebar. Anthropic says this approach makes it easier to provide targeted feedback than replying to an entire draft inside the terminal. Claude can then revise the plan in response, allowing repeated review cycles before work begins.

Once the plan is approved, developers can choose whether execution happens in the same cloud session or returns to the terminal. If they continue in the browser, Claude implements the plan remotely and the user can later review the diff and open a pull request. If they send the plan back locally, the cloud session is archived and the terminal presents options to implement it immediately, start a new session around the plan, or save it to a file for later use.

The feature highlights Anthropic’s broader push to make Claude Code more useful for extended, multi-step software workflows rather than simple one-off prompts. By separating planning from execution and moving it to the web, Ultraplan gives developers a more flexible way to oversee complex work without tying up their local environment.

AI & Machine Learning, Cloud & Infrastructure, News

What Do We Really Think About AI? This Movie Tries to Answer

A new documentary featuring top AI leaders explores the tension between optimism and fear surrounding artificial intelligence, highlighting public uncertainty about its future.

By Samantha Reed Edited by Maria Konash
New AI documentary spotlights industry leaders debating risks, reflecting public uncertainty about AI’s future. Image: Sam McGhee / Unsplash

A new documentary, The AI Doc: Or How I Became an Apocaloptimist, is bringing the debate around artificial intelligence to a broader audience, exploring both the promise and anxiety surrounding the technology. Directed by filmmaker Daniel Roher alongside Charlie Tyrell, the film premiered in theaters on March 27 and follows Roher’s personal journey as he grapples with the implications of AI while preparing to become a parent.

The documentary features interviews with some of the most influential figures in AI, including Sam Altman, Dario Amodei, and Demis Hassabis. The filmmakers conducted dozens of on-camera interviews and hundreds more off the record, aiming to capture a wide range of perspectives across the industry. Despite outreach to many high-profile figures, including Mark Zuckerberg and Elon Musk, not all agreed to participate.

Rather than focusing on breaking news, the filmmakers chose to explore deeper, more enduring questions about AI. Early in production, rapid developments in the industry, including leadership turmoil at OpenAI, made it clear that chasing headlines would quickly date the film. Instead, the project centers on fundamental issues such as what AI is, how it works, and what it means for society.

Between Optimism and Fear

A central theme of the documentary is the polarized way AI is often discussed. According to the filmmakers, public perception tends to swing between two extremes: AI as a transformative force for good or as an existential threat. The film attempts to guide viewers through that tension, presenting a more nuanced view that acknowledges both possibilities.

Producers said one of the most revealing aspects of the process was asking experts to explain AI in simple terms. Even highly accomplished scientists and executives struggled to distill complex concepts into accessible explanations, underscoring the gap between technical understanding and public awareness.

A Broader Public Conversation

The filmmakers said audience reactions have highlighted how differently people perceive AI depending on their background. Screenings have sparked discussions ranging from skepticism about the technology’s impact to concerns about its concentration among a small group of companies.

The project also reflects a shift in how AI is entering public discourse. As tools like ChatGPT and Claude become more widely used, people are interacting with AI systems directly, often without fully understanding how they work or their limitations.

For the filmmakers, the takeaway is less about providing definitive answers and more about encouraging broader participation in the conversation. As AI continues to evolve rapidly, they argue that its future should not be shaped solely by technology companies, but by a wider public engaged in questioning, debating, and understanding its impact.

AI & Machine Learning, News

Anthropic Rolls Out Claude Cowork With Enterprise Controls

Anthropic has made Claude Cowork generally available across paid plans, adding enterprise controls and analytics to support company-wide AI deployment.

By Daniel Mercer Edited by Maria Konash
Anthropic expands Claude Cowork with enterprise controls, analytics, and integrations for large-scale adoption. Image: Claude

Anthropic has made its Claude Cowork assistant generally available across all paid plans, alongside a new set of enterprise controls designed to support organization-wide deployment. The update reflects growing adoption of AI tools beyond engineering teams, as companies increasingly integrate assistants into everyday workflows such as reporting, research, and internal collaboration.

Claude Cowork, a desktop-based AI assistant for macOS and Windows, is positioned as a non-developer counterpart to Anthropic’s coding tools. Unlike browser-based chat interfaces, it can access local files directly and integrate with enterprise systems, enabling more context-aware workflows. Early usage data shows that the majority of activity comes from non-technical teams, including operations, marketing, finance, and legal, where employees are using the tool to handle supporting tasks around core business functions.

To support broader rollout, Anthropic has introduced governance features aimed at IT and admin teams. These include role-based access controls, allowing organizations to define which teams can use specific AI capabilities, as well as group-level spending limits to manage costs. The company has also added usage analytics, enabling administrators to track adoption patterns, active users, and workflow trends across teams.
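To make the shape of these controls concrete, the sketch below models role-based capability checks and group-level spend limits in plain Python. The role names, capabilities, and class structure are invented for illustration; they are not Anthropic’s actual admin API.

```python
from dataclasses import dataclass

# Hypothetical mapping from team role to permitted AI capabilities.
# These names are illustrative, not Anthropic's actual taxonomy.
ROLE_CAPABILITIES = {
    "engineering": {"chat", "file_access", "code_execution"},
    "finance": {"chat", "file_access"},
    "marketing": {"chat"},
}

@dataclass
class Group:
    """A team with a role and a monthly spending cap (illustrative)."""
    name: str
    role: str
    spend_limit_usd: float
    spent_usd: float = 0.0

    def can_use(self, capability: str) -> bool:
        """Role-based access check: is this capability enabled for the group?"""
        return capability in ROLE_CAPABILITIES.get(self.role, set())

    def record_spend(self, amount_usd: float) -> bool:
        """Reject a request that would push the group past its spend limit."""
        if self.spent_usd + amount_usd > self.spend_limit_usd:
            return False
        self.spent_usd += amount_usd
        return True

ops = Group(name="ops-team", role="finance", spend_limit_usd=100.0)
```

In this model, an admin enables capabilities per role and caps cost per group, which is the same two-axis structure (what teams can do, and how much they can spend) that the update describes.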

Enterprise-Ready Controls and Visibility

The update places a strong emphasis on visibility and control. Claude Cowork now integrates with OpenTelemetry, allowing organizations to monitor AI activity through standard security and observability tools. Events such as tool usage, file access, and connector interactions can be tracked and analyzed, helping companies maintain oversight as AI becomes embedded in workflows.
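OpenTelemetry represents activity as timestamped, named events with key-value attributes. The stdlib-only sketch below mimics that record shape for the activity types mentioned (tool usage, file access, connector interactions); the event names and attribute keys are invented for illustration, not Anthropic’s actual telemetry schema.

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class AuditEvent:
    """An OpenTelemetry-style event: a name, attributes, and a timestamp."""
    name: str          # e.g. "tool_usage", "file_access" (illustrative names)
    attributes: dict
    timestamp: float = field(default_factory=time.time)

class AuditLog:
    """Collects events and serializes them for an observability pipeline."""

    def __init__(self) -> None:
        self.events: list[AuditEvent] = []

    def record(self, name: str, **attributes) -> None:
        self.events.append(AuditEvent(name, attributes))

    def export_json(self) -> str:
        """Serialize all events as JSON for downstream analysis tools."""
        return json.dumps([asdict(e) for e in self.events])

log = AuditLog()
log.record("file_access", path="/reports/q1.xlsx", mode="read")
log.record("connector_interaction", connector="zoom", action="fetch_transcript")
```

Exporting events in a standard structured form is what lets existing security and observability tooling ingest AI activity alongside the rest of an organization’s telemetry.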

Anthropic has also expanded its connector ecosystem. A new integration with Zoom enables the assistant to pull meeting summaries, transcripts, and action items directly into workflows. Administrators can configure permissions at a granular level, including restricting write access while allowing read-only interactions. These controls are designed to address concerns around data security and unintended actions by AI systems.

From Tools to Workflows

The rollout highlights a broader shift in how organizations use AI. Rather than asking isolated questions, employees are increasingly delegating multi-step tasks to assistants. Early adopters have used Claude Cowork to automate processes such as performance reviews, incident response workflows, and internal reporting dashboards by connecting the tool to systems like Slack, Jira, and internal databases.

This transition from query-based usage to task execution mirrors trends seen in developer tools, where AI agents are taking on more complex responsibilities. For Anthropic, expanding Cowork across all paid tiers positions the company to capture a wider share of enterprise demand.

As AI assistants become more deeply embedded in business operations, the focus is shifting from raw capability to governance, integration, and reliability. Claude Cowork’s expansion reflects that evolution, with Anthropic aiming to balance increased adoption with the controls needed to manage AI at scale.
