Selloff Hits AI Sector as OpenAI Faces Growth Scrutiny

AI stocks fell sharply after reports that OpenAI missed growth targets and raised internal financial concerns. The news triggered a ripple effect across cloud and chip companies.

By Samantha Reed. Edited by Maria Konash.
OpenAI growth concerns spark an AI stock selloff affecting cloud providers, chipmakers, and investors amid rising scrutiny of revenue and compute costs. Image: Maxim Hopman / Unsplash

Shares of artificial intelligence companies dropped on April 28 after a report by The Wall Street Journal revealed that OpenAI had missed internal targets for user growth and revenue. The report also cited concerns from CFO Sarah Friar about the company’s ability to sustain future spending on large-scale computing contracts. The news triggered a broad selloff across AI-related stocks, reflecting investor sensitivity to growth signals in the sector. The reaction comes as OpenAI prepares for a potential initial public offering that could value the company at up to $1 trillion.

The impact was felt across companies closely tied to OpenAI’s ecosystem. Shares of Oracle fell 3.4% amid concerns about financing its large data center commitments, including a reported $300 billion cloud deal with OpenAI. CoreWeave, which recently signed a multibillion-dollar contract with OpenAI, also declined. Meanwhile, Arm Holdings dropped more than 6%, reflecting broader pressure on chipmakers linked to AI demand.

Investor reaction extended beyond U.S. markets. SoftBank Group, a major OpenAI backer, saw its shares fall nearly 10% in Tokyo trading. The company has committed billions in funding to OpenAI and has restructured its portfolio to support those investments, including reducing stakes in other technology firms. Market participants expressed concern about the sustainability of such commitments if OpenAI’s growth slows.

Market Reaction

The selloff highlights how closely valuations across the AI sector are tied to expectations around leading companies. OpenAI’s position at the center of the ecosystem means that changes in its outlook can influence sentiment across cloud providers, chip manufacturers, and investors. Even companies with indirect exposure may experience volatility as markets reassess demand for AI infrastructure.

For investors, the episode underscores the risks associated with rapid expansion in AI. Large-scale investments in data centers and computing capacity depend on sustained growth in usage and revenue. Any indication of slower adoption can quickly translate into broader market corrections.

Industry Context

The development comes at a time when AI companies are investing heavily in infrastructure and preparing for major funding events. OpenAI’s anticipated IPO and large-scale partnerships have positioned it as a key driver of industry momentum. At the same time, competitors and partners are committing billions to support AI workloads, increasing financial exposure across the ecosystem.

Recent deals involving cloud providers and semiconductor companies reflect a broader trend toward long-term infrastructure commitments. As the market matures, investors are beginning to scrutinize whether demand will keep pace with spending. The reaction to OpenAI’s reported challenges suggests that confidence in the sector remains closely tied to the performance of its leading players.


OpenAI Turns to Amazon While Loosening Microsoft Ties

OpenAI is deepening ties with Amazon while restructuring its long-standing partnership with Microsoft. The shift reflects growing demand for flexible cloud access and AI infrastructure.

By Maria Konash.
OpenAI expands Amazon ties as its Microsoft deal shifts, signaling a move to multi-cloud AI infrastructure. Image: José Ramos / Unsplash

OpenAI is expanding its relationship with Amazon while simultaneously restructuring its long-standing partnership with Microsoft. The company’s revenue chief, Denise Dresser, said the two developments are unrelated, but analysts view them as part of a broader shift in OpenAI’s cloud strategy. The changes come as AI companies seek greater flexibility to deploy models across multiple infrastructure providers amid surging demand for compute capacity.

OpenAI’s collaboration with Amazon has expanded rapidly in recent months. The companies disclosed a $38 billion commitment for cloud services through Amazon Web Services, followed by Amazon’s pledge to invest up to $50 billion in OpenAI. As part of the arrangement, OpenAI plans to use AWS infrastructure, including custom Trainium chips, and has increased its total spending commitment with Amazon by an additional $100 billion. The partnership also includes joint development of customized AI models for Amazon’s internal teams and products.

At the same time, OpenAI has revised key elements of its agreement with Microsoft. The updated terms remove Microsoft’s exclusive access to OpenAI’s intellectual property and allow OpenAI to serve customers across multiple cloud providers, including Amazon and Google. Revenue-sharing payments from OpenAI to Microsoft will continue through 2030 but are now subject to a cap, while Microsoft will no longer pay a revenue share to OpenAI. The companies maintain that Microsoft remains a primary cloud partner, with OpenAI products still launching first on Azure in most cases.

Strategic Realignment

The evolving partnerships highlight a shift toward multi-cloud strategies in AI. OpenAI’s earlier reliance on Microsoft’s Azure platform is giving way to a more diversified approach, allowing the company to reach enterprise customers across different environments. This flexibility is increasingly important as businesses standardize on different cloud providers and expect interoperability.

For cloud providers, access to leading AI models has become a competitive priority. Amazon’s deeper integration with OpenAI enables it to offer customers direct access to widely used models, while Microsoft continues to leverage its early investment and infrastructure ties. The result is a more fluid ecosystem in which partnerships are less exclusive and more transactional.

Industry Dynamics

The changes reflect broader trends in the AI industry, where infrastructure constraints are driving collaboration even among competitors. Both OpenAI and rivals like Anthropic are securing capacity from multiple cloud providers to meet demand for training and inference workloads. At the same time, cloud companies are diversifying their model offerings, integrating technologies from multiple AI developers.

Despite signs of tension, the relationships remain interdependent. Microsoft continues to be a major investor in OpenAI, while OpenAI relies on its infrastructure and enterprise reach. Similarly, Amazon’s growing role does not replace existing partnerships but adds another layer to the ecosystem. The shift suggests that the future of AI infrastructure will be defined by overlapping alliances rather than exclusive deals.


Musk vs. OpenAI: Trial Tests Nonprofit Vision vs. Commercial Reality

A high-profile trial between Elon Musk and OpenAI leaders has begun, centering on claims the company abandoned its nonprofit mission. The case could reshape governance in leading AI firms.

By Samantha Reed. Edited by Maria Konash.
Musk-OpenAI trial probes nonprofit roots and commercialization, shaping AI governance and competition. Image: Tingey Injury Law Firm / Unsplash

A trial involving Elon Musk and Sam Altman has begun in California, focusing on the origins and structure of OpenAI. The case centers on Musk’s claim that the organization deviated from its nonprofit mission when it established a commercial arm in 2018. Musk, a co-founder and early donor, argues that the shift represents a breach of charitable trust. OpenAI disputes this, framing the lawsuit as a competitive move by Musk, who now leads rival AI ventures.

Musk testified that the dispute is about protecting the integrity of charitable organizations, stating that allowing such transitions could undermine public trust. His legal team emphasized his early contributions, including tens of millions of dollars in funding during OpenAI’s nonprofit phase. Musk is seeking billions in damages, which his lawyers say should be directed back into the organization’s nonprofit activities. He is also calling for governance changes, including leadership restructuring.

OpenAI’s legal team countered that Musk supported the company’s evolution before leaving and is now attempting to weaken a competitor. They argued that he pushed for greater control over the organization, including proposals to integrate it with Tesla. When those efforts failed, OpenAI claims, Musk distanced himself from the company. The defense also highlighted Musk’s later involvement in AI through xAI, suggesting the lawsuit is tied to competitive pressures.

Legal and Industry Implications

The case raises questions about how AI organizations balance nonprofit origins with the need for large-scale funding and commercialization. Many leading AI firms have adopted hybrid structures to attract investment while maintaining stated public-interest goals. A ruling in Musk’s favor could prompt stricter scrutiny of such arrangements and influence how future AI ventures are structured.

For the broader industry, the trial reflects intensifying competition among AI developers. As companies race toward advanced systems, governance and funding models are becoming as critical as technical progress. The outcome may shape investor expectations and regulatory approaches to AI development.

Background and Context

OpenAI was founded in 2015 as a nonprofit with a mission to develop AI for public benefit, before introducing a for-profit entity to scale its operations. The decision helped fuel the development of products like ChatGPT and positioned the company at the center of the commercial AI market. Musk’s departure from OpenAI preceded this shift, though both sides disagree on the extent of his involvement in the decision.

The trial also unfolds amid broader tensions in the AI sector, where leading figures increasingly compete across overlapping domains. A verdict is expected in late May and could set a precedent for how disputes over AI governance and commercialization are handled in the future.


Anthropic Integrates Claude With Adobe, Blender, and Creative Tools

Anthropic has launched connectors that integrate Claude with major creative software platforms. The move aims to streamline workflows and expand AI-assisted production across design, audio, and 3D tools.

By Daniel Mercer. Edited by Maria Konash.
Anthropic adds Claude connectors for Adobe, Blender, and more to streamline creative workflows. Image: Anthropic

Anthropic has introduced a new set of connectors that integrate its Claude AI models with widely used creative software platforms, including Adobe, Blender, Autodesk, Ableton, and Splice. The connectors allow Claude to interact directly with these tools, enabling users to automate tasks, generate content, and manage workflows through natural language. The initiative reflects growing demand for AI systems that operate inside existing creative environments rather than as standalone applications.

The connectors provide different capabilities depending on the platform. In Adobe’s Creative Cloud ecosystem, Claude can assist with generating and editing images, videos, and designs across multiple applications. Blender integration allows users to interact with its Python API using natural language, enabling tasks such as debugging scenes or creating scripts. In Autodesk Fusion, Claude can help design and modify 3D models, while Ableton integration connects AI responses to official documentation for music production tools. Other integrations, such as with SketchUp and Resolume Arena, extend support to architecture and live media production.

Anthropic is also positioning Claude as a tool for managing complex creative workflows. The system can bridge multiple applications by translating formats, synchronizing assets, and automating repetitive processes such as batch editing and file organization. New features like Claude Design allow users to explore interface concepts and export them into other tools, starting with platforms such as Canva. The company said it is working with educational institutions, including Rhode Island School of Design and Goldsmiths, University of London, to integrate these capabilities into creative curricula.

Workflow Transformation

The release highlights a shift in how AI is being applied in creative industries. Instead of replacing creative roles, tools like Claude are being positioned as assistants that reduce manual work and expand capabilities. By automating repetitive tasks and enabling faster iteration, AI can allow professionals to focus more on concept development and execution.

For studios and independent creators, tighter integration with existing tools reduces friction in adopting AI. Rather than switching between platforms, users can incorporate AI directly into their workflows, improving efficiency without disrupting established processes. This approach may accelerate adoption across industries such as design, media production, and architecture.

Creative Tech Ecosystem

The move reflects broader competition among AI providers to embed their models within professional software ecosystems. Partnerships with established platforms give AI companies access to large user bases while strengthening their relevance in specialized workflows. At the same time, software vendors benefit from adding AI-driven features without building models independently.

As AI capabilities evolve, integration depth is becoming a key differentiator. Companies are moving beyond standalone chat interfaces toward systems that can interact with files, tools, and pipelines in real time. Anthropic’s connector strategy suggests that the future of creative AI will be defined less by individual applications and more by how seamlessly models operate across entire production environments.

Xiaomi Releases MiMo V2.5 Open Models for OpenClaw

Xiaomi has launched two open-source AI models optimized for agent-based tasks with high efficiency and low cost. The release targets growing demand for scalable enterprise AI.

By Daniel Mercer. Edited by Maria Konash.
Xiaomi MiMo V2.5 delivers efficient open-source AI for agents with low costs and million-token context. Image: Xiaomi

Xiaomi has released two new open-source large language models, MiMo-V2.5 and MiMo-V2.5-Pro, designed for agent-based AI systems such as OpenClaw. The models are distributed under the permissive MIT license, allowing developers and enterprises to use, modify, and deploy them commercially with minimal restrictions. MiMo-V2.5 features 310 billion parameters with 15 billion active during inference, while the Pro version scales to 1.02 trillion parameters with 42 billion active. Both models support context windows of up to one million tokens, targeting long-running and complex tasks.

The release focuses on efficiency in agent workflows, where AI systems perform multi-step operations such as coding, automation, and task orchestration. According to Xiaomi’s benchmarks, MiMo-V2.5-Pro achieved a 63.8 percent success rate on ClawEval while using around 70,000 tokens per task cycle. This represents significantly lower token consumption compared with competing models from Anthropic, Google, and OpenAI. Lower token usage translates directly into reduced operating costs, a key factor as AI pricing shifts toward usage-based billing.

Pricing for the models reflects this positioning. The base MiMo-V2.5 starts at approximately $0.40 per million input tokens and $2.00 per million output tokens, while the Pro version is priced at $1.00 and $3.00 respectively for standard context sizes. Xiaomi also offers extended context support up to one million tokens without imposing significant pricing multipliers, contrasting with industry trends where longer context windows often incur higher costs. The company has additionally introduced subscription-based token plans and temporary incentives such as free cache usage to encourage adoption.
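Per-task cost under usage-based billing follows directly from the quoted per-million-token rates. The sketch below works through the arithmetic for MiMo-V2.5-Pro at standard context; the 60,000/10,000 input-output split of the roughly 70,000 tokens per task cycle is an illustrative assumption, not a figure reported by Xiaomi.

```python
def task_cost(in_tokens: int, out_tokens: int,
              in_price: float, out_price: float) -> float:
    """Dollar cost of one task at per-million-token rates."""
    return in_tokens / 1e6 * in_price + out_tokens / 1e6 * out_price

# MiMo-V2.5-Pro standard-context rates from the article: $1.00 in, $3.00 out.
# The 60k/10k split is a hypothetical breakdown of ~70k tokens per task cycle.
cost = task_cost(60_000, 10_000, in_price=1.00, out_price=3.00)
print(f"${cost:.3f} per task cycle")  # prints "$0.090 per task cycle"
```

At these rates, even a continuously running agent executing thousands of task cycles per day stays in the low hundreds of dollars, which is the cost argument the benchmarks are making.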

Efficiency as a Differentiator

The MiMo models highlight a shift toward optimizing cost-performance in AI systems, particularly for agentic use cases. By using a mixture-of-experts architecture, the models activate only a subset of parameters during each task, reducing computational overhead while maintaining capability. This approach is increasingly important as enterprises deploy AI agents that operate continuously and consume large volumes of tokens.

For developers, the combination of open licensing and lower costs provides an alternative to proprietary models with usage fees and restrictions. Organizations can run the models locally or in private cloud environments, offering greater control over data and expenses. This flexibility is particularly relevant for applications involving long-running processes or sensitive information.

Open Models Gain Ground

Xiaomi’s release reflects broader momentum behind open-source AI as competition intensifies. The gap between open and closed models has narrowed, with open systems increasingly matching proprietary offerings in performance while offering more flexibility. The MIT license further positions MiMo as infrastructure that can be integrated into a wide range of applications without legal or commercial barriers.

The move also aligns with changes in AI economics, as providers shift from subscription models to metered usage. In this environment, efficient models that reduce token consumption can offer a significant advantage. Xiaomi’s strategy suggests that cost and openness may become as important as raw performance in determining which AI platforms gain adoption in enterprise and developer ecosystems.


Meta Eyes Space Solar Energy to Keep Data Centers Running Overnight

Meta has signed a deal with Overview Energy to beam solar power from space to Earth. The project aims to supply constant energy for AI data centers, even at night.

By Olivia Grant. Edited by Maria Konash.
Meta signs space-based solar deal to power AI data centers via continuous infrared transmission. Image: Meta

Meta has signed an agreement with startup Overview Energy to secure up to 1 gigawatt of solar power generated in space and transmitted to Earth. The deal is part of Meta’s broader effort to meet rising energy demands from artificial intelligence infrastructure. The approach involves satellites collecting solar energy, converting it into infrared light, and beaming it to ground-based solar farms. Unlike traditional solar systems, this method could provide power continuously, including at night.

Overview Energy’s system is designed to integrate with existing solar infrastructure, avoiding the need for entirely new power grids. The company plans to deploy a fleet of satellites that transmit energy to large-scale solar farms, which then convert the infrared light into electricity. According to the company, the beam is designed to be safe for human exposure and avoids the regulatory challenges associated with high-power lasers or microwave transmission. Meta has not disclosed the financial terms of the agreement but confirmed it has reserved capacity under the arrangement.

The project remains in early stages, with key milestones ahead. Overview has already demonstrated energy transmission from an aircraft and plans its first satellite test in January 2028. Full-scale deployment could begin around 2030, with a long-term goal of operating up to 1,000 satellites in geostationary orbit. Each satellite is expected to deliver power for more than a decade, supporting continuous energy supply across regions as the Earth rotates.

Powering AI Infrastructure

The agreement highlights the growing energy demands of AI systems and data centers. Meta’s operations consumed more than 18,000 gigawatt-hours of electricity in 2024, and demand is expected to rise as AI workloads expand. Traditional solar power requires storage or backup generation to operate overnight, adding cost and complexity. By enabling round-the-clock solar generation, space-based energy could improve efficiency and reduce reliance on fossil fuels.

For technology companies, securing stable and scalable energy sources has become a strategic priority. Large AI models require constant compute availability, making intermittent energy sources less practical without significant storage investment. If successful, Overview’s approach could reshape how renewable energy supports data-intensive industries.

Emerging Energy Technologies

Space-based solar power has long been explored but has faced technical and economic challenges. Advances in satellite design, energy transmission, and cost reduction are now bringing the concept closer to practical deployment. Overview’s strategy focuses on using lower-intensity infrared beams and existing solar farms to simplify implementation.

The Meta partnership signals increasing interest from major technology firms in unconventional energy solutions. As competition in AI intensifies, companies are investing not only in computing infrastructure but also in the energy systems required to sustain it. The success of projects like this will depend on scaling the technology, meeting regulatory requirements, and proving long-term reliability in real-world conditions.
