Nvidia Invests $2B in Marvell to Expand AI Infrastructure

Nvidia has invested $2 billion in Marvell as part of a partnership to expand AI infrastructure, including custom chips, networking, and silicon photonics.

By Olivia Grant. Edited by Maria Konash.
Image: Nvidia

Nvidia has announced a strategic partnership with Marvell Technology, backed by a $2 billion investment, to expand its AI infrastructure ecosystem and accelerate development of next-generation computing systems.

The collaboration connects Marvell’s hardware capabilities to Nvidia’s NVLink Fusion platform, a rack-scale architecture designed to support custom AI infrastructure. The partnership reflects growing demand for scalable systems capable of handling increasingly complex AI workloads.

The move comes as companies across the industry race to build “AI factories,” large-scale computing environments optimized for training and deploying advanced models.

Expanding the NVLink Ecosystem

Under the agreement, Marvell will develop custom XPUs and networking technologies compatible with Nvidia’s NVLink Fusion platform. Nvidia will provide core components including its Vera CPU, NVLink interconnect, Spectrum-X switches, and networking solutions such as ConnectX NICs and BlueField DPUs.

The integration allows customers to design semi-custom AI systems while remaining fully compatible with Nvidia’s broader ecosystem. This approach supports heterogeneous computing environments, where different types of processors and accelerators work together within a unified architecture.

By enabling tighter integration between custom silicon and Nvidia’s infrastructure, the companies aim to provide greater flexibility for enterprises building specialized AI systems.

Focus on Networking and AI at Scale

The partnership also includes joint development of advanced networking technologies, particularly in silicon photonics and optical interconnects. These components are critical for improving data transfer speeds and reducing latency in large-scale AI deployments.

In addition, Nvidia and Marvell plan to collaborate on AI-RAN technology, which applies AI to telecommunications networks, including 5G and future 6G systems. The goal is to transform telecom infrastructure into distributed AI computing platforms.

As AI workloads increasingly rely on distributed systems, high-speed connectivity and efficient data movement are becoming central to performance and cost efficiency.

Strategic Bet on AI Infrastructure

Nvidia’s investment underscores the importance of partnerships in scaling AI infrastructure beyond standalone chips. As demand for inference and model deployment grows, companies are focusing on building integrated systems that combine compute, networking, and storage.

The collaboration with Marvell positions Nvidia to strengthen its role not only as a GPU provider but as a broader infrastructure platform. It also highlights the increasing role of custom silicon and specialized hardware in meeting the needs of enterprise AI applications.


Nebius Plans Massive 310MW AI Data Center in Finland

Nebius will build a 310MW AI data center in Finland, expanding Europe’s growing push to develop large-scale compute infrastructure.

By Olivia Grant. Edited by Maria Konash.
Image: Joakim Honkasalo / Unsplash

Nebius has announced plans to build a large-scale AI data center in Finland, as Europe accelerates efforts to expand computing infrastructure needed for artificial intelligence.

The facility will be located in Lappeenranta and is expected to reach a capacity of up to 310 megawatts, making it one of the largest AI data centers in the region. Nebius said the site is scheduled to begin initial operations in 2027.

The project forms part of the company’s broader strategy to scale global AI infrastructure and meet rising demand for compute resources.

Europe Accelerates AI Infrastructure Buildout

The announcement comes amid a wave of data center investments across Europe, as governments and companies seek to reduce reliance on external infrastructure and support domestic AI development.

French startup Mistral recently secured $830 million in debt financing to operate a data center near Paris, while additional projects have been announced in Sweden and other parts of the region. Meanwhile, companies including Nvidia and institutional investors have backed large-scale AI campuses, highlighting the growing importance of compute capacity.

Nebius, headquartered in the Netherlands and listed in the United States, has positioned itself as a key provider of AI infrastructure in Europe. The company said it is targeting more than 3 gigawatts of contracted power globally by the end of the year, with over 750 megawatts already secured across Europe, the Middle East, and Africa.

Energy and Scaling Challenges

Despite strong momentum, Europe faces structural challenges in building AI infrastructure. Energy costs remain higher than in the United States, and developers often encounter delays related to grid access and permitting.

These constraints have pushed companies to carefully select locations that offer reliable energy supply and supportive regulatory environments. Finland, with its access to renewable energy and established data center ecosystem, has become an attractive destination for such projects.

Nebius is also expanding beyond Europe, with plans for a gigawatt-scale AI data center in Missouri, reflecting a dual strategy of regional expansion and global diversification.


Anthropic Accidentally Exposes Claude Code Source Again

Anthropic has again exposed the source code of its Claude Code tool due to a packaging error, raising concerns over software release practices.

By Daniel Mercer. Edited by Maria Konash.
Image: Sasun Bughdaryan / Unsplash

Anthropic has inadvertently exposed the full source code of its Claude Code tool for the second time in a year, following a packaging error that left sensitive development files publicly accessible.

The issue was discovered on March 31 by security researcher Chaofan Shou, who found that the latest version of Claude Code included a source map file in its npm package. This file allowed reconstruction of the tool’s underlying TypeScript codebase, effectively revealing the entire internal implementation.

The exposure was not the result of a cyberattack but a configuration oversight, highlighting potential gaps in software release processes at a time when AI tools are increasingly used in enterprise environments.

Source Map Error Reveals Full Codebase

The leak stemmed from a file known as a source map, typically used during development to map compiled code back to its original human-readable form. While useful for debugging, such files are usually removed before public release.
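To make concrete why a shipped source map is equivalent to shipping the source itself: the Source Map v3 format is a JSON file whose `sources` array lists the original file paths and whose optional `sourcesContent` array embeds the full text of each original file. A minimal sketch of the recovery step (file names and the `extract_sources` helper are illustrative, not taken from the incident) could look like:

```python
import json
import os

def extract_sources(map_path, out_dir="recovered"):
    """Write out every original file embedded in a JavaScript source map."""
    with open(map_path) as f:
        source_map = json.load(f)
    sources = source_map.get("sources", [])
    # "sourcesContent" is optional; when present it holds the full
    # original text of each entry in "sources", in the same order.
    contents = source_map.get("sourcesContent") or []
    recovered = 0
    for path, text in zip(sources, contents):
        if text is None:
            continue  # this entry embeds no original content
        # Normalize the path and keep writes inside out_dir
        safe = os.path.normpath(path).replace("..", "__").lstrip("/.")
        dest = os.path.join(out_dir, safe)
        os.makedirs(os.path.dirname(dest) or out_dir, exist_ok=True)
        with open(dest, "w") as out:
            out.write(text)
        recovered += 1
    return recovered
```

When `sourcesContent` is populated, as it reportedly was here, no access to the vendor's repository is needed: the published package alone reconstructs the original TypeScript tree.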

In this case, the source map enabled access to approximately 1,900 internal source files, including components related to API design, telemetry systems, encryption mechanisms, and inter-process communication.

Because the file was included in a public npm package, the code was easily accessible and quickly archived in external repositories. Within hours, copies of the codebase had spread across developer platforms.

Importantly, the leak did not include model weights or user data, and there is no indication that customer information was compromised.

Repeat Incident Raises Concerns

This is the second time Anthropic has faced a similar issue. An earlier version of Claude Code was exposed in 2025 under comparable circumstances, prompting the company to remove the affected files.

The recurrence of the same type of error has raised questions about internal controls and release validation processes, particularly for tools aimed at professional developers.
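Errors of this kind usually come down to what the package manifest allows into the published tarball. As a generic illustration (not Anthropic's actual configuration), an explicit `files` allowlist in `package.json` with a negated pattern for source maps keeps `.map` files out of a release:

```json
{
  "name": "example-cli",
  "version": "1.0.0",
  "files": [
    "dist/**/*.js",
    "!dist/**/*.map"
  ]
}
```

Running `npm pack --dry-run` before publishing lists every file that would ship in the tarball, which makes stray source maps easy to catch in release validation.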

While the exposure does not pose immediate risks to users, it reveals detailed insights into the system’s architecture and internal logic. This level of transparency could make it easier for external parties to analyze, replicate, or potentially exploit aspects of the tool.

Growing Scrutiny on AI Tooling

The incident comes as AI development platforms are becoming central to software engineering workflows, increasing the importance of reliability and security in their deployment.

Anthropic has not issued a public statement on the latest leak. However, the situation is likely to draw attention from both developers and enterprise customers who rely on such tools for critical operations.

The episode also unfolds alongside broader developments at the company, including a separate data leak that revealed details of its upcoming Claude Mythos model, described internally as a major leap in AI capabilities. Together, these incidents highlight the operational risks facing fast-moving AI firms as they scale both their technology and product ecosystems.

Starcloud Hits $1.1B Valuation to Build AI Data Centers in Space

Starcloud reaches $1.1B valuation to build orbital AI data centers, launching GPU-powered satellites and targeting space-based compute infrastructure.

By Samantha Reed. Edited by Maria Konash.
Image: NASA / Unsplash

Starcloud has reached a $1.1 billion valuation following its latest funding round, positioning the startup among the fastest to achieve unicorn status after graduating from Y Combinator.

The company raised approximately $170 million in its Series A round, led by Benchmark and EQT Ventures, bringing total funding to around $200 million. The investment reflects growing interest in space-based computing as demand for AI infrastructure accelerates and terrestrial data center expansion faces constraints.

Starcloud is developing orbital data centers designed to process AI workloads in space, a concept that aims to address limitations related to energy, land use, and regulatory barriers on Earth.

Building AI Infrastructure in Orbit

The company launched its first satellite in November 2025, equipped with an Nvidia H100 GPU. According to Starcloud, the satellite successfully trained an AI model in orbit and ran a version of Google’s Gemma, marking an early demonstration of space-based compute capabilities.

A second satellite, Starcloud 2, is scheduled for launch later this year. It will include more advanced hardware, such as Nvidia’s Blackwell chip and an AWS server module, alongside additional computing systems.

The company’s long-term focus is on Starcloud 3, a three-ton spacecraft designed to function as a full-scale orbital data center. With a planned capacity of 200 kilowatts, the system is intended to be deployed using SpaceX’s Starship rocket and could represent the first space-based compute platform capable of competing with terrestrial data centers on cost.

High Risk, High Infrastructure Costs

Starcloud’s business model depends heavily on future reductions in launch costs. CEO Philip Johnston estimates that orbital data centers could become cost-competitive if launch prices fall to around $500 per kilogram.

However, this assumption relies on the successful commercialization of next-generation launch systems such as Starship, which is not expected to operate at scale until the late 2020s. Until then, Starcloud plans to continue deploying smaller satellites using existing rockets.

The company is also addressing significant technical challenges, including power generation, thermal management, and synchronization between distributed computing nodes in orbit. These factors are critical for scaling from small inference tasks to larger workloads such as AI model training.

Emerging Competition in Space Compute

Starcloud is part of a growing group of companies exploring space-based computing, alongside efforts from startups and large technology firms. The concept has gained attention as AI models require increasing amounts of energy and compute capacity.

Despite early progress, the scale of space-based infrastructure remains limited. Only a small number of advanced GPUs have been deployed in orbit, compared to millions used in terrestrial data centers.

At the same time, larger players are exploring similar ambitions. SpaceX has proposed deploying large-scale compute infrastructure in orbit, potentially creating direct competition in the long term.

Starcloud’s rapid rise highlights both the opportunity and uncertainty in this emerging sector. While the company has demonstrated early technical feasibility, the viability of orbital data centers will depend on advances in launch economics, hardware design, and sustained demand for distributed AI computing.


Claude Can Now Operate Your Computer from the Command Line

Anthropic has introduced computer control in Claude Code CLI, allowing the AI agent to operate apps, interact with interfaces, and automate GUI tasks on macOS.

By Daniel Mercer. Edited by Maria Konash.
Image: Claude Code

Anthropic has introduced a new “computer use” capability for its Claude Code CLI, enabling AI agents to directly control a user’s computer, interact with applications, and execute tasks through graphical interfaces.

The feature, currently available as a research preview on macOS, allows Claude to open apps, click through interfaces, type inputs, and capture screenshots. It represents a significant step toward fully autonomous AI agents capable of performing real-world tasks beyond text-based interactions.

Computer use is available to users on Pro and Max plans and requires the latest version of Claude Code in an interactive session.

From Terminal to Full System Control

The new capability extends Claude’s functionality beyond traditional command-line operations. Instead of relying solely on APIs or scripts, the AI can now interact with software in the same way a human user would.

This includes building and testing applications, navigating user interfaces, and debugging visual issues. For example, Claude can compile a macOS app, launch it, click through its interface, and verify functionality within a single workflow.

The system prioritizes more precise tools when available, such as APIs or command-line operations, but defaults to computer control when tasks require direct interaction with graphical environments.

This approach enables automation of tasks that previously required manual input, including working with proprietary software, simulators, or tools without programmatic access.

Safeguards and Controlled Access

Anthropic has implemented several safeguards to manage risks associated with granting AI access to a user’s system. Access is controlled on a per-application basis, with users required to approve each app before Claude can interact with it.

The system also enforces a session-based control model, allowing only one active instance to operate the computer at a time. Users can interrupt actions at any moment using keyboard commands, ensuring they retain control.

Certain applications, such as system settings or file managers, trigger additional warnings due to their broader access permissions. The terminal itself is excluded from screenshots to prevent feedback loops or prompt injection risks.

These controls highlight the challenges of balancing autonomy and security as AI agents move closer to operating directly within user environments.

Expanding the Agentic AI Model

The introduction of computer control reflects a broader shift toward agent-based AI systems that can execute complex, multi-step workflows across different tools and environments.

Anthropic’s approach aligns with similar developments across the industry, where companies are building AI agents capable of acting on behalf of users in real time. By enabling direct interaction with operating systems, Claude moves closer to functioning as a general-purpose digital assistant.

The feature also complements other recent updates focused on integrating AI into everyday workflows, including support for messaging platforms and developer tools.


E*Trade Emerges as Key Retail Partner in SpaceX IPO

Morgan Stanley’s E*Trade is in talks to lead retail distribution for SpaceX’s record IPO, potentially sidelining platforms like Robinhood and SoFi.

By Samantha Reed. Edited by Maria Konash.
Image: E*Trade

Morgan Stanley’s E*Trade is in discussions to take a leading role in distributing shares to retail investors in SpaceX’s upcoming initial public offering, according to a Reuters report citing people familiar with the matter.

The move could give E*Trade a significant advantage over rival platforms such as Robinhood and SoFi, which have also sought participation in the deal. SpaceX is reportedly considering limiting or excluding those firms from the retail allocation, an unusual step given their growing presence in major IPOs in recent years.

The SpaceX listing is expected to be the largest in history, with strong demand anticipated from both institutional and individual investors.

Retail Access Becomes Strategic Battleground

Retail investors are expected to play a larger role than usual in the SpaceX IPO. The company is reportedly considering allocating up to 30% of shares to individual investors, well above the typical 5% to 10% seen in most public offerings.

Morgan Stanley, a lead underwriter on the deal, is expected to channel a significant portion of that allocation through its E*Trade platform. This strategy would allow the bank to capture more of the retail order flow internally, rather than relying on third-party brokerages.

Robinhood and SoFi remain in discussions but may receive a smaller share of the offering, if any. Fidelity is also reportedly seeking a role in distributing shares through its platform.

The plans are still under discussion and could change as the IPO approaches.

Morgan Stanley’s Push Into Retail

A prominent role in the SpaceX IPO would mark a major win for E*Trade, which Morgan Stanley acquired for $13 billion in 2020. The bank has since expanded its focus on retail trading as part of a broader strategy to diversify revenue beyond traditional investment banking and wealth management.

Securing a central position in a high-profile IPO could strengthen E*Trade’s competitive standing against platforms such as Charles Schwab and Interactive Brokers, particularly as retail participation in equity markets continues to grow.

The allocation strategy also reflects a broader shift in how IPOs are structured, with increased emphasis on engaging individual investors alongside institutional buyers.

Implications for the IPO Market

The SpaceX IPO is shaping up to be a landmark event, not only because of its scale but also due to its unconventional structure and strong retail focus. The involvement of platforms like E*Trade highlights how distribution strategies are evolving in response to changing investor dynamics.

The outcome could influence how future large-scale listings are structured, particularly for technology and AI-driven companies seeking to tap both institutional capital and retail enthusiasm.

As SpaceX moves closer to going public, decisions around share allocation and distribution will play a critical role in shaping demand and setting precedents for the next wave of high-profile IPOs.
