Intel Teams Up with Musk on Terafab to Scale AI Chip Production

Intel has joined Elon Musk’s Terafab project aimed at scaling AI chip production, though its exact role remains unclear. The effort targets massive compute output for AI and robotics.

By Olivia Grant | Edited by Maria Konash
Intel partners with Musk’s Terafab to scale AI chip output toward 1 TW of compute. Image: Rubaitul Azad / Unsplash

Intel said Tuesday it will participate in Elon Musk’s “Terafab” initiative, a project focused on expanding semiconductor manufacturing and generating massive computing power for artificial intelligence and robotics. While the company did not disclose specific responsibilities, it confirmed the partnership in a post on X, highlighting its role in designing, fabricating, and packaging high-performance chips at scale. Intel shares rose about 2% following the announcement.

The Terafab project brings together several of Musk’s companies, including SpaceX, xAI, and Tesla, in an effort to rethink how advanced chips are produced. The initiative aims to deliver as much as 1 terawatt of compute capacity per year, a scale that reflects the rapidly increasing demands of AI systems. Intel’s contribution appears centered on its core strength in semiconductor manufacturing, particularly its ability to integrate chip design, fabrication, and advanced packaging technologies.

Although details remain limited, Intel’s involvement signals a potential shift in how large-scale AI infrastructure is developed. The company recently hosted Musk and xAI team members at its headquarters, suggesting early-stage collaboration and alignment. A photo shared by Intel showed Musk alongside CEO Lip-Bu Tan, reinforcing the strategic nature of the partnership.

Terafab’s ambition is notable even within the context of today’s AI boom. Producing 1 terawatt of compute annually would require vast manufacturing capacity and energy resources, far exceeding current deployments by most AI firms. The project appears designed to support Musk’s expanding ecosystem, including AI models developed by xAI, autonomous systems at Tesla, and data-intensive operations at SpaceX.
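For a sense of scale, a quick back-of-envelope calculation shows what 1 terawatt of compute per year implies. The per-accelerator power figure below is our illustrative assumption, not a number from Intel or Terafab.

```python
# Rough scale check on 1 TW of compute capacity shipped per year.
# The ~1 kW per accelerator figure is an illustrative assumption,
# not an Intel or Terafab number.

TARGET_WATTS_PER_YEAR = 1e12      # 1 terawatt of compute, annually
WATTS_PER_ACCELERATOR = 1_000     # assume ~1 kW per AI accelerator

accelerators_per_year = TARGET_WATTS_PER_YEAR / WATTS_PER_ACCELERATOR
print(f"~{accelerators_per_year:,.0f} accelerators per year")  # ~1,000,000,000
```

At an assumed 1 kW per chip, the target works out to roughly a billion accelerators a year, which is why the project’s demands on manufacturing capacity and energy resources are so striking.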

A New Model for AI Infrastructure

The collaboration reflects a broader trend toward tighter integration between chipmakers and AI developers. Instead of relying solely on external suppliers, companies are increasingly forming partnerships to secure dedicated compute capacity and optimize hardware for specific workloads. Intel’s manufacturing capabilities could complement Musk’s vertically integrated approach across hardware, software, and data.

At the same time, the move positions Intel more directly in competition with other semiconductor leaders benefiting from the AI surge. Nvidia has dominated the market for AI chips, while companies like AMD and Broadcom are expanding their roles through custom silicon and infrastructure partnerships. By aligning with Terafab, Intel may be seeking to strengthen its relevance in next-generation AI systems.

Scaling Beyond Traditional Limits

The Terafab initiative also highlights the industrial scale that AI development is reaching. Meeting the project’s compute targets would require advances not only in chip design but also in fabrication processes, supply chains, and energy efficiency. Intel emphasized that its ability to produce and package chips at scale will be central to achieving these goals.

For Musk’s companies, the project could provide greater control over critical infrastructure, reducing reliance on third-party suppliers and enabling faster iteration of AI systems. For the semiconductor industry, it points to a future where partnerships between chipmakers and AI firms become essential to meeting the growing demand for compute.


Google Expands Gemini With AI-Powered Mental Health Support

Google is adding new mental health features to Gemini, including crisis detection tools and direct hotline access. The company is also committing $30 million to expand global support services.

By Samantha Reed | Edited by Maria Konash

Google is expanding the role of its Gemini AI assistant in mental health support, introducing new features designed to connect users with crisis resources and human help. The update includes improved detection of sensitive conversations, a redesigned interface for accessing support, and new funding aimed at strengthening global mental health services. The move reflects growing use of AI tools in personal and emotional contexts, as well as increasing scrutiny over how such systems handle vulnerable users.

A key change is the introduction of a “Help is available” module within Gemini, which appears when conversations suggest a user may need mental health support. Developed with clinical experts, the feature aims to provide clearer and faster pathways to assistance. In more urgent situations, such as indications of self-harm or suicidal thoughts, Gemini will trigger a simplified interface offering one-touch access to crisis hotlines. Users can immediately call, text, chat, or visit support services, with prompts encouraging them to seek professional help. These options remain visible throughout the conversation once activated.
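The gating behavior described above follows a common pattern: a risk signal determines which support surface, if any, accompanies the assistant’s reply. The sketch below is purely illustrative and uses invented names; it is not Google’s implementation.

```python
# Illustrative sketch of the routing pattern the article describes.
# Names and structure are invented; this is not Google's implementation.

from dataclasses import dataclass
from enum import Enum

class RiskLevel(Enum):
    NONE = 0
    ELEVATED = 1   # conversation suggests support may be needed
    URGENT = 2     # indications of self-harm or suicidal thoughts

@dataclass
class SupportUI:
    module: str
    persistent: bool  # the article notes options stay visible once shown

def route_support_ui(risk: RiskLevel) -> SupportUI | None:
    if risk is RiskLevel.URGENT:
        # simplified interface with one-touch call/text/chat options
        return SupportUI("one_touch_crisis_hotlines", persistent=True)
    if risk is RiskLevel.ELEVATED:
        return SupportUI("help_is_available", persistent=True)
    return None

print(route_support_ui(RiskLevel.ELEVATED))
```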

Google is also investing in the broader support ecosystem. Through Google.org, the company is committing $30 million over three years to help crisis hotlines expand their capacity globally. In addition, it is deepening its partnership with ReflexAI, providing $4 million in funding and integrating Gemini into tools used to train support staff. The collaboration includes enhancements to “Prepare,” a platform that uses AI simulations to help volunteers and professionals practice handling difficult conversations. Education-focused organizations are among the initial beneficiaries of this effort.

The company said it has refined how Gemini responds in sensitive scenarios. The system is designed to prioritize directing users to real-world help rather than acting as a substitute for professional care. It avoids reinforcing harmful behaviors or confirming false beliefs, while encouraging help-seeking in a measured and supportive tone. Google emphasized that Gemini is not intended to replace therapy or crisis services, but to guide users toward appropriate resources when needed.

A More Cautious Role for AI

The update highlights a broader shift in how AI companies approach mental health. As conversational tools become more widely used, companies face pressure to ensure systems respond responsibly in high-risk situations. Google’s approach focuses on limiting the AI’s role while strengthening connections to human support.

At the same time, the company is adding safeguards for younger users. These include restrictions preventing Gemini from presenting itself as a human-like companion, as well as measures to reduce the risk of emotional dependence. The system also avoids generating content that could encourage bullying or harmful interactions.

Expanding Access to Support

Google’s latest changes reflect a growing recognition that AI tools are increasingly part of everyday life, including moments of distress. By combining AI-driven detection with direct access to crisis services, the company is attempting to make support more immediate and accessible.

The initiative also underscores the scale of the challenge. With more than one billion people affected by mental health conditions globally, demand for support continues to outpace available resources. Google’s funding and partnerships aim to help bridge that gap, while positioning Gemini as a gateway to professional care rather than a replacement for it.


Broadcom Expands Google, Anthropic AI Chip Partnerships

Broadcom is expanding its role in AI infrastructure through new chip and compute deals with Google and Anthropic. The move reflects accelerating demand for large-scale AI capacity.

By Olivia Grant | Edited by Maria Konash
Broadcom deepens AI push with Google chips and Anthropic deal, signaling surging infrastructure demand. Image: Laura Ockel / Unsplash

Broadcom is expanding its footprint in artificial intelligence infrastructure through new agreements with Google and Anthropic, underscoring the growing demand for compute power behind generative AI systems. The company said it will develop future versions of Google’s AI chips while also supporting a major expansion of Anthropic’s access to computing capacity. The updates, disclosed in a regulatory filing, pushed Broadcom shares up about 3% in extended trading.

At the center of the announcement is Broadcom’s continued work on Google’s tensor processing units, or TPUs, custom chips designed to train and run AI models at scale. While the companies have collaborated for years, the latest agreement signals a deeper alignment as competition intensifies among chipmakers and cloud providers. Custom silicon is becoming increasingly important as AI companies look for alternatives to general-purpose graphics processing units.

Broadcom is also scaling its relationship with Anthropic, one of the fastest-growing AI startups. The expanded deal will provide the company with access to roughly 3.5 gigawatts of compute capacity, primarily powered by Google’s TPU infrastructure. That marks a sharp increase from earlier deployments. Broadcom CEO Hock Tan recently said the company had already begun supplying around 1 gigawatt of compute to Anthropic, with demand expected to exceed 3 gigawatts by 2027.
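To put 3.5 gigawatts in perspective, converting that continuous draw into annual energy and rough chip counts is straightforward; the per-accelerator figure below is our assumption, not one from the filing.

```python
# Rough scale check on 3.5 GW of compute capacity.
# The ~1 kW per accelerator figure is our illustrative assumption.

CAPACITY_W = 3.5e9        # 3.5 gigawatts of continuous draw
HOURS_PER_YEAR = 8760

energy_twh = CAPACITY_W * HOURS_PER_YEAR / 1e12   # Wh -> TWh
print(f"~{energy_twh:.0f} TWh per year")          # ~31 TWh

accelerators = CAPACITY_W / 1_000                 # at ~1 kW per accelerator
print(f"~{accelerators/1e6:.1f} million accelerators")  # ~3.5 million
```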

Anthropic’s rapid growth helps explain the scale of the investment. The company said its annualized revenue has surpassed $30 billion, up from about $9 billion at the end of last year. It now counts more than 1,000 enterprise customers spending over $1 million annually, a figure that has doubled in just two months. Its Claude chatbot also saw a surge in popularity earlier this year, briefly becoming the most downloaded free app in Apple’s U.S. App Store.

The broader opportunity for Broadcom could be substantial. Analysts at Mizuho estimate the company may generate $21 billion in AI-related revenue from Anthropic in 2026, potentially doubling to $42 billion in 2027. While Broadcom did not disclose financial terms, the projections highlight how central large AI customers are becoming to semiconductor revenue growth.

A Shift Beyond GPUs

The deals also reflect a wider shift in how AI infrastructure is built. For years, companies like Anthropic and OpenAI have relied heavily on Nvidia GPUs accessed through cloud providers such as Amazon, Google, and Microsoft. That model is now evolving.

Broadcom is working with multiple AI developers, including OpenAI, on custom silicon tailored to specific workloads. At the same time, OpenAI has committed to using large volumes of AMD GPUs, signaling a diversification of suppliers. This mix of custom chips and alternative hardware suggests the AI ecosystem is moving toward more specialized and distributed infrastructure strategies.

Scaling the AI Backbone

The expansion of compute capacity into the gigawatt range highlights the industrial scale of modern AI. Training and deploying advanced models now requires vast energy, data center space, and specialized hardware. Much of Anthropic’s new infrastructure is expected to be located in the United States, reflecting both capacity needs and strategic considerations around data and supply chains.

For Broadcom, the partnerships reinforce its transition from a traditional semiconductor supplier into a key enabler of AI platforms. For the industry, they illustrate how the race to build and control AI infrastructure is becoming as critical as the development of the models themselves.


SpaceX IPO May Crowd Out 2026 Listings

SpaceX’s planned $75 billion IPO is expected to dominate investor attention, potentially sidelining other companies aiming to go public in 2026. Analysts warn the deal could tighten already fragile IPO market conditions.

By Samantha Reed | Edited by Maria Konash
SpaceX’s $75B IPO could soak up demand and delay other listings, reshaping 2026 pipelines. Image: NASA / Unsplash

Elon Musk’s SpaceX is nearing a potential $75 billion initial public offering that could become one of the largest in history, but analysts warn the deal may disrupt the broader IPO market in 2026. Industry experts say the scale and visibility of the offering could absorb a significant share of investor capital and attention, making it harder for other companies to successfully go public in the same window. With U.S. IPO activity already lagging, the timing of the SpaceX debut is emerging as a critical factor for companies waiting to list after years of subdued market conditions.

According to data from Renaissance Capital, 35 IPOs have priced so far this year, marking a 37.5% decline compared with the same period last year. Market participants say a mega listing like SpaceX could intensify this slowdown. Large IPOs often act as focal points for institutional investors, drawing capital away from smaller or less prominent deals. Analysts compare the situation to Facebook’s 2012 IPO, which similarly dominated market attention and affected concurrent listings.
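That decline figure implies a prior-year baseline that is easy to back out; the calculation below is our arithmetic on the article’s numbers, not a figure stated by Renaissance Capital.

```python
# Infer last year's count from 35 pricings and a 37.5% decline.
this_year = 35
decline = 0.375

last_year = this_year / (1 - decline)
print(round(last_year))   # ~56 IPOs in the same period last year
```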

The SpaceX offering is expected to take place around June, a traditionally strong period for IPO activity before a seasonal summer slowdown. However, companies may delay their listings to avoid competing directly for visibility and capital. Bankers are already advising major clients to consider alternative timelines. At the same time, smaller IPOs could benefit from increased retail investor enthusiasm and broader attention on public listings if SpaceX performs well.

Beyond SpaceX, the IPO pipeline includes other high-profile candidates. Reports indicate that AI firms OpenAI and Anthropic are considering public debuts later in the year. The clustering of such large offerings could further concentrate investor demand, raising the threshold for successful listings. Analysts at PitchBook suggest that the cumulative impact of these mega IPOs could push a fully open IPO window into 2027, delaying recovery for the broader market.

Why This Matters

The SpaceX IPO highlights how a single large deal can reshape market dynamics. For the IPO ecosystem, it underscores the importance of timing and investor allocation. Companies planning to go public may face tougher competition for capital, forcing delays or revised valuations.

For businesses, especially late-stage startups, the development could extend reliance on private funding or alternative financing routes. For investors, the concentration of capital into a few marquee deals may limit diversification opportunities in the short term. Retail investors, however, may see increased engagement with IPOs overall, particularly if high-profile listings perform strongly.

Context

The global IPO market has struggled to regain momentum after a prolonged downturn driven by rising interest rates and economic uncertainty. While 2026 was expected to mark a recovery, ongoing disruptions have complicated the outlook. These include geopolitical tensions such as the war in Iran, rising oil prices, concerns around private credit markets, and rapid AI-driven changes affecting legacy technology firms.

Against this backdrop, the emergence of multiple large-scale IPO candidates signals renewed activity but also introduces new challenges. Mega listings like SpaceX, along with potential offerings from leading AI companies like OpenAI and Anthropic, are likely to dominate market attention. This concentration could reshape not only the timing of IPOs but also investor behavior, setting the tone for public markets over the next several years.

Cursor 3 Launches Unified Workspace for AI Coding Agents

Cursor has launched Cursor 3, a redesigned workspace focused on managing AI coding agents. The update introduces multi-agent workflows, improved collaboration, and a unified development interface.

By Daniel Mercer | Edited by Maria Konash
Cursor 3 launches unified AI coding workspace with multi-agent support and cloud integration. Image: Cursor

Cursor has introduced Cursor 3, a major update to its AI-powered development platform, reflecting a broader shift toward agent-driven software engineering. The release positions AI agents as central participants in the coding process, moving beyond traditional manual workflows.

The company describes software development as entering a new phase where autonomous agents handle a growing share of code generation and iteration. However, current workflows remain fragmented, with developers managing multiple tools, terminals, and agent interactions. Cursor 3 aims to address this by consolidating these processes into a single interface.

The new workspace is built around agent coordination rather than file-level editing. It allows developers to operate at a higher level of abstraction, focusing on outcomes while still retaining the ability to inspect and modify underlying code when needed.

A key feature is the ability to run multiple agents in parallel. These agents can operate across different environments, including local machines and cloud infrastructure, while remaining visible within a centralized interface. This reduces the need to track separate sessions or switch between tools.
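The pattern described here, several agents running concurrently across local and cloud environments while reporting to one surface, can be sketched in a few lines. The sketch below is hypothetical, with invented names; it is not Cursor’s API.

```python
# Hypothetical sketch of the multi-agent pattern described above:
# agents run in parallel across environments, one coordinator collects
# the results. Names are invented; this is not Cursor's API.

import asyncio
from dataclasses import dataclass

@dataclass
class AgentTask:
    name: str
    env: str   # "local" or "cloud"
    goal: str

async def run_agent(task: AgentTask) -> str:
    await asyncio.sleep(0.1)  # stand-in for real agent work
    return f"{task.name} [{task.env}]: finished '{task.goal}'"

async def coordinator(tasks: list[AgentTask]) -> None:
    # All agents run concurrently; results surface in one place.
    for line in await asyncio.gather(*(run_agent(t) for t in tasks)):
        print(line)

asyncio.run(coordinator([
    AgentTask("agent-1", "local", "fix failing unit test"),
    AgentTask("agent-2", "cloud", "refactor auth module"),
    AgentTask("agent-3", "cloud", "draft PR description"),
]))
```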

Unified Interface and Workflow Integration

Cursor 3 introduces a multi-repository layout that enables teams and agents to collaborate across projects within the same workspace. The platform also supports seamless transitions between local and cloud environments, allowing developers to move agent tasks depending on performance needs or availability.

For example, long-running processes can be shifted to the cloud, while local environments can be used for testing and iteration. This flexibility is designed to improve efficiency and reduce interruptions during development workflows.

The update also includes improvements to code review and deployment processes. A redesigned diff view simplifies reviewing changes, while integrated tools allow users to stage, commit, and manage pull requests directly within the interface.

Cursor continues to build on its foundation as a standalone development environment, originally derived from a fork of Visual Studio Code. Cursor 3 expands this approach with additional features such as an integrated browser for testing applications, support for plugins through its marketplace, and enhanced navigation tools for exploring codebases.

The launch highlights a broader trend in developer tools, where AI systems are evolving from assistants into active collaborators. As models improve and agent capabilities expand, platforms like Cursor are focusing on orchestration and usability, aiming to make complex multi-agent workflows more accessible.

Cursor said it will continue investing in both its agent infrastructure and traditional IDE features, as it works toward a future where software systems are increasingly built and maintained through coordinated AI-driven processes.


Chinese Chipmakers See Record Revenue on AI Demand

Chinese chipmakers report record revenue driven by AI demand, memory shortages, and U.S. export curbs boosting domestic semiconductor growth.

By Olivia Grant | Edited by Maria Konash
Chinese chip firms post record revenue as AI demand and export curbs drive local adoption. Image: Igor Omilaev / Unsplash

Chinese semiconductor companies are reporting record revenues as demand for artificial intelligence infrastructure accelerates and U.S. export restrictions reshape global supply chains. The combined effect has boosted domestic chip production and strengthened Beijing’s push for technological self-sufficiency.

Semiconductor Manufacturing International Corp. (SMIC), China’s largest chipmaker, reported a 16% year-over-year revenue increase to $9.3 billion in 2025, with revenue projected to exceed $11 billion in 2026. Hua Hong also posted record quarterly revenue, reflecting strong demand across multiple chip segments.

The growth is being driven in part by domestic technology firms investing heavily in AI infrastructure. With limited access to advanced U.S. chips due to export controls, Chinese companies are increasingly turning to local suppliers to meet computing needs.

U.S. restrictions, particularly on high-performance GPUs and advanced semiconductor equipment, have accelerated China’s efforts to develop its own chip ecosystem. Analysts describe the restrictions as a catalyst that has intensified demand for domestically produced components across industries including AI, electric vehicles, and data centers.

Companies such as Moore Threads are benefiting from this shift, with the firm projecting more than 200% annual revenue growth as it works to position itself as a local alternative to global GPU leaders.

Memory Shortages and Technology Gaps Persist

In addition to logic chips, Chinese memory manufacturers are seeing significant gains. ChangXin Memory Technologies (CXMT) reported a sharp rise in revenue, driven by global shortages and rising demand for memory used in AI systems and consumer electronics.

High-bandwidth memory, a critical component for AI workloads, remains dominated by global players such as Samsung, SK Hynix, and Micron. However, export restrictions have created opportunities for domestic firms like CXMT to supply the Chinese market, even with older-generation technologies.

Despite strong revenue growth, Chinese semiconductor firms continue to lag behind global leaders in advanced manufacturing capabilities. Companies such as SMIC and Hua Hong are unable to produce cutting-edge chips at scale due to limited access to advanced lithography equipment from suppliers like ASML.

Efforts to build a fully domestic semiconductor supply chain are ongoing but face significant technical and financial challenges. China is attempting to replicate large portions of the global chip ecosystem, a process expected to take years.

While current growth is supported by import substitution and strong domestic demand, analysts warn of potential overcapacity in mature-node chips. Sustained progress will depend on whether Chinese firms can advance into higher-value segments, including next-generation memory and advanced logic chips, which are critical for long-term competitiveness in AI infrastructure.
