Alibaba and China Telecom Launch AI Data Center Powered by Custom Chips

Alibaba and China Telecom are building a major AI data center powered by Alibaba’s own chips, marking a push toward domestic infrastructure amid U.S. restrictions.

By Olivia Grant | Edited by Maria Konash
Alibaba and China Telecom launch AI data center with Zhenwu chips, boosting China’s tech self-reliance. Image: İsmail Enes Ayhan / Unsplash

Alibaba and China Telecom have announced a new artificial intelligence data center in southern China powered by Alibaba’s in-house semiconductor technology, signaling a deeper push toward domestic AI infrastructure. The facility, located in Shaoguan in Guangdong province, will initially deploy 10,000 of Alibaba’s Zhenwu chips, designed for both AI training and inference. The project comes as China accelerates efforts to reduce reliance on foreign semiconductor technology amid ongoing U.S. export restrictions.

The Zhenwu chips are built to support large-scale AI models with hundreds of billions of parameters, placing them among the most advanced systems currently in use. Alibaba said the facility will eventually scale to 100,000 chips, significantly expanding its compute capacity. The data center is expected to support a wide range of applications, including healthcare research, advanced materials development, and other industrial use cases.

The initiative reflects Alibaba’s vertically integrated AI strategy. Through its T-Head semiconductor unit, the company designs its own chips, while also operating one of China’s largest cloud computing platforms. It develops AI models and delivers them through its cloud services, making infrastructure a key part of its growth. Cloud computing has been one of Alibaba’s fastest-growing segments in recent quarters, driven in part by rising demand for AI capabilities.

Alongside the infrastructure announcement, Alibaba CEO Eddie Wu introduced a new internal technology committee aimed at accelerating AI development. The group includes senior leadership across AI and cloud divisions, including the company’s chief AI architect and top technology executives. The move signals a coordinated push to strengthen Alibaba’s position in the rapidly evolving AI market.

Domestic Chips Take Center Stage

The project highlights China’s broader effort to build self-sufficient AI infrastructure. U.S. restrictions on advanced semiconductor exports, particularly high-performance AI chips from companies like Nvidia, have forced Chinese firms to invest heavily in domestic alternatives. Alibaba’s Zhenwu chips are part of that strategy, alongside similar efforts from companies such as Huawei.

Recent developments underscore this trend. A large computing cluster powered by Huawei’s Ascend 910C chips went online last month, further expanding China’s homegrown AI capabilities. These initiatives suggest a growing ecosystem of domestic hardware designed to support increasingly complex AI workloads.

A Different Approach to AI Scale

While U.S. technology companies are projected to spend hundreds of billions of dollars on AI infrastructure this year, Chinese firms are taking a more targeted approach. Rather than focusing solely on scale, companies like Alibaba are aligning investments with specific industries where AI can drive near-term revenue.

The new data center reflects that strategy, combining large-scale compute with practical applications across sectors. For China Telecom, the partnership strengthens its role in national digital infrastructure, while for Alibaba, it reinforces its position across chips, cloud, and AI services.

As global competition in AI intensifies, projects like this illustrate how regional strategies are diverging, with China prioritizing self-reliance and integration across the technology stack.


Alphabet CEO Eyes Bigger AI Investments as Opportunities Grow

Alphabet is ramping up large-scale AI investments as its early bet on SpaceX approaches a potential $100 billion return. CEO Sundar Pichai says the AI boom is creating new opportunities.

By Samantha Reed | Edited by Maria Konash
Alphabet CEO eyes bigger AI investments as valuations surge, with Pichai signaling new growth opportunities. Image: Stripe

Alphabet is preparing to expand its direct investments in artificial intelligence startups, buoyed by massive gains from earlier bets such as SpaceX. CEO Sundar Pichai said the company sees a growing number of opportunities to deploy capital as AI reshapes the technology landscape. His comments come as Alphabet’s 2015 investment in SpaceX could be worth around $100 billion, depending on future valuation milestones tied to a potential IPO.

Speaking in a conversation published Tuesday, Pichai pointed to SpaceX and Anthropic as examples of how early investments can scale alongside major technology shifts. Alphabet initially invested $900 million in SpaceX at a $12 billion valuation. Following a merger between SpaceX and xAI earlier this year, the combined entity has been valued as high as $1.25 trillion, with reports suggesting a future IPO could target $1.75 trillion. If Alphabet has maintained its stake, the return would rank among the most successful venture-style investments in the company’s history.
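As a back-of-the-envelope check on the roughly $100 billion figure, the implied value of the stake can be worked out directly from the numbers above, assuming the 2015 position was never diluted or sold (a simplification; as the article notes, the actual outcome depends on valuation milestones and any dilution):

```python
# Rough estimate of Alphabet's SpaceX stake value, assuming no dilution.
initial_investment = 900e6   # Alphabet's 2015 investment, USD
entry_valuation = 12e9       # SpaceX valuation at the time, USD

stake = initial_investment / entry_valuation  # implied ownership, ~7.5%

# Value the stake at the reported post-merger valuation and the rumored IPO target.
for valuation in (1.25e12, 1.75e12):
    value = stake * valuation
    print(f"At ${valuation / 1e12:.2f}T valuation: stake worth ~${value / 1e9:.0f}B")
```

An undiluted 7.5% stake would be worth roughly $94 billion at the $1.25 trillion valuation, consistent with the "around $100 billion" cited above once later funding rounds are accounted for.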

The company is now adapting its investment strategy to match the scale of the AI boom. Rather than relying solely on its venture arms GV and CapitalG, Alphabet is increasingly deploying capital directly from its balance sheet. This approach mirrors moves by other major technology firms, including Nvidia, Microsoft, and Amazon, as AI startups require significantly larger funding rounds than traditional venture deals.

Anthropic illustrates this shift. Alphabet invested $300 million in the AI startup in 2023, followed by an additional $2 billion later that year. Its total investment now exceeds $3 billion, with a reported ownership stake of about 14%. Over the same period, Anthropic’s valuation has surged to roughly $380 billion, reflecting rapid growth in demand for generative AI systems. The partnership also has strategic value, as Anthropic relies on Google’s cloud infrastructure and tensor processing units to run its models.

From Venture Bets to Strategic Capital

Pichai’s comments suggest Alphabet is moving beyond passive venture investing toward a more strategic model tied closely to its core business. Large AI investments can drive demand for its cloud services, custom chips, and infrastructure, creating a feedback loop between capital deployment and product growth.

This shift also reflects lessons from past investments. Pichai noted that Alphabet could have invested more heavily in its own autonomous vehicle unit, Waymo, at earlier stages. Waymo has since raised significant external funding, including a $16 billion round this year that valued the company at $126 billion.

A New Era of Mega Investments

The scale of current AI funding rounds is reshaping how tech giants allocate capital. Companies are increasingly making multi-billion-dollar investments to secure strategic partnerships and infrastructure demand, rather than pursuing smaller, diversified venture portfolios.

Alphabet’s experience with companies like Stripe, where early investments have grown significantly in value, reinforces the potential upside of this approach. But the AI era is raising the stakes, with fewer deals and much larger check sizes.

As competition intensifies, Alphabet’s willingness to invest aggressively could determine its position in the next phase of AI development, where capital, infrastructure, and partnerships are becoming as critical as the technology itself.


Anthropic Launches Glasswing to Deploy AI for Cyber Defense

Anthropic has launched Project Glasswing with major tech partners to use advanced AI for identifying and fixing software vulnerabilities. The move comes as AI models reach unprecedented offensive cyber capabilities.

By Marcus Lee | Edited by Maria Konash
Anthropic launches Project Glasswing with tech partners to boost AI-driven cybersecurity defense. Image: Anthropic

Anthropic has unveiled Project Glasswing, a large-scale cybersecurity initiative bringing together major technology and financial firms to use advanced AI models for defensive security. The project includes partners such as Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, Nvidia, and Palo Alto Networks. At its core is Claude Mythos Preview, an unreleased AI model that Anthropic says can identify and exploit software vulnerabilities at a level exceeding most human experts.

The initiative reflects growing concern that AI-driven cyber capabilities are advancing faster than existing defenses. According to Anthropic, Mythos Preview has already identified thousands of high-severity vulnerabilities across widely used systems, including operating systems, web browsers, and open-source infrastructure. In some cases, the model uncovered flaws that had remained undetected for decades, such as vulnerabilities in OpenBSD, FFmpeg, and the Linux kernel. Many of these issues have since been patched after disclosure to maintainers.

Project Glasswing aims to ensure these capabilities are used defensively. Participating organizations will deploy the model to scan their own systems and critical infrastructure, while Anthropic shares findings across the broader ecosystem. The company is committing up to $100 million in usage credits for the model, along with $4 million in direct funding for open-source security efforts. Access has also been extended to more than 40 additional organizations responsible for maintaining key software systems.

The urgency behind the effort stems from a shift in how vulnerabilities are discovered and exploited. Tasks that once required highly specialized expertise can now be automated by AI systems with advanced reasoning and coding abilities. Anthropic warns that the gap between vulnerability discovery and exploitation is shrinking rapidly, increasing the risk of widespread cyberattacks. The global cost of cybercrime is already estimated at around $500 billion annually, with threats ranging from attacks on hospitals and infrastructure to state-sponsored operations targeting national security systems.

Turning Offensive Capabilities Into Defense

Claude Mythos Preview represents a new class of AI systems capable of autonomously identifying and chaining together vulnerabilities to create sophisticated exploits. While this raises concerns about misuse, it also offers a powerful tool for defenders. By deploying the model proactively, organizations can identify and fix weaknesses before they are exploited.

Early testing by partners suggests the model can analyze complex codebases, perform penetration testing, and uncover vulnerabilities missed by traditional tools. Anthropic said it does not plan to release the model publicly, citing safety risks, but intends to develop safeguards that could enable broader deployment of similar systems in the future.

Industry-Wide Collaboration

Project Glasswing underscores the need for coordinated action across the technology industry. No single company can address the risks posed by AI-driven cyber threats alone, particularly as critical infrastructure increasingly depends on shared software components and open-source code.

The initiative also points to evolving cybersecurity practices. Anthropic and its partners plan to develop recommendations covering vulnerability disclosure, software updates, supply chain security, and automated patching. Governments are expected to play a role as well, particularly given the national security implications of advanced cyber capabilities.

As AI continues to reshape both offensive and defensive cybersecurity, Glasswing represents an early attempt to tip the balance in favor of defenders. Whether it succeeds may depend on how quickly the broader ecosystem adapts to a rapidly changing threat landscape.

Google Expands Gemini With AI-Powered Mental Health Support

Google is adding new mental health features to Gemini, including crisis detection tools and direct hotline access. The company is also committing $30 million to expand global support services.

By Samantha Reed | Edited by Maria Konash

Google is expanding the role of its Gemini AI assistant in mental health support, introducing new features designed to connect users with crisis resources and human help. The update includes improved detection of sensitive conversations, a redesigned interface for accessing support, and new funding aimed at strengthening global mental health services. The move reflects growing use of AI tools in personal and emotional contexts, as well as increasing scrutiny over how such systems handle vulnerable users.

A key change is the introduction of a “Help is available” module within Gemini, which appears when conversations suggest a user may need mental health support. Developed with clinical experts, the feature aims to provide clearer and faster pathways to assistance. In more urgent situations, such as indications of self-harm or suicidal thoughts, Gemini will trigger a simplified interface offering one-touch access to crisis hotlines. Users can immediately call, text, chat, or visit support services, with prompts encouraging them to seek professional help. These options remain visible throughout the conversation once activated.

Google is also investing in the broader support ecosystem. Through Google.org, the company is committing $30 million over three years to help crisis hotlines expand their capacity globally. In addition, it is deepening its partnership with ReflexAI, providing $4 million in funding and integrating Gemini into tools used to train support staff. The collaboration includes enhancements to “Prepare,” a platform that uses AI simulations to help volunteers and professionals practice handling difficult conversations. Education-focused organizations are among the initial beneficiaries of this effort.

The company said it has refined how Gemini responds in sensitive scenarios. The system is designed to prioritize directing users to real-world help rather than acting as a substitute for professional care. It avoids reinforcing harmful behaviors or confirming false beliefs, while encouraging help-seeking in a measured and supportive tone. Google emphasized that Gemini is not intended to replace therapy or crisis services, but to guide users toward appropriate resources when needed.

A More Cautious Role for AI

The update highlights a broader shift in how AI companies approach mental health. As conversational tools become more widely used, companies face pressure to ensure systems respond responsibly in high-risk situations. Google’s approach focuses on limiting the AI’s role while strengthening connections to human support.

At the same time, the company is adding safeguards for younger users. These include restrictions preventing Gemini from presenting itself as a human-like companion, as well as measures to reduce the risk of emotional dependence. The system also avoids generating content that could encourage bullying or harmful interactions.

Expanding Access to Support

Google’s latest changes reflect a growing recognition that AI tools are increasingly part of everyday life, including moments of distress. By combining AI-driven detection with direct access to crisis services, the company is attempting to make support more immediate and accessible.

The initiative also underscores the scale of the challenge. With more than one billion people affected by mental health conditions globally, demand for support continues to outpace available resources. Google’s funding and partnerships aim to help bridge that gap, while positioning Gemini as a gateway to professional care rather than a replacement for it.


Intel Teams Up with Musk on Terafab to Scale AI Chip Production

Intel has joined Elon Musk’s Terafab project aimed at scaling AI chip production, though its exact role remains unclear. The effort targets massive compute output for AI and robotics.

By Olivia Grant | Edited by Maria Konash
Intel partners with Musk’s Terafab to scale AI chip output toward 1TW of compute. Image: Rubaitul Azad / Unsplash

Intel said Tuesday it will participate in Elon Musk’s “Terafab” initiative, a project focused on expanding semiconductor manufacturing and generating massive computing power for artificial intelligence and robotics. While the company did not disclose specific responsibilities, it confirmed the partnership in a post on X, highlighting its role in designing, fabricating, and packaging high-performance chips at scale. Intel shares rose about 2% following the announcement.

The Terafab project brings together several of Musk’s companies, including SpaceX, xAI, and Tesla, in an effort to rethink how advanced chips are produced. The initiative aims to add as much as 1 terawatt of compute capacity per year, a scale that reflects the rapidly increasing demands of AI systems. Intel’s contribution appears centered on its core strength in semiconductor manufacturing, particularly its ability to integrate chip design, fabrication, and advanced packaging technologies.

Although details remain limited, Intel’s involvement signals a potential shift in how large-scale AI infrastructure is developed. The company recently hosted Musk and xAI team members at its headquarters, suggesting early-stage collaboration and alignment. A photo shared by Intel showed Musk alongside CEO Lip-Bu Tan, reinforcing the strategic nature of the partnership.

Terafab’s ambition is notable even within the context of today’s AI boom. Producing 1 terawatt of compute annually would require vast manufacturing capacity and energy resources, far exceeding current deployments by most AI firms. The project appears designed to support Musk’s expanding ecosystem, including AI models developed by xAI, autonomous systems at Tesla, and data-intensive operations at SpaceX.

A New Model for AI Infrastructure

The collaboration reflects a broader trend toward tighter integration between chipmakers and AI developers. Instead of relying solely on external suppliers, companies are increasingly forming partnerships to secure dedicated compute capacity and optimize hardware for specific workloads. Intel’s manufacturing capabilities could complement Musk’s vertically integrated approach across hardware, software, and data.

At the same time, the move positions Intel more directly in competition with other semiconductor leaders benefiting from the AI surge. Nvidia has dominated the market for AI chips, while companies like AMD and Broadcom are expanding their roles through custom silicon and infrastructure partnerships. By aligning with Terafab, Intel may be seeking to strengthen its relevance in next-generation AI systems.

Scaling Beyond Traditional Limits

The Terafab initiative also highlights the industrial scale that AI development is reaching. Meeting the project’s compute targets would require advances not only in chip design but also in fabrication processes, supply chains, and energy efficiency. Intel emphasized that its ability to produce and package chips at scale will be central to achieving these goals.

For Musk’s companies, the project could provide greater control over critical infrastructure, reducing reliance on third-party suppliers and enabling faster iteration of AI systems. For the semiconductor industry, it points to a future where partnerships between chipmakers and AI firms become essential to meeting the growing demand for compute.


Broadcom Expands Google, Anthropic AI Chip Partnerships

Broadcom is expanding its role in AI infrastructure through new chip and compute deals with Google and Anthropic. The move reflects accelerating demand for large-scale AI capacity.

By Olivia Grant | Edited by Maria Konash
Broadcom deepens AI push with Google chips and Anthropic deal, signaling surging infrastructure demand. Image: Laura Ockel / Unsplash

Broadcom is expanding its footprint in artificial intelligence infrastructure through new agreements with Google and Anthropic, underscoring the growing demand for compute power behind generative AI systems. The company said it will develop future versions of Google’s AI chips while also supporting a major expansion of Anthropic’s access to computing capacity. The updates, disclosed in a regulatory filing, pushed Broadcom shares up about 3% in extended trading.

At the center of the announcement is Broadcom’s continued work on Google’s tensor processing units, or TPUs, custom chips designed to train and run AI models at scale. While the companies have collaborated for years, the latest agreement signals a deeper alignment as competition intensifies among chipmakers and cloud providers. Custom silicon is becoming increasingly important as AI companies look for alternatives to general-purpose graphics processing units.

Broadcom is also scaling its relationship with Anthropic, one of the fastest-growing AI startups. The expanded deal will give Anthropic access to roughly 3.5 gigawatts of compute capacity, primarily powered by Google’s TPU infrastructure. That marks a sharp increase from earlier deployments. Broadcom CEO Hock Tan recently said the company had already begun supplying around 1 gigawatt of compute to Anthropic, with demand expected to exceed 3 gigawatts by 2027.

Anthropic’s rapid growth helps explain the scale of the investment. The company said its annualized revenue has surpassed $30 billion, up from about $9 billion at the end of last year. It now counts more than 1,000 enterprise customers spending over $1 million annually, a figure that has doubled in just two months. Its Claude chatbot also saw a surge in popularity earlier this year, briefly becoming the most downloaded free app in Apple’s U.S. App Store.

The broader opportunity for Broadcom could be substantial. Analysts at Mizuho estimate the company may generate $21 billion in AI-related revenue from Anthropic in 2026, potentially doubling to $42 billion in 2027. While Broadcom did not disclose financial terms, the projections highlight how central large AI customers are becoming to semiconductor revenue growth.

A Shift Beyond GPUs

The deals also reflect a wider shift in how AI infrastructure is built. For years, companies like Anthropic and OpenAI have relied heavily on Nvidia GPUs accessed through cloud providers such as Amazon, Google, and Microsoft. That model is now evolving.

Broadcom is working with multiple AI developers, including OpenAI, on custom silicon tailored to specific workloads. At the same time, OpenAI has committed to using large volumes of AMD GPUs, signaling a diversification of suppliers. This mix of custom chips and alternative hardware suggests the AI ecosystem is moving toward more specialized and distributed infrastructure strategies.

Scaling the AI Backbone

The expansion of compute capacity into the gigawatt range highlights the industrial scale of modern AI. Training and deploying advanced models now requires vast energy, data center space, and specialized hardware. Much of Anthropic’s new infrastructure is expected to be located in the United States, reflecting both capacity needs and strategic considerations around data and supply chains.

For Broadcom, the partnerships reinforce its transition from a traditional semiconductor supplier into a key enabler of AI platforms. For the industry, they illustrate how the race to build and control AI infrastructure is becoming as critical as the development of the models themselves.
