Alibaba and China Telecom Launch AI Data Center Powered by Custom Chips

Alibaba and China Telecom are building a major AI data center powered by Alibaba’s own chips, marking a push toward domestic infrastructure amid U.S. restrictions.

By Olivia Grant. Edited by Maria Konash.
Alibaba and China Telecom launch AI data center with Zhenwu chips, boosting China’s tech self-reliance. Image: İsmail Enes Ayhan / Unsplash

Alibaba and China Telecom have announced a new artificial intelligence data center in southern China powered by Alibaba’s in-house semiconductor technology, signaling a deeper push toward domestic AI infrastructure. The facility, located in Shaoguan in Guangdong province, will initially deploy 10,000 of Alibaba’s Zhenwu chips, designed for both AI training and inference. The project comes as China accelerates efforts to reduce reliance on foreign semiconductor technology amid ongoing U.S. export restrictions.

The Zhenwu chips are built to support large-scale AI models with hundreds of billions of parameters, placing them in the category of the most advanced systems currently in use. Alibaba said the facility will eventually scale to 100,000 chips, significantly expanding its compute capacity. The data center is expected to support a wide range of applications, including healthcare research, advanced materials development, and other industrial use cases.

The initiative reflects Alibaba’s vertically integrated AI strategy. Through its T-Head semiconductor unit, the company designs its own chips, while also operating one of China’s largest cloud computing platforms. It develops AI models and delivers them through its cloud services, making infrastructure a key part of its growth. Cloud computing has been one of Alibaba’s fastest-growing segments in recent quarters, driven in part by rising demand for AI capabilities.

Alongside the infrastructure announcement, Alibaba CEO Eddie Wu introduced a new internal technology committee aimed at accelerating AI development. The group includes senior leadership across AI and cloud divisions, including the company’s chief AI architect and top technology executives. The move signals a coordinated push to strengthen Alibaba’s position in the rapidly evolving AI market.

Domestic Chips Take Center Stage

The project highlights China’s broader effort to build self-sufficient AI infrastructure. U.S. restrictions on advanced semiconductor exports, particularly high-performance AI chips from companies like Nvidia, have forced Chinese firms to invest heavily in domestic alternatives. Alibaba’s Zhenwu chips are part of that strategy, alongside similar efforts from companies such as Huawei.

Recent developments underscore this trend. A large computing cluster powered by Huawei’s Ascend 910C chips went online last month, further expanding China’s homegrown AI capabilities. These initiatives suggest a growing ecosystem of domestic hardware designed to support increasingly complex AI workloads.

A Different Approach to AI Scale

While U.S. technology companies are projected to spend hundreds of billions of dollars on AI infrastructure this year, Chinese firms are taking a more targeted approach. Rather than focusing solely on scale, companies like Alibaba are aligning investments with specific industries where AI can drive near-term revenue.

The new data center reflects that strategy, combining large-scale compute with practical applications across sectors. For China Telecom, the partnership strengthens its role in national digital infrastructure, while for Alibaba, it reinforces its position across chips, cloud, and AI services.

As global competition in AI intensifies, projects like this illustrate how regional strategies are diverging, with China prioritizing self-reliance and integration across the technology stack.


Research: AI Is Reshaping Entry-Level Roles, Not Replacing Them

New research shows AI is speeding up how quickly early-career employees become productive, while raising expectations and reshaping traditional entry-level roles.

By Maria Konash.
AI accelerates early-career productivity, reshaping entry-level roles and expectations. Image: Sam Balye / Unsplash

Artificial intelligence is rapidly transforming the early stages of professional work, not by eliminating entry-level roles, but by accelerating how quickly new hires become productive. According to new research from SAP and Wakefield, 88% of chief human resources officers say AI is helping early-career employees become role-ready faster. The shift is changing how companies onboard talent and redefining expectations for new hires from their first days on the job.

Traditionally, entry-level roles relied on repetitive, lower-risk tasks to help employees learn workflows and build experience over time. Increasingly, those tasks are being automated by AI systems. As a result, early-career employees are stepping into higher-value work much sooner. The study found that 79% of organizations provide enterprise AI tools to new hires within their first month, while 87% expect employees to either arrive with AI skills or quickly develop them.

This acceleration is already affecting performance outcomes. More than half of surveyed HR leaders reported increased confidence and productivity among early-career employees using AI tools. However, the faster ramp-up also comes with rising expectations. Companies are hiring fewer entry-level workers, but expecting those they do hire to contribute more strategically and handle complex tasks earlier in their tenure.

The shift is creating new pressures. Without traditional learning buffers, new hires face higher cognitive demands as they manage AI-driven workflows. Some researchers describe this as “AI brain fry,” referring to the mental strain associated with keeping pace with accelerated work environments. At the same time, gaps in guidance are leading to increased use of unauthorized tools, with 56% of HR leaders reporting “shadow AI” adoption among early-career staff.

Other risks are emerging. Uneven access to AI tools across teams is contributing to higher attrition risk, according to 44% of respondents. Meanwhile, 38% of leaders expressed concern that foundational skills such as communication, critical thinking, and collaboration may be underdeveloped as AI takes over routine tasks.

Rethinking the First Step Into Work

The findings suggest that organizations must redesign entry-level roles rather than eliminate them. With fewer opportunities for gradual, task-based learning, companies are being pushed to create more structured development pathways. This includes emphasizing project-based work, clearer decision-making frameworks, and more consistent coaching focused on judgment and prioritization.

There is also a growing need for formal AI governance. Introducing clear guidelines during onboarding and reinforcing best practices can help reduce misuse and ensure employees understand how to use AI responsibly. Ensuring equal access to tools and training is equally important, as disparities can increase stress and limit performance.

A New Balance of Skills

The research points to a broader shift in what defines early-career success. Technical fluency with AI is becoming a baseline expectation, but it is not sufficient on its own. Human skills such as communication, collaboration, and critical thinking are becoming more valuable as employees take on higher-level responsibilities earlier.

For businesses, the ability to harness this accelerated productivity could drive faster innovation and efficiency. But without proper structure and support, the same forces could lead to burnout, skill gaps, and higher turnover.

As AI continues to reshape the workplace, the challenge is no longer whether entry-level roles will exist, but how they will evolve to balance speed, capability, and long-term development.


Alphabet CEO Eyes Bigger AI Investments as Opportunities Grow

Alphabet is ramping up large-scale AI investments as its early bet on SpaceX approaches a potential $100 billion return. CEO Sundar Pichai says the AI boom is creating new opportunities.

By Samantha Reed. Edited by Maria Konash.
Alphabet CEO eyes bigger AI investments as valuations surge, with Pichai signaling new growth opportunities. Image: Stripe

Alphabet is preparing to expand its direct investments in artificial intelligence startups, buoyed by massive gains from earlier bets such as SpaceX. CEO Sundar Pichai said the company sees a growing number of opportunities to deploy capital as AI reshapes the technology landscape. His comments come as Alphabet’s 2015 investment in SpaceX could be worth around $100 billion, depending on future valuation milestones tied to a potential IPO.

Speaking in a conversation published Tuesday, Pichai pointed to SpaceX and Anthropic as examples of how early investments can scale alongside major technology shifts. Alphabet initially invested $900 million in SpaceX at a $12 billion valuation. Following a merger between SpaceX and xAI earlier this year, the combined entity has been valued as high as $1.25 trillion, with reports suggesting a future IPO could target $1.75 trillion. If Alphabet has maintained its stake, the return would rank among the most successful venture-style investments in the company’s history.

The company is now adapting its investment strategy to match the scale of the AI boom. Rather than relying solely on its venture arms GV and CapitalG, Alphabet is increasingly deploying capital directly from its balance sheet. This approach mirrors moves by other major technology firms, including Nvidia, Microsoft, and Amazon, as AI startups require significantly larger funding rounds than traditional venture deals.

Anthropic illustrates this shift. Alphabet invested $300 million in the AI startup in 2023, followed by an additional $2 billion later that year. Its total investment now exceeds $3 billion, with a reported ownership stake of about 14%. Over the same period, Anthropic’s valuation has surged to roughly $380 billion, reflecting rapid growth in demand for generative AI systems. The partnership also has strategic value, as Anthropic relies on Google’s cloud infrastructure and tensor processing units to run its models.

From Venture Bets to Strategic Capital

Pichai’s comments suggest Alphabet is moving beyond passive venture investing toward a more strategic model tied closely to its core business. Large AI investments can drive demand for its cloud services, custom chips, and infrastructure, creating a feedback loop between capital deployment and product growth.

This shift also reflects lessons from past investments. Pichai noted that Alphabet could have invested more heavily in its own autonomous vehicle unit, Waymo, at earlier stages. Waymo has since raised significant external funding, including a $16 billion round this year that valued the company at $126 billion.

A New Era of Mega Investments

The scale of current AI funding rounds is reshaping how tech giants allocate capital. Companies are increasingly making multi-billion-dollar investments to secure strategic partnerships and infrastructure demand, rather than pursuing smaller, diversified venture portfolios.

Alphabet’s experience with companies like Stripe, where early investments have grown significantly in value, reinforces the potential upside of this approach. But the AI era is raising the stakes, with fewer deals and much larger check sizes.

As competition intensifies, Alphabet’s willingness to invest aggressively could determine its position in the next phase of AI development, where capital, infrastructure, and partnerships are becoming as critical as the technology itself.

Anthropic Launches Glasswing to Deploy AI for Cyber Defense

Anthropic has launched Project Glasswing with major tech partners to use advanced AI for identifying and fixing software vulnerabilities. The move comes as AI models reach unprecedented offensive cyber capabilities.

By Marcus Lee. Edited by Maria Konash.
Anthropic launches Project Glasswing with tech partners to boost AI-driven cybersecurity defense. Image: Anthropic

Anthropic has unveiled Project Glasswing, a large-scale cybersecurity initiative bringing together major technology and financial firms to use advanced AI models for defensive security. The project includes partners such as Amazon Web Services, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, Nvidia, and Palo Alto Networks. At its core is Claude Mythos Preview, an unreleased AI model that Anthropic says can identify and exploit software vulnerabilities at a level exceeding most human experts.

The initiative reflects growing concern that AI-driven cyber capabilities are advancing faster than existing defenses. According to Anthropic, Mythos Preview has already identified thousands of high-severity vulnerabilities across widely used systems, including operating systems, web browsers, and open-source infrastructure. In some cases, the model uncovered flaws that had remained undetected for decades, such as vulnerabilities in OpenBSD, FFmpeg, and the Linux kernel. Many of these issues have since been patched after disclosure to maintainers.

Project Glasswing aims to ensure these capabilities are used defensively. Participating organizations will deploy the model to scan their own systems and critical infrastructure, while Anthropic shares findings across the broader ecosystem. The company is committing up to $100 million in usage credits for the model, along with $4 million in direct funding for open-source security efforts. Access has also been extended to more than 40 additional organizations responsible for maintaining key software systems.

The urgency behind the effort stems from a shift in how vulnerabilities are discovered and exploited. Tasks that once required highly specialized expertise can now be automated by AI systems with advanced reasoning and coding abilities. Anthropic warns that the gap between vulnerability discovery and exploitation is shrinking rapidly, increasing the risk of widespread cyberattacks. The global cost of cybercrime is already estimated at around $500 billion annually, with threats ranging from attacks on hospitals and infrastructure to state-sponsored operations targeting national security systems.

Turning Offensive Capabilities Into Defense

Claude Mythos Preview represents a new class of AI systems capable of autonomously identifying and chaining together vulnerabilities to create sophisticated exploits. While this raises concerns about misuse, it also offers a powerful tool for defenders. By deploying the model proactively, organizations can identify and fix weaknesses before they are exploited.

Early testing by partners suggests the model can analyze complex codebases, perform penetration testing, and uncover vulnerabilities missed by traditional tools. Anthropic said it does not plan to release the model publicly, citing safety risks, but intends to develop safeguards that could enable broader deployment of similar systems in the future.

Industry-Wide Collaboration

Project Glasswing underscores the need for coordinated action across the technology industry. No single company can address the risks posed by AI-driven cyber threats alone, particularly as critical infrastructure increasingly depends on shared software components and open-source code.

The initiative also points to evolving cybersecurity practices. Anthropic and its partners plan to develop recommendations covering vulnerability disclosure, software updates, supply chain security, and automated patching. Governments are expected to play a role as well, particularly given the national security implications of advanced cyber capabilities.

As AI continues to reshape both offensive and defensive cybersecurity, Glasswing represents an early attempt to tip the balance in favor of defenders. Whether it succeeds may depend on how quickly the broader ecosystem adapts to a rapidly changing threat landscape.

Google Expands Gemini With AI-Powered Mental Health Support

Google is adding new mental health features to Gemini, including crisis detection tools and direct hotline access. The company is also committing $30 million to expand global support services.

By Samantha Reed. Edited by Maria Konash.

Google is expanding the role of its Gemini AI assistant in mental health support, introducing new features designed to connect users with crisis resources and human help. The update includes improved detection of sensitive conversations, a redesigned interface for accessing support, and new funding aimed at strengthening global mental health services. The move reflects growing use of AI tools in personal and emotional contexts, as well as increasing scrutiny over how such systems handle vulnerable users.

A key change is the introduction of a “Help is available” module within Gemini, which appears when conversations suggest a user may need mental health support. Developed with clinical experts, the feature aims to provide clearer and faster pathways to assistance. In more urgent situations, such as indications of self-harm or suicidal thoughts, Gemini will trigger a simplified interface offering one-touch access to crisis hotlines. Users can immediately call, text, chat, or visit support services, with prompts encouraging them to seek professional help. These options remain visible throughout the conversation once activated.

Google is also investing in the broader support ecosystem. Through Google.org, the company is committing $30 million over three years to help crisis hotlines expand their capacity globally. In addition, it is deepening its partnership with ReflexAI, providing $4 million in funding and integrating Gemini into tools used to train support staff. The collaboration includes enhancements to “Prepare,” a platform that uses AI simulations to help volunteers and professionals practice handling difficult conversations. Education-focused organizations are among the initial beneficiaries of this effort.

The company said it has refined how Gemini responds in sensitive scenarios. The system is designed to prioritize directing users to real-world help rather than acting as a substitute for professional care. It avoids reinforcing harmful behaviors or confirming false beliefs, while encouraging help-seeking in a measured and supportive tone. Google emphasized that Gemini is not intended to replace therapy or crisis services, but to guide users toward appropriate resources when needed.

A More Cautious Role for AI

The update highlights a broader shift in how AI companies approach mental health. As conversational tools become more widely used, companies face pressure to ensure systems respond responsibly in high-risk situations. Google’s approach focuses on limiting the AI’s role while strengthening connections to human support.

At the same time, the company is adding safeguards for younger users. These include restrictions preventing Gemini from presenting itself as a human-like companion, as well as measures to reduce the risk of emotional dependence. The system also avoids generating content that could encourage bullying or harmful interactions.

Expanding Access to Support

Google’s latest changes reflect a growing recognition that AI tools are increasingly part of everyday life, including moments of distress. By combining AI-driven detection with direct access to crisis services, the company is attempting to make support more immediate and accessible.

The initiative also underscores the scale of the challenge. With more than one billion people affected by mental health conditions globally, demand for support continues to outpace available resources. Google’s funding and partnerships aim to help bridge that gap, while positioning Gemini as a gateway to professional care rather than a replacement for it.


Intel Teams Up with Musk on Terafab to Scale AI Chip Production

Intel has joined Elon Musk’s Terafab project aimed at scaling AI chip production, though its exact role remains unclear. The effort targets massive compute output for AI and robotics.

By Olivia Grant. Edited by Maria Konash.
Intel partners with Musk’s Terafab to scale AI chip output toward 1TW of compute. Image: Rubaitul Azad / Unsplash

Intel said Tuesday it will participate in Elon Musk’s “Terafab” initiative, a project focused on expanding semiconductor manufacturing and generating massive computing power for artificial intelligence and robotics. While the company did not disclose specific responsibilities, it confirmed the partnership in a post on X, highlighting its role in designing, fabricating, and packaging high-performance chips at scale. Intel shares rose about 2% following the announcement.

The Terafab project brings together several of Musk’s companies, including SpaceX, xAI, and Tesla, in an effort to rethink how advanced chips are produced. The initiative aims to deliver as much as 1 terawatt per year of compute capacity, a scale that reflects the rapidly increasing demands of AI systems. Intel’s contribution appears centered on its core strength in semiconductor manufacturing, particularly its ability to integrate chip design, fabrication, and advanced packaging technologies.

Although details remain limited, Intel’s involvement signals a potential shift in how large-scale AI infrastructure is developed. The company recently hosted Musk and xAI team members at its headquarters, suggesting early-stage collaboration and alignment. A photo shared by Intel showed Musk alongside CEO Lip-Bu Tan, reinforcing the strategic nature of the partnership.

Terafab’s ambition is notable even within the context of today’s AI boom. Producing 1 terawatt of compute annually would require vast manufacturing capacity and energy resources, far exceeding current deployments by most AI firms. The project appears designed to support Musk’s expanding ecosystem, including AI models developed by xAI, autonomous systems at Tesla, and data-intensive operations at SpaceX.

A New Model for AI Infrastructure

The collaboration reflects a broader trend toward tighter integration between chipmakers and AI developers. Instead of relying solely on external suppliers, companies are increasingly forming partnerships to secure dedicated compute capacity and optimize hardware for specific workloads. Intel’s manufacturing capabilities could complement Musk’s vertically integrated approach across hardware, software, and data.

At the same time, the move positions Intel more directly in competition with other semiconductor leaders benefiting from the AI surge. Nvidia has dominated the market for AI chips, while companies like AMD and Broadcom are expanding their roles through custom silicon and infrastructure partnerships. By aligning with Terafab, Intel may be seeking to strengthen its relevance in next-generation AI systems.

Scaling Beyond Traditional Limits

The Terafab initiative also highlights the industrial scale that AI development is reaching. Meeting the project’s compute targets would require advances not only in chip design but also in fabrication processes, supply chains, and energy efficiency. Intel emphasized that its ability to produce and package chips at scale will be central to achieving these goals.

For Musk’s companies, the project could provide greater control over critical infrastructure, reducing reliance on third-party suppliers and enabling faster iteration of AI systems. For the semiconductor industry, it points to a future where partnerships between chipmakers and AI firms become essential to meeting the growing demand for compute.
