Japan’s Toto, Best Known for Toilets, Sees Shares Surge on AI Chip Boom

Toto Ltd. shares jumped after strong semiconductor growth tied to AI demand. The shift highlights how legacy manufacturers are benefiting from the global AI infrastructure boom.

By Olivia Grant. Edited by Maria Konash.
Toto shares surge as AI chip demand boosts semiconductor unit, offsetting core business slowdown. Image: John Cameron / Unsplash

Toto Ltd., best known globally for its high-tech toilets, saw its shares rise 18% to a five-year high following a strong earnings report driven by semiconductor demand. The Japanese company reported that its chip-related business is expanding rapidly, fueled by the global surge in artificial intelligence infrastructure. This comes as demand for memory chips, essential for AI systems, continues to accelerate. The company is now doubling down on this segment with new investments aimed at scaling production and research.

Toto announced plans to invest about $190 million to expand its semiconductor component operations and strengthen research and development. The company produces electrostatic chucks, specialized components used to hold silicon wafers in place during the manufacturing of NAND flash memory. Toto is currently the second-largest producer of these components globally. Sales in its semiconductor division grew 34% year over year, and the unit now contributes more than half of the company’s operating profit, reflecting a significant shift in its business mix.

The pivot is not entirely new but represents an acceleration of an existing strategy. While Toto’s toilet division remains widely recognized for its advanced features, including automated cleaning and deodorizing systems, that segment is facing headwinds. Supply chain disruptions tied to material shortages, particularly adhesives and plastics linked to the Middle East energy crisis, have forced the company to halt new orders temporarily. This contrast has made its semiconductor business increasingly central to overall performance.

What It Means

Toto’s results underscore how the AI boom is reshaping supply chains beyond traditional tech companies. Demand for memory chips, especially NAND flash used in data centers, is creating opportunities for component suppliers that were previously niche players. For businesses, this signals a broader shift where industrial and manufacturing firms can gain relevance in the AI economy. For investors, it highlights how companies with indirect exposure to AI infrastructure may see outsized gains. End users may not interact directly with these components, but they underpin faster and more capable AI systems.

Industry Backdrop

The surge in AI investment has triggered a global race to expand semiconductor capacity, particularly in memory and processing components. Companies across the supply chain, from chipmakers to materials providers, are scaling operations to meet demand. Toto’s move mirrors a wider trend of diversification among traditional manufacturers seeking growth in high-tech sectors. At the same time, supply chain fragility remains a concern, as seen in Toto’s core business challenges. The company’s dual exposure to consumer products and semiconductor infrastructure reflects the evolving intersection of legacy industries and emerging technologies.

AI & Machine Learning, Cloud & Infrastructure, News

Anthropic Partners With Blackstone, Goldman to Launch $1.5B AI Venture

Anthropic and major investment firms are forming a $1.5 billion AI services venture to deploy Claude across portfolio companies. The move targets mid-sized firms lacking in-house AI expertise.

By Maria Konash.
Anthropic teams with Blackstone and Goldman Sachs on $1.5B venture to expand Claude across enterprises. Image: Anthropic

Anthropic has teamed up with Blackstone, Goldman Sachs, and Hellman & Friedman to launch a new AI services company backed by roughly $1.5 billion. The joint venture will focus on deploying Anthropic’s Claude AI system within portfolio companies owned by these firms. The initiative reflects growing demand for hands-on AI integration, particularly among mid-sized organizations that lack internal technical resources. The announcement comes as enterprises accelerate efforts to embed AI into core operations rather than experiment at the margins.

Each of the primary partners is contributing capital to the venture. Anthropic, Blackstone, and Hellman & Friedman are investing $300 million each, while Goldman Sachs is committing $150 million. The company is also supported by a broader group of asset managers, including General Atlantic, Apollo Global Management, GIC, Leonard Green & Partners, and Sequoia Capital. The structure is designed to combine capital with operational expertise, allowing the new entity to work directly with management teams inside portfolio companies.

The venture will provide end-to-end AI implementation services, including training staff, integrating Claude into workflows, and redesigning internal processes. Anthropic’s applied AI engineers will collaborate with the new company’s own teams to identify high-impact use cases and build customized solutions. Early examples include healthcare organizations, where AI tools could automate documentation, coding, and compliance tasks, freeing up staff time for patient care. Engagements will typically begin with small teams embedded within client organizations and expand over time based on results.

Strategic Impact

This initiative signals a shift in enterprise AI adoption from experimentation to execution. Rather than selling software licenses alone, AI companies and investors are moving toward integrated service models that combine technology, capital, and operational change. For private equity firms, embedding AI across portfolio companies could improve efficiency and returns. For businesses, especially mid-sized firms, the model lowers the barrier to adopting advanced AI systems by providing both tools and expertise. End users may see faster service delivery and reduced administrative burdens as AI becomes embedded in everyday workflows.

Industry Context

The launch builds on Anthropic’s broader push to expand its Claude Partner Network, which includes consulting and systems integration firms such as Accenture, Deloitte, and PwC. These partnerships have focused primarily on large enterprises, leaving a gap in the mid-market segment that this new venture aims to address.

The move also reflects intensifying competition among AI providers to secure enterprise footholds, as companies seek not just models but practical deployment strategies.

The expansion also comes as Anthropic explores a significantly larger funding round amid strong investor demand and rapid revenue growth. Reports indicate the company could seek up to $50 billion in new capital at a valuation approaching $900 billion, underscoring the scale of interest in AI infrastructure and services as adoption accelerates.

Oracle Layoffs Reveal ‘Train Then Replace’ AI Strategy

Former Oracle employees say they were asked to help train internal AI systems before being laid off, as the company shifts resources toward data centers and AI infrastructure.

By Samantha Reed. Edited by Maria Konash.
Oracle layoffs hit thousands as it pivots to AI, with reports workers trained systems before being cut. Image: BoliviaInteligente / Unsplash

Oracle has laid off up to 30,000 employees over the past month, according to reports, as the company accelerates its pivot toward artificial intelligence and large-scale data center infrastructure. Former workers told TIME that some teams were first instructed to document workflows and use internal AI tools, only to be dismissed shortly afterward.

The layoffs come as Oracle increases investment in AI infrastructure, including its role alongside OpenAI and Nvidia in the Stargate project, a large-scale initiative aimed at expanding computing capacity. Analysts previously estimated that cutting 20,000–30,000 jobs could generate $8–10 billion in additional free cash flow, which could be redirected toward data center construction.

Workers Describe “Train Then Replace” Dynamic

Several former employees said they felt they were effectively helping build systems that would later reduce the need for their roles. Teams were encouraged, or required, to use internal AI tools and document processes that could be automated.

In some cases, employees reported that these tools did not improve productivity and instead created additional work, such as debugging AI-generated code or correcting inaccurate outputs. Others described increased workloads, with expectations rising even as headcount declined.

Financial and Personal Impact

Beyond job loss, many workers also lost significant compensation tied to unvested stock. One example cited involved a long-time employee losing approximately $300,000 in restricted stock units after termination.

For employees on work visas, the layoffs introduced additional risk, including limited time to secure new employment or leave the country. Some former staff also reported losing healthcare coverage or facing reduced severance compared to industry peers.

Strategic Shift Toward AI Infrastructure

Oracle’s leadership has been explicit about prioritizing AI. Chairman Larry Ellison has emphasized that companies building AI infrastructure are likely to dominate future markets. The company is reportedly committing tens of billions of dollars to expand data center capacity, even as it faces the prospect of negative cash flow through the end of the decade.

The layoffs reflect a broader trend across the tech industry, where companies are reallocating resources from traditional roles toward AI development and infrastructure. Similar dynamics are visible at Meta, which recently announced plans to cut around 8,000 jobs while ramping up AI-related spending to as much as $135 billion. Together, these moves underscore how the push for AI-driven productivity is reshaping both investment priorities and the workforce across the industry.

AI & Machine Learning, Cloud & Infrastructure, News

China’s AI ‘Digital Ex’ Trend Blurs Lines Between Memory and Privacy

A growing trend in China allows users to create AI replicas of former partners using personal data. The practice is raising concerns about privacy, emotional dependency, and relationships.

By Samantha Reed. Edited by Maria Konash.
China’s “digital ex” AI trend creates virtual replicas from photos and data, raising privacy and emotional concerns. Image: Kelly Sikkema / Unsplash

In a growing trend in China, users are creating AI-generated replicas of former romantic partners by uploading personal data such as chat logs, photos, and social media content. These systems generate virtual models that mimic speech patterns, personality traits, and communication styles, allowing users to interact with a digital version of their ex. The phenomenon has gained traction among younger users seeking ways to process breakups and unresolved emotions.

The technology builds on tools originally designed for workplace productivity, such as systems that convert communication data into reusable AI “skills.” Developers adapted these tools to personal relationships, enabling users to simulate conversations and interactions based on past experiences. Some platforms allow further customization by adding memories, behavioral details, and shared experiences, making the replicas more realistic over time. In some cases, users integrate these AI models into messaging apps to continue conversations in a familiar format.

Advocates say the approach can provide emotional relief, helping users reflect on past relationships or find closure. Some users report that interacting with a digital version of a former partner allows them to express unresolved feelings or reassess the relationship more objectively. Others see it as a way to gradually detach from emotional dependence by confronting idealized memories.

Emotional and Social Risks

Critics warn that the trend could create new forms of emotional dependency. Interacting with AI replicas may blur the boundary between past and present relationships, potentially complicating users’ ability to move forward. Some experts also raise concerns about “emotional infidelity,” particularly if individuals engage with digital versions of former partners while in new relationships.

There are also concerns about how realistic simulations may influence perception and memory. By selectively reinforcing certain traits or interactions, AI replicas could reshape how users remember past relationships, potentially distorting emotional outcomes.

Privacy and Legal Concerns

The use of personal data to create digital replicas has prompted legal and ethical questions. Uploading chat histories or social media content without consent may violate data protection laws, according to legal experts. The issue is particularly sensitive when the recreated individual has not agreed to their likeness or communication style being used.

The trend reflects a broader shift in how AI is being applied to personal and emotional contexts. As similar technologies are used to recreate deceased individuals or simulate relationships, questions around consent, identity, and psychological impact are becoming more prominent.

AI & Machine Learning, News

Anthropic Eyes $50B Raise as Valuation Nears $900B

Anthropic is considering a major funding round amid strong investor demand and rapid revenue growth. The potential raise could value the company at up to $900 billion.

By Samantha Reed. Edited by Maria Konash.
Anthropic eyes up to $50B raise at $850B-$900B valuation as revenue nears $40B. Image: Anthropic

Anthropic is facing intense investor demand as it considers a new funding round that could raise between $40 billion and $50 billion at a valuation of $850 billion to $900 billion. Multiple preemptive offers have been made to the company, according to sources familiar with the matter, reflecting strong interest ahead of a potential initial public offering. A final decision on whether to proceed with the round is expected at a board meeting in May.

The surge in investor interest is driven by Anthropic’s rapid revenue growth. The company recently reported an annual revenue run rate exceeding $30 billion, up from about $9 billion at the end of 2025, with some estimates placing the current figure closer to $40 billion. Much of this growth is attributed to demand for its AI coding products, including Claude Code and Cowork, which are gaining traction among enterprise users.

Anthropic’s last funding round in February valued the company at $380 billion. If the new round proceeds at the reported terms, it would more than double that valuation and bring Anthropic in line with or ahead of competitors such as OpenAI, which recently raised capital at an $852 billion valuation. Investor appetite appears to exceed supply, with some institutions reportedly seeking multibillion-dollar allocations without securing meetings with company leadership.

Investor Momentum

The scale of interest highlights the growing competition among investors to gain exposure to leading AI companies. Anthropic’s positioning in areas such as coding assistance and enterprise AI tools has made it a key target for capital allocation. The company’s ability to generate substantial revenue early in its lifecycle has further strengthened its appeal.

For investors, the potential round represents an opportunity to participate in one of the largest private funding events in the technology sector. However, the size of the valuation also raises questions about sustainability and long-term returns, particularly as the company approaches a possible public listing.

Market Context

The development comes amid a broader surge in AI investment, with major players raising large amounts of capital to fund infrastructure, research, and product expansion. Companies are competing to scale their models and capture enterprise demand across industries such as finance, healthcare, and life sciences.

Anthropic’s rapid growth and funding momentum reflect the accelerating pace of the AI market. As companies prepare for public offerings, investor expectations are increasingly tied to revenue growth and the ability to translate technical advances into commercial success.

AI & Machine Learning, News, Startups & Investment

Microsoft Defends OpenAI Deal as AI Revenue Hits $37 Billion

Microsoft says its revised OpenAI partnership strengthens flexibility while maintaining key advantages. The company reported AI revenue surpassing $37 billion amid growing multi-model demand.

By Samantha Reed. Edited by Maria Konash.
Microsoft says revised OpenAI deal boosts flexibility while keeping key advantages, with AI revenue topping $37B. Image: BoliviaInteligente / Unsplash

Microsoft CEO Satya Nadella defended the company’s revised partnership with OpenAI, stating the updated agreement remains beneficial despite ending exclusivity. Speaking after earnings, Nadella emphasized that Microsoft retains access to OpenAI’s intellectual property, including its most advanced models and agent technologies, through 2032. Under the new terms, Microsoft no longer pays for that access, marking a shift in how the partnership is structured.

The changes come as OpenAI expands relationships with other cloud providers, including Amazon, raising questions about Microsoft’s competitive position. Nadella dismissed concerns that the loss of exclusivity would weaken Microsoft’s standing, noting that the company continues to benefit from multiple aspects of the relationship. These include OpenAI’s commitment to spend more than $250 billion on Microsoft’s cloud services and Microsoft’s equity stake in the AI company.

Microsoft also reported strong financial performance tied to artificial intelligence. The company’s AI business has surpassed an annual revenue run rate of $37 billion, representing 123% year-over-year growth. Nadella highlighted that OpenAI remains a significant customer for Microsoft’s infrastructure, alongside its role as a technology partner. He also pointed to broader enterprise demand for diverse AI models rather than reliance on a single provider.

Multi-Model Strategy

Microsoft’s approach reflects a shift toward offering a range of AI models within its cloud ecosystem. Nadella said customers increasingly use multiple models depending on their needs, with more than 10,000 clients already adopting multi-model strategies. This includes access to technologies from OpenAI, Anthropic, and open-source alternatives.

This diversification reduces reliance on any single partner while positioning Microsoft as a platform provider rather than a single-model ecosystem. It also aligns with enterprise preferences for flexibility, particularly as organizations experiment with different AI capabilities across workloads.

Competitive Landscape

The revised partnership highlights changing dynamics in the AI industry, where alliances are becoming less exclusive. OpenAI’s expansion to other cloud providers and Microsoft’s parallel investments in alternative models indicate a more distributed ecosystem. Cloud providers are competing not only on infrastructure but also on the breadth of AI services they can offer.

Despite these shifts, the relationship between Microsoft and OpenAI remains deeply interconnected. Microsoft continues to rely on OpenAI’s technology for key products, while OpenAI depends on Microsoft’s infrastructure and enterprise reach. The evolving partnership suggests that future competition in AI will be shaped by overlapping collaborations rather than exclusive agreements.

AI & Machine Learning, Enterprise Tech, News