Google Expands Stitch Into AI Design Platform

Google has upgraded Stitch into a full AI-native design platform that converts natural language into interactive UI prototypes. The update introduces a design agent, voice input, and a new design system format.

By Samantha Reed, edited by Maria Konash
Google upgrades Stitch with voice, agents, and DESIGN.md; Figma shares fall. Image: Google

Google has expanded its experimental design tool Stitch into a full AI-native platform, signaling a deeper push into software creation workflows powered by generative AI.

The updated version introduces a redesigned interface centered around an “infinite canvas,” where users can generate, edit, and iterate on user interface designs using natural language, images, or code. The system is designed to move beyond traditional wireframing by allowing users to describe intent, such as business goals or user experience, and automatically generate high-fidelity designs.

The platform also adds a dedicated design agent capable of reasoning across an entire project. This agent can track iterations, suggest improvements, and generate new design directions based on prior work. A companion feature, called Agent Manager, enables users to explore multiple design paths simultaneously while maintaining organization across versions.

AI-Native Workflow and Rapid Prototyping

Stitch’s update reflects a broader shift toward AI-assisted development tools that compress the time between idea and execution. The platform can instantly convert static designs into interactive prototypes, allowing users to simulate user flows and test functionality without manual coding.

Users can generate entire application flows in seconds, with the system automatically creating follow-up screens based on interactions. This enables rapid iteration and continuous refinement, which are critical in early-stage product design.

The addition of voice input further expands accessibility. Users can speak commands directly to the system to modify layouts, generate alternatives, or request design critiques in real time. This approach positions AI as an active collaborator rather than a passive tool.

A key component of the update is a new format called DESIGN.md, a structured file that defines design rules and systems. It allows users to import design frameworks from external sources or reuse them across projects, reducing duplication and standardizing workflows. The format is also designed to integrate with other development tools, enabling smoother transitions from design to production.
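Google has not published the full DESIGN.md schema in this announcement, but based on the description (a structured file defining design rules and systems), such a file might look like the following hypothetical sketch. The section names, tokens, and values below are illustrative assumptions, not the official format:

```markdown
# DESIGN.md — hypothetical example; structure is illustrative, not Google's official schema

## Colors
- primary: #1A73E8
- surface: #FFFFFF
- text: #202124

## Typography
- heading: Google Sans, weight 600
- body: Roboto, weight 400

## Components
- Button: 8px corner radius, filled style for primary actions
- Card: 16px padding, 1px outline, subtle elevation on hover
```

Because the rules live in a plain, structured file, they can be versioned alongside code and shared between projects, which is presumably how the format enables reuse across tools.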

Competitive Pressure on Design Tools

The release comes as competition intensifies in the design software market, where AI capabilities are becoming a central differentiator. Stitch’s ability to combine design generation, prototyping, and workflow integration positions it as a potential alternative to established tools.

Following the announcement, shares of Figma declined by approximately 8%, reflecting investor concerns about increased competition. Figma has long been the dominant browser-based interface design platform, used widely by designers and developers.

The development also follows Adobe’s attempted $20 billion acquisition of Figma in 2022, which was ultimately blocked by regulators over antitrust concerns. The decision preserved Figma’s independence but left the company facing growing competition from AI-native entrants.

Google’s expansion of Stitch highlights a broader industry trend toward integrating AI directly into creative and development environments. By enabling users to generate functional software from high-level descriptions, these tools aim to reduce reliance on traditional design processes.

As AI continues to reshape software development, platforms like Stitch are positioning themselves at the intersection of design, engineering, and automation, where the boundaries between these roles are becoming increasingly blurred.


Japan’s Toto, Best Known for Toilets, Sees Shares Surge on AI Chip Boom

Toto Ltd. shares jumped after strong semiconductor growth tied to AI demand. The shift highlights how legacy manufacturers are benefiting from the global AI infrastructure boom.

By Olivia Grant, edited by Maria Konash
Toto shares surge as AI chip demand boosts semiconductor unit, offsetting core business slowdown. Image: John Cameron / Unsplash

Toto Ltd., best known globally for its high-tech toilets, saw its shares rise 18% to a five-year high following a strong earnings report driven by semiconductor demand. The Japanese company reported that its chip-related business is expanding rapidly, fueled by the global surge in artificial intelligence infrastructure. This comes as demand for memory chips, essential for AI systems, continues to accelerate. The company is now doubling down on this segment with new investments aimed at scaling production and research.

Toto announced plans to invest about $190 million to expand its semiconductor component operations and strengthen research and development. The company produces electrostatic chucks, specialized components used to hold silicon wafers in place during the manufacturing of NAND flash memory. Toto is currently the second-largest producer of these components globally. Sales in its semiconductor division grew 34% year over year, and the unit now contributes more than half of the company’s operating profit, reflecting a significant shift in its business mix.

The pivot is not entirely new but represents an acceleration of an existing strategy. While Toto’s toilet division remains widely recognized for its advanced features, including automated cleaning and deodorizing systems, that segment is facing headwinds. Supply chain disruptions tied to material shortages, particularly adhesives and plastics linked to the Middle East energy crisis, have forced the company to halt new orders temporarily. This contrast has made its semiconductor business increasingly central to overall performance.

What It Means

Toto’s results underscore how the AI boom is reshaping supply chains beyond traditional tech companies. Demand for memory chips, especially NAND flash used in data centers, is creating opportunities for component suppliers that were previously niche players. For businesses, this signals a broader shift where industrial and manufacturing firms can gain relevance in the AI economy. For investors, it highlights how companies with indirect exposure to AI infrastructure may see outsized gains. End users may not interact directly with these components, but they underpin faster and more capable AI systems.

Industry Backdrop

The surge in AI investment has triggered a global race to expand semiconductor capacity, particularly in memory and processing components. Companies across the supply chain, from chipmakers to materials providers, are scaling operations to meet demand. Toto’s move mirrors a wider trend of diversification among traditional manufacturers seeking growth in high-tech sectors. At the same time, supply chain fragility remains a concern, as seen in Toto’s core business challenges. The company’s dual exposure to consumer products and semiconductor infrastructure reflects the evolving intersection of legacy industries and emerging technologies.


Oracle Layoffs Reveal ‘Train Then Replace’ AI Strategy

Former Oracle employees say they were asked to help train internal AI systems before being laid off, as the company shifts resources toward data centers and AI infrastructure.

By Samantha Reed, edited by Maria Konash
Oracle layoffs hit thousands as the company pivots to AI, with reports that workers trained internal systems before being cut. Image: BoliviaInteligente / Unsplash

Oracle has laid off up to 30,000 employees over the past month, according to reports, as the company accelerates its pivot toward artificial intelligence and large-scale data center infrastructure. Former workers told TIME that some teams were first instructed to document workflows and use internal AI tools, only to be dismissed shortly afterward.

The layoffs come as Oracle increases investment in AI infrastructure, including its role alongside OpenAI and Nvidia in the Stargate project, a large-scale initiative aimed at expanding computing capacity. Analysts previously estimated that cutting 20,000–30,000 jobs could generate $8–10 billion in additional free cash flow, which could be redirected toward data center construction.

Workers Describe “Train Then Replace” Dynamic

Several former employees said they felt they were effectively helping build systems that would later reduce the need for their roles. Teams were encouraged, or required, to use internal AI tools and document processes that could be automated.

In some cases, employees reported that these tools did not improve productivity and instead created additional work, such as debugging AI-generated code or correcting inaccurate outputs. Others described increased workloads, with expectations rising even as headcount declined.

Financial and Personal Impact

Beyond job loss, many workers also lost significant compensation tied to unvested stock. One example cited involved a long-time employee losing approximately $300,000 in restricted stock units after termination.

For employees on work visas, the layoffs introduced additional risk, including limited time to secure new employment or leave the country. Some former staff also reported losing healthcare coverage or facing reduced severance compared to industry peers.

Strategic Shift Toward AI Infrastructure

Oracle’s leadership has been explicit about prioritizing AI. Chairman Larry Ellison has emphasized that companies building AI infrastructure are likely to dominate future markets. The company is reportedly committing tens of billions of dollars to expand data center capacity, even as it faces the prospect of negative cash flow through the end of the decade.

The layoffs reflect a broader trend across the tech industry, where companies are reallocating resources from traditional roles toward AI development and infrastructure. Similar dynamics are visible at Meta, which recently announced plans to cut around 8,000 jobs while ramping up AI-related spending to as much as $135 billion. The push for AI-driven productivity is reshaping both investment priorities and the workforce across the industry.


China’s AI ‘Digital Ex’ Trend Blurs Lines Between Memory and Privacy

A growing trend in China allows users to create AI replicas of former partners using personal data. The practice is raising concerns about privacy, emotional dependency, and relationships.

By Samantha Reed, edited by Maria Konash
China’s “digital ex” AI trend creates virtual replicas from photos and data, raising privacy and emotional concerns. Image: Kelly Sikkema / Unsplash

A new trend in China is seeing users create AI-generated replicas of former romantic partners by uploading personal data such as chat logs, photos, and social media content. These systems generate virtual models that mimic speech patterns, personality traits, and communication styles, allowing users to interact with a digital version of their ex. The phenomenon has gained traction among younger users seeking ways to process breakups and unresolved emotions.

The technology builds on tools originally designed for workplace productivity, such as systems that convert communication data into reusable AI “skills.” Developers adapted these tools to personal relationships, enabling users to simulate conversations and interactions based on past experiences. Some platforms allow further customization by adding memories, behavioral details, and shared experiences, making the replicas more realistic over time. In some cases, users integrate these AI models into messaging apps to continue conversations in a familiar format.

Advocates say the approach can provide emotional relief, helping users reflect on past relationships or find closure. Some users report that interacting with a digital version of a former partner allows them to express unresolved feelings or reassess the relationship more objectively. Others see it as a way to gradually detach from emotional dependence by confronting idealized memories.

Emotional and Social Risks

Critics warn that the trend could create new forms of emotional dependency. Interacting with AI replicas may blur the boundary between past and present relationships, potentially complicating users’ ability to move forward. Some experts also raise concerns about “emotional infidelity,” particularly if individuals engage with digital versions of former partners while in new relationships.

There are also concerns about how realistic simulations may influence perception and memory. By selectively reinforcing certain traits or interactions, AI replicas could reshape how users remember past relationships, potentially distorting emotional outcomes.

Privacy and Legal Concerns

The use of personal data to create digital replicas has prompted legal and ethical questions. Uploading chat histories or social media content without consent may violate data protection laws, according to legal experts. The issue is particularly sensitive when the recreated individual has not agreed to their likeness or communication style being used.

The trend reflects a broader shift in how AI is being applied to personal and emotional contexts. As similar technologies are used to recreate deceased individuals or simulate relationships, questions around consent, identity, and psychological impact are becoming more prominent.


Anthropic Eyes $50B Raise as Valuation Nears $900B

Anthropic is considering a major funding round amid strong investor demand and rapid revenue growth. The potential raise could value the company at up to $900 billion.

By Samantha Reed, edited by Maria Konash
Anthropic eyes up to $50B raise at $850B-$900B valuation as revenue nears $40B. Image: Anthropic

Anthropic is facing intense investor demand as it considers a new funding round that could raise between $40 billion and $50 billion at a valuation of $850 billion to $900 billion. Multiple preemptive offers have been made to the company, according to sources familiar with the matter, reflecting strong interest ahead of a potential initial public offering. A final decision on whether to proceed with the round is expected at a board meeting in May.

The surge in investor interest is driven by Anthropic’s rapid revenue growth. The company recently reported an annual revenue run rate exceeding $30 billion, up from about $9 billion at the end of 2025, with some estimates placing the current figure closer to $40 billion. Much of this growth is attributed to demand for its AI coding products, including Claude Code and Cowork, which are gaining traction among enterprise users.

Anthropic’s last funding round in February valued the company at $380 billion. If the new round proceeds at the reported terms, it would more than double that valuation and bring Anthropic in line with or ahead of competitors such as OpenAI, which recently raised capital at an $852 billion valuation. Investor appetite appears to exceed supply, with some institutions reportedly seeking multibillion-dollar allocations without securing meetings with company leadership.

Investor Momentum

The scale of interest highlights the growing competition among investors to gain exposure to leading AI companies. Anthropic’s positioning in areas such as coding assistance and enterprise AI tools has made it a key target for capital allocation. The company’s ability to generate substantial revenue early in its lifecycle has further strengthened its appeal.

For investors, the potential round represents an opportunity to participate in one of the largest private funding events in the technology sector. However, the size of the valuation also raises questions about sustainability and long-term returns, particularly as the company approaches a possible public listing.

Market Context

The development comes amid a broader surge in AI investment, with major players raising large amounts of capital to fund infrastructure, research, and product expansion. Companies are competing to scale their models and capture enterprise demand across industries such as finance, healthcare, and life sciences.

Anthropic’s rapid growth and funding momentum reflect the accelerating pace of the AI market. As companies prepare for public offerings, investor expectations are increasingly tied to revenue growth and the ability to translate technical advances into commercial success.


Microsoft Defends OpenAI Deal as AI Revenue Hits $37 Billion

Microsoft says its revised OpenAI partnership strengthens flexibility while maintaining key advantages. The company reported AI revenue surpassing $37 billion amid growing multi-model demand.

By Samantha Reed, edited by Maria Konash
Microsoft says revised OpenAI deal boosts flexibility while keeping key advantages, with AI revenue topping $37B. Image: BoliviaInteligente / Unsplash

Microsoft CEO Satya Nadella defended the company’s revised partnership with OpenAI, stating the updated agreement remains beneficial despite ending exclusivity. Speaking after earnings, Nadella emphasized that Microsoft retains access to OpenAI’s intellectual property, including its most advanced models and agent technologies, through 2032. Under the new terms, Microsoft no longer pays for that access, marking a shift in how the partnership is structured.

The changes come as OpenAI expands relationships with other cloud providers, including Amazon, raising questions about Microsoft’s competitive position. Nadella dismissed concerns that the loss of exclusivity would weaken Microsoft’s standing, noting that the company continues to benefit from multiple aspects of the relationship. These include OpenAI’s commitment to spend more than $250 billion on Microsoft’s cloud services and Microsoft’s equity stake in the AI company.

Microsoft also reported strong financial performance tied to artificial intelligence. The company’s AI business has surpassed an annual revenue run rate of $37 billion, representing 123% year-over-year growth. Nadella highlighted that OpenAI remains a significant customer for Microsoft’s infrastructure, alongside its role as a technology partner. He also pointed to broader enterprise demand for diverse AI models rather than reliance on a single provider.

Multi-Model Strategy

Microsoft’s approach reflects a shift toward offering a range of AI models within its cloud ecosystem. Nadella said customers increasingly use multiple models depending on their needs, with more than 10,000 clients already adopting multi-model strategies. This includes access to technologies from OpenAI, Anthropic, and open-source alternatives.

This diversification reduces reliance on any single partner while positioning Microsoft as a platform provider rather than a single-model ecosystem. It also aligns with enterprise preferences for flexibility, particularly as organizations experiment with different AI capabilities across workloads.

Competitive Landscape

The revised partnership highlights changing dynamics in the AI industry, where alliances are becoming less exclusive. OpenAI’s expansion to other cloud providers and Microsoft’s parallel investments in alternative models indicate a more distributed ecosystem. Cloud providers are competing not only on infrastructure but also on the breadth of AI services they can offer.

Despite these shifts, the relationship between Microsoft and OpenAI remains deeply interconnected. Microsoft continues to rely on OpenAI’s technology for key products, while OpenAI depends on Microsoft’s infrastructure and enterprise reach. The evolving partnership suggests that future competition in AI will be shaped by overlapping collaborations rather than exclusive agreements.
