Nvidia CEO Proposes AI Tokens as Engineer Compensation

Nvidia CEO Jensen Huang proposed paying engineers with AI tokens to boost productivity through AI agents. The idea reflects a shift toward AI-driven workflows in tech hiring.

By Samantha Reed, edited by Maria Konash
Nvidia eyes AI tokens in engineer pay, signaling a shift to agent-driven productivity. Image: Google DeepMind / Unsplash

Nvidia CEO Jensen Huang has proposed a new compensation model for engineers that includes AI “tokens” as part of their pay, reflecting a broader shift toward AI-driven productivity in the workplace.

Speaking at Nvidia’s annual GPU Technology Conference, Huang suggested that engineers could receive token budgets alongside their base salaries. These tokens, which represent units of compute used to run AI models and agents, would allow employees to deploy AI systems to automate tasks and enhance output.

Huang said engineers could earn several hundred thousand dollars in base pay, with an additional allocation of tokens valued at a significant portion of that salary. The tokens would effectively function as a productivity resource, enabling workers to scale their output by leveraging AI tools.

AI Agents Reshape Workflows

The proposal is tied to Huang’s vision of a future workplace where engineers oversee large networks of AI agents capable of executing complex, multi-step tasks. In this model, human workers act as supervisors, directing digital systems that handle coding, analysis, and other functions.

Huang has previously described a future in which Nvidia’s workforce includes far more AI agents than human employees. These systems would rely on software infrastructure, increasing demand for computing resources and development tools.

The concept aligns with a growing trend in the technology sector, where companies are integrating AI agents into everyday workflows. These systems can perform tasks such as writing code, analyzing data, and generating reports with minimal human input.

Industry observers note that this shift is changing how software is developed. Instead of writing code line by line, engineers increasingly describe desired outcomes in natural language, with AI systems generating and executing the underlying logic.

Labor Market Impact and Talent Shift

The rise of AI agents has intensified debate about the future of work. Some analysts warn that automation could displace a significant share of white-collar roles, particularly those involving repetitive or entry-level tasks.

Estimates suggest AI could automate up to a quarter of work hours in the United States, with potential productivity gains of around 15%. At the same time, companies face a “talent paradox,” where demand for AI-skilled workers is rising even as automation reduces the need for certain roles.

Entry-level positions are seen as particularly vulnerable, as AI systems increasingly handle foundational tasks that once served as training grounds for new employees. This could widen skill gaps and complicate workforce development.

Despite these concerns, economists point out that technological shifts historically create new categories of jobs, even as they eliminate others. Emerging roles related to AI management, oversight, and integration are expected to grow.


Anthropic Partners With Blackstone, Goldman to Launch $1.5B AI Venture

Anthropic and major investment firms are forming a $1.5 billion AI services venture to deploy Claude across portfolio companies. The move targets mid-sized firms lacking in-house AI expertise.

By Maria Konash
Anthropic teams with Blackstone and Goldman Sachs on $1.5B venture to expand Claude across enterprises. Image: Anthropic

Anthropic has teamed up with Blackstone, Goldman Sachs, and Hellman & Friedman to launch a new AI services company backed by roughly $1.5 billion. The joint venture will focus on deploying Anthropic’s Claude AI system within portfolio companies owned by these firms. The initiative reflects growing demand for hands-on AI integration, particularly among mid-sized organizations that lack internal technical resources. The announcement comes as enterprises accelerate efforts to embed AI into core operations rather than experiment at the margins.

Each of the primary partners is contributing capital to the venture. Anthropic, Blackstone, and Hellman & Friedman are investing $300 million each, while Goldman Sachs is committing $150 million. The company is also supported by a broader group of asset managers, including General Atlantic, Apollo Global Management, GIC, Leonard Green & Partners, and Sequoia Capital. The structure is designed to combine capital with operational expertise, allowing the new entity to work directly with management teams inside portfolio companies.

The venture will provide end-to-end AI implementation services, including training staff, integrating Claude into workflows, and redesigning internal processes. Anthropic’s applied AI engineers will collaborate with the new company’s own teams to identify high-impact use cases and build customized solutions. Early examples include healthcare organizations, where AI tools could automate documentation, coding, and compliance tasks, freeing up staff time for patient care. Engagements will typically begin with small teams embedded within client organizations and expand over time based on results.

Strategic Impact

This initiative signals a shift in enterprise AI adoption from experimentation to execution. Rather than selling software licenses alone, AI companies and investors are moving toward integrated service models that combine technology, capital, and operational change. For private equity firms, embedding AI across portfolio companies could improve efficiency and returns. For businesses, especially mid-sized firms, the model lowers the barrier to adopting advanced AI systems by providing both tools and expertise. End users may see faster service delivery and reduced administrative burdens as AI becomes embedded in everyday workflows.

Industry Context

The launch builds on Anthropic’s broader push to expand its Claude Partner Network, which includes consulting and systems integration firms such as Accenture, Deloitte, and PwC. These partnerships have focused primarily on large enterprises, leaving a gap in the mid-market segment that this new venture aims to address.

The move also reflects intensifying competition among AI providers to secure enterprise footholds, as companies seek not just models but practical deployment strategies.

The expansion also comes as Anthropic explores a significantly larger funding round amid strong investor demand and rapid revenue growth. Reports indicate the company could seek up to $50 billion in new capital at a valuation approaching $900 billion, underscoring the scale of interest in AI infrastructure and services as adoption accelerates.

Japan’s Toto, Best Known for Toilets, Sees Shares Surge on AI Chip Boom

Toto Ltd. shares jumped after strong semiconductor growth tied to AI demand. The shift highlights how legacy manufacturers are benefiting from the global AI infrastructure boom.

By Olivia Grant, edited by Maria Konash
Toto shares surge as AI chip demand boosts semiconductor unit, offsetting core business slowdown. Image: John Cameron / Unsplash

Toto Ltd., best known globally for its high-tech toilets, saw its shares rise 18% to a five-year high following a strong earnings report driven by semiconductor demand. The Japanese company reported that its chip-related business is expanding rapidly, fueled by the global surge in artificial intelligence infrastructure. This comes as demand for memory chips, essential for AI systems, continues to accelerate. The company is now doubling down on this segment with new investments aimed at scaling production and research.

Toto announced plans to invest about $190 million to expand its semiconductor component operations and strengthen research and development. The company produces electrostatic chucks, specialized components used to hold silicon wafers in place during the manufacturing of NAND flash memory. Toto is currently the second-largest producer of these components globally. Sales in its semiconductor division grew 34% year over year, and the unit now contributes more than half of the company’s operating profit, reflecting a significant shift in its business mix.

The pivot is not entirely new but represents an acceleration of an existing strategy. While Toto’s toilet division remains widely recognized for its advanced features, including automated cleaning and deodorizing systems, that segment is facing headwinds. Supply chain disruptions tied to material shortages, particularly adhesives and plastics linked to the Middle East energy crisis, have forced the company to halt new orders temporarily. This contrast has made its semiconductor business increasingly central to overall performance.

What It Means

Toto’s results underscore how the AI boom is reshaping supply chains beyond traditional tech companies. Demand for memory chips, especially NAND flash used in data centers, is creating opportunities for component suppliers that were previously niche players. For businesses, this signals a broader shift where industrial and manufacturing firms can gain relevance in the AI economy. For investors, it highlights how companies with indirect exposure to AI infrastructure may see outsized gains. End users may not interact directly with these components, but they underpin faster and more capable AI systems.

Industry Backdrop

The surge in AI investment has triggered a global race to expand semiconductor capacity, particularly in memory and processing components. Companies across the supply chain, from chipmakers to materials providers, are scaling operations to meet demand. Toto’s move mirrors a wider trend of diversification among traditional manufacturers seeking growth in high-tech sectors. At the same time, supply chain fragility remains a concern, as seen in Toto’s core business challenges. The company’s dual exposure to consumer products and semiconductor infrastructure reflects the evolving intersection of legacy industries and emerging technologies.


Oracle Layoffs Reveal ‘Train Then Replace’ AI Strategy

Former Oracle employees say they were asked to help train internal AI systems before being laid off, as the company shifts resources toward data centers and AI infrastructure.

By Samantha Reed, edited by Maria Konash
Oracle layoffs hit thousands as it pivots to AI, with reports workers trained systems before being cut. Image: BoliviaInteligente / Unsplash

Oracle has laid off up to 30,000 employees over the past month, according to reports, as the company accelerates its pivot toward artificial intelligence and large-scale data center infrastructure. Former workers told TIME that some teams were first instructed to document workflows and use internal AI tools, only to be dismissed shortly afterward.

The layoffs come as Oracle increases investment in AI infrastructure, including its role alongside OpenAI and Nvidia in the Stargate project, a large-scale initiative aimed at expanding computing capacity. Analysts previously estimated that cutting 20,000–30,000 jobs could generate $8–10 billion in additional free cash flow, which could be redirected toward data center construction.

Workers Describe “Train Then Replace” Dynamic

Several former employees said they felt they were effectively helping build systems that would later reduce the need for their roles. Teams were encouraged, or required, to use internal AI tools and document processes that could be automated.

In some cases, employees reported that these tools did not improve productivity and instead created additional work, such as debugging AI-generated code or correcting inaccurate outputs. Others described increased workloads, with expectations rising even as headcount declined.

Financial and Personal Impact

Beyond job loss, many workers also lost significant compensation tied to unvested stock. One example cited involved a long-time employee losing approximately $300,000 in restricted stock units after termination.

For employees on work visas, the layoffs introduced additional risk, including limited time to secure new employment or leave the country. Some former staff also reported losing healthcare coverage or facing reduced severance compared to industry peers.

Strategic Shift Toward AI Infrastructure

Oracle’s leadership has been explicit about prioritizing AI. Chairman Larry Ellison has emphasized that companies building AI infrastructure are likely to dominate future markets. The company is reportedly committing tens of billions of dollars to expand data center capacity, even as it faces the prospect of negative cash flow through the end of the decade.

The layoffs reflect a broader trend across the tech industry, where companies are reallocating resources from traditional roles toward AI development and infrastructure. Similar dynamics are visible at Meta, which recently announced plans to cut around 8,000 jobs while ramping AI-related spending to as much as $135 billion, underscoring how the push for AI-driven productivity is reshaping both investment priorities and the workforce across the industry.


China’s AI ‘Digital Ex’ Trend Blurs Lines Between Memory and Privacy

A growing trend in China allows users to create AI replicas of former partners using personal data. The practice is raising concerns about privacy, emotional dependency, and relationships.

By Samantha Reed, edited by Maria Konash
China’s “digital ex” AI trend creates virtual replicas from photos and data, raising privacy and emotional concerns. Image: Kelly Sikkema / Unsplash

A new trend in China is seeing users create AI-generated replicas of former romantic partners by uploading personal data such as chat logs, photos, and social media content. These systems generate virtual models that mimic speech patterns, personality traits, and communication styles, allowing users to interact with a digital version of their ex. The phenomenon has gained traction among younger users seeking ways to process breakups and unresolved emotions.

The technology builds on tools originally designed for workplace productivity, such as systems that convert communication data into reusable AI “skills.” Developers adapted these tools to personal relationships, enabling users to simulate conversations and interactions based on past experiences. Some platforms allow further customization by adding memories, behavioral details, and shared experiences, making the replicas more realistic over time. In some cases, users integrate these AI models into messaging apps to continue conversations in a familiar format.

Advocates say the approach can provide emotional relief, helping users reflect on past relationships or find closure. Some users report that interacting with a digital version of a former partner allows them to express unresolved feelings or reassess the relationship more objectively. Others see it as a way to gradually detach from emotional dependence by confronting idealized memories.

Emotional and Social Risks

Critics warn that the trend could create new forms of emotional dependency. Interacting with AI replicas may blur the boundary between past and present relationships, potentially complicating users’ ability to move forward. Some experts also raise concerns about “emotional infidelity,” particularly if individuals engage with digital versions of former partners while in new relationships.

There are also concerns about how realistic simulations may influence perception and memory. By selectively reinforcing certain traits or interactions, AI replicas could reshape how users remember past relationships, potentially distorting emotional outcomes.

Privacy and Legal Concerns

The use of personal data to create digital replicas has prompted legal and ethical questions. Uploading chat histories or social media content without consent may violate data protection laws, according to legal experts. The issue is particularly sensitive when the recreated individual has not agreed to their likeness or communication style being used.

The trend reflects a broader shift in how AI is being applied to personal and emotional contexts. As similar technologies are used to recreate deceased individuals or simulate relationships, questions around consent, identity, and psychological impact are becoming more prominent.


Anthropic Eyes $50B Raise as Valuation Nears $900B

Anthropic is considering a major funding round amid strong investor demand and rapid revenue growth. The potential raise could value the company at up to $900 billion.

By Samantha Reed, edited by Maria Konash
Anthropic eyes up to $50B raise at $850B-$900B valuation as revenue nears $40B. Image: Anthropic

Anthropic is facing intense investor demand as it considers a new funding round that could raise between $40 billion and $50 billion at a valuation of $850 billion to $900 billion. Multiple preemptive offers have been made to the company, according to sources familiar with the matter, reflecting strong interest ahead of a potential initial public offering. A final decision on whether to proceed with the round is expected at a board meeting in May.

The surge in investor interest is driven by Anthropic’s rapid revenue growth. The company recently reported an annual revenue run rate exceeding $30 billion, up from about $9 billion at the end of 2025, with some estimates placing the current figure closer to $40 billion. Much of this growth is attributed to demand for its AI coding products, including Claude Code and Cowork, which are gaining traction among enterprise users.

Anthropic’s last funding round in February valued the company at $380 billion. If the new round proceeds at the reported terms, it would more than double that valuation and bring Anthropic in line with or ahead of competitors such as OpenAI, which recently raised capital at an $852 billion valuation. Investor appetite appears to exceed supply, with some institutions reportedly seeking multibillion-dollar allocations without securing meetings with company leadership.

Investor Momentum

The scale of interest highlights the growing competition among investors to gain exposure to leading AI companies. Anthropic’s positioning in areas such as coding assistance and enterprise AI tools has made it a key target for capital allocation. The company’s ability to generate substantial revenue early in its lifecycle has further strengthened its appeal.

For investors, the potential round represents an opportunity to participate in one of the largest private funding events in the technology sector. However, the size of the valuation also raises questions about sustainability and long-term returns, particularly as the company approaches a possible public listing.

Market Context

The development comes amid a broader surge in AI investment, with major players raising large amounts of capital to fund infrastructure, research, and product expansion. Companies are competing to scale their models and capture enterprise demand across industries such as finance, healthcare, and life sciences.

Anthropic’s rapid growth and funding momentum reflect the accelerating pace of the AI market. As companies prepare for public offerings, investor expectations are increasingly tied to revenue growth and the ability to translate technical advances into commercial success.
