Bill Gates Withdraws From India AI Summit Amid Renewed Scrutiny Over Epstein Ties

Bill Gates canceled his keynote appearance at India’s AI Impact Summit hours before speaking, as renewed scrutiny over past ties to Jeffrey Epstein intensified following U.S. Justice Department disclosures.

By Samantha Reed Edited by Maria Konash
Bill Gates exited India’s AI Impact Summit under pressure over Epstein email scrutiny. Photo: Wesley Tingey / Unsplash

Bill Gates withdrew from India’s AI Impact Summit just hours before his scheduled keynote on Thursday, as scrutiny over his past relationship with Jeffrey Epstein intensified following the release of emails by the U.S. Justice Department.

The abrupt cancellation dealt a setback to a flagship event aimed at positioning India as a leading voice in global AI governance. Organizers said Gates would not deliver his address “to ensure the focus remains on the AI Summit’s key priorities.” The philanthropic foundation he co-founded in 2000 did not respond to questions about whether the withdrawal was directly linked to renewed scrutiny.

Gates has previously said his interactions with Epstein were limited to philanthropy-related discussions and described meeting him as a mistake. The controversy resurfaced after DOJ-released emails showed communication between Epstein and staff at the Gates Foundation.

The six-day summit, held in New Delhi, has nevertheless secured more than $200 billion in investment pledges for AI infrastructure. Among the largest commitments was a $110 billion plan announced by Reliance Industries. India’s Tata Group also signed a partnership agreement with OpenAI.

High-Profile Absences and AI Commitments

Gates had been expected to join a lineup of global technology leaders including Sundar Pichai, Sam Altman, and Dario Amodei. His withdrawal followed an earlier cancellation by Jensen Huang, adding to challenges for a summit billed as the first major AI forum hosted in the Global South.

In his keynote address, Indian Prime Minister Narendra Modi called for greater vigilance around children’s safety on AI platforms. Standing alongside French President Emmanuel Macron and senior AI executives, Modi emphasized the need for family-guided safeguards in digital systems.

The summit also marked the launch of the New Delhi Frontier AI Commitments, a set of voluntary principles adopted by leading AI companies to promote responsible development of advanced models. Altman told attendees that ChatGPT now has 100 million weekly users in India, underscoring the country’s scale as a growth market for AI services.

Organizational Challenges

Despite major investment announcements, the summit has faced criticism over logistics and planning. Exhibition halls were unexpectedly closed to the public on Thursday, frustrating companies that had set up booths. Police road closures to accommodate VIP movements led to traffic disruptions across the capital.

In one incident, Galgotias University was asked to vacate its stall after a staff member presented a commercially available robotic dog manufactured in China as its own innovation, prompting public backlash.

Opposition parties criticized the government for mismanagement, while attendees voiced frustration over delays and restricted access. The government later apologized for the inconvenience.

Even with high-profile absences and operational setbacks, the scale of pledged investment highlights India’s ambitions to expand AI infrastructure and influence global policy debates around frontier technologies.

Broadcom Expands Google, Anthropic AI Chip Partnerships

Broadcom is expanding its role in AI infrastructure through new chip and compute deals with Google and Anthropic. The move reflects accelerating demand for large-scale AI capacity.

By Olivia Grant Edited by Maria Konash
Broadcom deepens AI push with Google chips and Anthropic deal, signaling surging infrastructure demand. Image: Laura Ockel / Unsplash

Broadcom is expanding its footprint in artificial intelligence infrastructure through new agreements with Google and Anthropic, underscoring the growing demand for compute power behind generative AI systems. The company said it will develop future versions of Google’s AI chips while also supporting a major expansion of Anthropic’s access to computing capacity. The updates, disclosed in a regulatory filing, pushed Broadcom shares up about 3% in extended trading.

At the center of the announcement is Broadcom’s continued work on Google’s tensor processing units, or TPUs, custom chips designed to train and run AI models at scale. While the companies have collaborated for years, the latest agreement signals a deeper alignment as competition intensifies among chipmakers and cloud providers. Custom silicon is becoming increasingly important as AI companies look for alternatives to general-purpose graphics processing units.

Broadcom is also scaling its relationship with Anthropic, one of the fastest-growing AI startups. The expanded deal will provide Anthropic with access to roughly 3.5 gigawatts of compute capacity, primarily powered by Google’s TPU infrastructure. That marks a sharp increase from earlier deployments. Broadcom CEO Hock Tan recently said the company had already begun supplying around 1 gigawatt of compute to Anthropic, with demand expected to exceed 3 gigawatts by 2027.
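To put those power figures in perspective, a rough back-of-envelope conversion (not from the article) translates contracted gigawatts into an order-of-magnitude accelerator count. The ~700 W per chip is an assumption, roughly an H100-class TDP used as a stand-in, and the PUE overhead factor is likewise illustrative; actual TPU power draw and data-center efficiency differ.

```python
# Rough illustration only: converting a contracted power envelope into an
# order-of-magnitude accelerator count. The 700 W figure is an assumption
# (roughly an H100-class TDP, as a stand-in for undisclosed TPU specs),
# and PUE (power usage effectiveness) models cooling/facility overhead.

def rough_accelerator_count(capacity_gw: float,
                            watts_per_chip: float = 700.0,
                            pue: float = 1.3) -> int:
    """Estimate how many accelerators a given power envelope could support."""
    usable_watts = capacity_gw * 1e9 / pue  # power left after facility overhead
    return int(usable_watts / watts_per_chip)

print(rough_accelerator_count(1.0))   # the ~1 GW already being supplied
print(rough_accelerator_count(3.5))   # the expanded ~3.5 GW deal
```

Under these assumptions, the jump from 1 GW to 3.5 GW corresponds to millions of additional accelerator-class chips, which is why analysts frame such deals as industrial-scale infrastructure.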

Anthropic’s rapid growth helps explain the scale of the investment. The company said its annualized revenue has surpassed $30 billion, up from about $9 billion at the end of last year. It now counts more than 1,000 enterprise customers spending over $1 million annually, a figure that has doubled in just two months. Its Claude chatbot also saw a surge in popularity earlier this year, briefly becoming the most downloaded free app in Apple’s U.S. App Store.

The broader opportunity for Broadcom could be substantial. Analysts at Mizuho estimate the company may generate $21 billion in AI-related revenue from Anthropic in 2026, potentially doubling to $42 billion in 2027. While Broadcom did not disclose financial terms, the projections highlight how central large AI customers are becoming to semiconductor revenue growth.

A Shift Beyond GPUs

The deals also reflect a wider shift in how AI infrastructure is built. For years, companies like Anthropic and OpenAI have relied heavily on Nvidia GPUs accessed through cloud providers such as Amazon, Google, and Microsoft. That model is now evolving.

Broadcom is working with multiple AI developers, including OpenAI, on custom silicon tailored to specific workloads. At the same time, OpenAI has committed to using large volumes of AMD GPUs, signaling a diversification of suppliers. This mix of custom chips and alternative hardware suggests the AI ecosystem is moving toward more specialized and distributed infrastructure strategies.

Scaling the AI Backbone

The expansion of compute capacity into the gigawatt range highlights the industrial scale of modern AI. Training and deploying advanced models now requires vast energy, data center space, and specialized hardware. Much of Anthropic’s new infrastructure is expected to be located in the United States, reflecting both capacity needs and strategic considerations around data and supply chains.

For Broadcom, the partnerships reinforce its transition from a traditional semiconductor supplier into a key enabler of AI platforms. For the industry, they illustrate how the race to build and control AI infrastructure is becoming as critical as the development of the models themselves.

AI & Machine Learning, Cloud & Infrastructure, News

SpaceX IPO May Crowd Out 2026 Listings

SpaceX’s planned $75 billion IPO is expected to dominate investor attention, potentially sidelining other companies aiming to go public in 2026. Analysts warn the deal could tighten already fragile IPO market conditions.

By Samantha Reed Edited by Maria Konash
SpaceX’s $75B IPO could soak up demand and delay other listings, reshaping 2026 pipelines. Image: NASA / Unsplash

Elon Musk’s SpaceX is nearing a potential $75 billion initial public offering that could become one of the largest in history, but analysts warn the deal may disrupt the broader IPO market in 2026. Industry experts say the scale and visibility of the offering could absorb a significant share of investor capital and attention, making it harder for other companies to successfully go public in the same window. With U.S. IPO activity already lagging, the timing of the SpaceX debut is emerging as a critical factor for companies waiting to list after years of subdued market conditions.

According to data from Renaissance Capital, 35 IPOs have priced so far this year, marking a 37.5% decline compared with the same period last year. Market participants say a mega listing like SpaceX could intensify this slowdown. Large IPOs often act as focal points for institutional investors, drawing capital away from smaller or less prominent deals. Analysts compare the situation to Facebook’s 2012 IPO, which similarly dominated market attention and affected concurrent listings.
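The two figures above are mutually consistent: a 37.5% decline from the prior-year period to 35 priced IPOs implies roughly 56 IPOs in the same window last year. A quick check:

```python
# Sanity check on the article's Renaissance Capital figures:
# 35 IPOs priced so far, down 37.5% versus the same period last year.
this_year = 35
decline = 0.375

# If this year is (1 - decline) of last year, last year's count was:
last_year = this_year / (1 - decline)
print(last_year)  # 56.0 — roughly 56 IPOs in the prior-year period
```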

The SpaceX offering is expected to take place around June, a traditionally strong period for IPO activity before a seasonal summer slowdown. However, companies may delay their listings to avoid competing directly for visibility and capital. Bankers are already advising major clients to consider alternative timelines. At the same time, smaller IPOs could benefit from increased retail investor enthusiasm and broader attention on public listings if SpaceX performs well.

Beyond SpaceX, the IPO pipeline includes other high-profile candidates. Reports indicate that AI firms OpenAI and Anthropic are considering public debuts later in the year. The clustering of such large offerings could further concentrate investor demand, raising the threshold for successful listings. Analysts at PitchBook suggest that the cumulative impact of these mega IPOs could push a fully open IPO window into 2027, delaying recovery for the broader market.

Why This Matters

The SpaceX IPO highlights how a single large deal can reshape market dynamics. For the IPO ecosystem, it underscores the importance of timing and investor allocation. Companies planning to go public may face tougher competition for capital, forcing delays or revised valuations.

For businesses, especially late-stage startups, the development could extend reliance on private funding or alternative financing routes. For investors, the concentration of capital into a few marquee deals may limit diversification opportunities in the short term. Retail investors, however, may see increased engagement with IPOs overall, particularly if high-profile listings perform strongly.

Context

The global IPO market has struggled to regain momentum after a prolonged downturn driven by rising interest rates and economic uncertainty. While 2026 was expected to mark a recovery, ongoing disruptions have complicated the outlook. These include geopolitical tensions such as the war in Iran, rising oil prices, concerns around private credit markets, and rapid AI-driven changes affecting legacy technology firms.

Against this backdrop, the emergence of multiple large-scale IPO candidates signals renewed activity but also introduces new challenges. Mega listings like SpaceX, along with potential offerings from leading AI companies like OpenAI and Anthropic, are likely to dominate market attention. This concentration could reshape not only the timing of IPOs but also investor behavior, setting the tone for public markets over the next several years.

AI & Machine Learning, News, Startups & Investment

Cursor 3 Launches Unified Workspace for AI Coding Agents

Cursor has launched Cursor 3, a redesigned workspace focused on managing AI coding agents. The update introduces multi-agent workflows, improved collaboration, and a unified development interface.

By Daniel Mercer Edited by Maria Konash
Cursor 3 launches unified AI coding workspace with multi-agent support and cloud integration. Image: Cursor

Cursor has introduced Cursor 3, a major update to its AI-powered development platform, reflecting a broader shift toward agent-driven software engineering. The release positions AI agents as central participants in the coding process, moving beyond traditional manual workflows.

The company describes software development as entering a new phase where autonomous agents handle a growing share of code generation and iteration. However, current workflows remain fragmented, with developers managing multiple tools, terminals, and agent interactions. Cursor 3 aims to address this by consolidating these processes into a single interface.

The new workspace is built around agent coordination rather than file-level editing. It allows developers to operate at a higher level of abstraction, focusing on outcomes while still retaining the ability to inspect and modify underlying code when needed.

A key feature is the ability to run multiple agents in parallel. These agents can operate across different environments, including local machines and cloud infrastructure, while remaining visible within a centralized interface. This reduces the need to track separate sessions or switch between tools.
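The pattern described above, several agents working concurrently across environments while results remain visible in one place, can be sketched in ordinary Python. This is a hypothetical illustration, not Cursor’s actual API; the agent names, tasks, and environments are all invented for the example.

```python
# Hypothetical sketch (NOT Cursor's API): coordinating several coding
# agents concurrently from a single place, the pattern Cursor 3's
# workspace is built around. All names and tasks are illustrative.
import asyncio

async def run_agent(name: str, task: str, env: str) -> str:
    """Stand-in for dispatching one task to one agent in a given environment."""
    await asyncio.sleep(0)  # placeholder for real agent work (API calls, edits)
    return f"{name} [{env}]: finished '{task}'"

async def main() -> None:
    # Agents run in parallel across local and cloud environments,
    # and their results are collected in one central view.
    results = await asyncio.gather(
        run_agent("agent-1", "write unit tests", env="local"),
        run_agent("agent-2", "refactor auth module", env="cloud"),
        run_agent("agent-3", "update API docs", env="cloud"),
    )
    for line in results:
        print(line)

asyncio.run(main())
```

The orchestration layer, rather than any single editing session, becomes the developer’s primary surface, which is the shift the Cursor 3 interface is designed around.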

Unified Interface and Workflow Integration

Cursor 3 introduces a multi-repository layout that enables teams and agents to collaborate across projects within the same workspace. The platform also supports seamless transitions between local and cloud environments, allowing developers to move agent tasks depending on performance needs or availability.

For example, long-running processes can be shifted to the cloud, while local environments can be used for testing and iteration. This flexibility is designed to improve efficiency and reduce interruptions during development workflows.

The update also includes improvements to code review and deployment processes. A redesigned diff view simplifies reviewing changes, while integrated tools allow users to stage, commit, and manage pull requests directly within the interface.

Cursor continues to build on its foundation as a standalone development environment, originally derived from a fork of Visual Studio Code. Cursor 3 expands this approach with additional features such as an integrated browser for testing applications, support for plugins through its marketplace, and enhanced navigation tools for exploring codebases.

The launch highlights a broader trend in developer tools, where AI systems are evolving from assistants into active collaborators. As models improve and agent capabilities expand, platforms like Cursor are focusing on orchestration and usability, aiming to make complex multi-agent workflows more accessible.

Cursor said it will continue investing in both its agent infrastructure and traditional IDE features, as it works toward a future where software systems are increasingly built and maintained through coordinated AI-driven processes.

AI & Machine Learning, News

Chinese Chipmakers See Record Revenue on AI Demand

Chinese chipmakers report record revenue driven by AI demand, memory shortages, and U.S. export curbs boosting domestic semiconductor growth.

By Olivia Grant Edited by Maria Konash
Chinese chip firms post record revenue as AI demand and export curbs drive local adoption. Image: Igor Omilaev / Unsplash

Chinese semiconductor companies are reporting record revenues as demand for artificial intelligence infrastructure accelerates and U.S. export restrictions reshape global supply chains. The combined effect has boosted domestic chip production and strengthened Beijing’s push for technological self-sufficiency.

Semiconductor Manufacturing International Corp. (SMIC), China’s largest chipmaker, reported a 16% year-over-year revenue increase to $9.3 billion in 2025, with projections exceeding $11 billion in 2026. Hua Hong also posted record quarterly revenue, reflecting strong demand across multiple chip segments.

The growth is being driven in part by domestic technology firms investing heavily in AI infrastructure. With limited access to advanced U.S. chips due to export controls, Chinese companies are increasingly turning to local suppliers to meet computing needs.

U.S. restrictions, particularly on high-performance GPUs and advanced semiconductor equipment, have accelerated China’s efforts to develop its own chip ecosystem. Analysts describe the restrictions as a catalyst that has intensified demand for domestically produced components across industries including AI, electric vehicles, and data centers.

Companies such as Moore Threads are benefiting from this shift, with the firm projecting more than 200% annual revenue growth as it works to position itself as a local alternative to global GPU leaders.

Memory Shortages and Technology Gaps Persist

In addition to logic chips, Chinese memory manufacturers are seeing significant gains. ChangXin Memory Technologies (CXMT) reported a sharp rise in revenue, driven by global shortages and rising demand for memory used in AI systems and consumer electronics.

High-bandwidth memory, a critical component for AI workloads, remains dominated by global players such as Samsung, SK Hynix, and Micron. However, export restrictions have created opportunities for domestic firms like CXMT to supply the Chinese market, even with older-generation technologies.

Despite strong revenue growth, Chinese semiconductor firms continue to lag behind global leaders in advanced manufacturing capabilities. Companies such as SMIC and Hua Hong are unable to produce cutting-edge chips at scale due to limited access to advanced lithography equipment from suppliers like ASML.

Efforts to build a fully domestic semiconductor supply chain are ongoing but face significant technical and financial challenges. China is attempting to replicate large portions of the global chip ecosystem, a process expected to take years.

While current growth is supported by import substitution and strong domestic demand, analysts warn of potential overcapacity in mature-node chips. Sustained progress will depend on whether Chinese firms can advance into higher-value segments, including next-generation memory and advanced logic chips, which are critical for long-term competitiveness in AI infrastructure.

AI & Machine Learning, Cloud & Infrastructure, News

OpenAI’s TBPN Deal Raises Questions Amid AI Expansion

OpenAI’s acquisition of tech media platform TBPN highlights an unconventional M&A strategy as the company prepares for a potential IPO and faces rising competition in AI.

By Samantha Reed Edited by Maria Konash
OpenAI acquires TBPN amid IPO push, expanding beyond core AI as competition intensifies. Image: Christian Wiediger / Unsplash

OpenAI’s acquisition of Technology Business Programming Network (TBPN), a live tech media platform, is drawing attention as the company expands beyond its core artificial intelligence products. The move comes more than 10 months after OpenAI’s $6.4 billion acquisition of Jony Ive’s device startup, underscoring an increasingly diverse and difficult-to-define M&A strategy.

The TBPN deal, financial terms undisclosed, adds a media asset to OpenAI’s growing portfolio at a time when the company is under pressure to justify its valuation and spending. With billions invested in infrastructure and ongoing operating losses, OpenAI is balancing rapid expansion with increasing investor scrutiny ahead of a potential IPO.

TBPN, founded in 2024, has gained traction within the tech ecosystem through its daily live programming and high-profile guests. While relatively small in scale, the platform has built influence among founders, investors, and developers, making it a strategic channel for industry engagement.

OpenAI leadership has framed the acquisition as part of a broader effort to shape conversations around AI. The company has emphasized the importance of creating a space for constructive dialogue about the societal and economic impact of the technology. TBPN will operate with editorial independence, though it will be integrated into OpenAI’s strategy organization.

Analysts note that the acquisition may serve as a communications and positioning tool rather than a direct revenue driver. As competition intensifies, maintaining visibility and narrative control is becoming increasingly important for AI companies.

M&A Activity Intensifies Amid Competitive Pressure

The TBPN acquisition comes as OpenAI faces growing competition from companies such as Google, Anthropic, and Elon Musk’s xAI, which was acquired by SpaceX in February. At the same time, rivals are advancing toward public markets, increasing pressure on OpenAI to demonstrate sustainable growth and strategic clarity.

OpenAI has made several acquisitions and hires across sectors in recent months, including software, cybersecurity, and healthcare startups. The company has also brought in experienced leadership to guide corporate development, signaling continued interest in strategic deals.

Despite this activity, questions remain about how these acquisitions fit into a cohesive long-term strategy. Industry analysts suggest that OpenAI may be experimenting with different approaches to expand its ecosystem, from hardware and developer tools to media and community platforms.

The company’s ability to absorb and integrate these assets will be closely watched, particularly as it prepares for a potential IPO. Media-related acquisitions, while potentially valuable for influence and reach, have historically carried higher execution risk compared to core technology investments.

Still, OpenAI’s recent $122 billion funding round provides significant financial flexibility, allowing it to pursue smaller, high-visibility bets like TBPN. As the AI market evolves rapidly, the company appears to be testing multiple pathways to maintain relevance and differentiation.

The outcome of this strategy will likely depend on whether these investments translate into stronger user engagement, clearer positioning, and sustained competitive advantage in an increasingly crowded AI landscape.

AI & Machine Learning, News, Startups & Investment