Adobe Teams Up with Microsoft, OpenAI, Anthropic, Nvidia to Launch AI Agents for Enterprise

Adobe has launched CX Enterprise, a new AI platform integrating agents across major tech ecosystems. The move aims to streamline customer experience workflows at scale.

By Daniel Mercer | Edited by Maria Konash
Adobe unveils CX Enterprise, integrating AI agents across platforms to transform customer experience workflows. Image: Adobe

Adobe has announced a major expansion of its AI ecosystem with the launch of CX Enterprise, a new platform designed to orchestrate agent-driven workflows across marketing, content, and customer experience operations. The announcement was made at Adobe Summit, the company’s flagship customer experience conference.

CX Enterprise introduces an end-to-end system that integrates AI agents across multiple tools and platforms, enabling businesses to manage the full customer lifecycle from a unified environment. At the center of the platform is the CX Enterprise Coworker, an AI agent designed to execute tasks based on business goals, supported by Adobe’s data, content, and customer journey infrastructure.

The initiative reflects Adobe’s push to address fragmentation in enterprise AI, where businesses often rely on disconnected tools and models. By building a more open and interoperable ecosystem, Adobe aims to help organizations deploy AI agents that can operate consistently across workflows while maintaining governance and brand control.

Deep Integrations Across AI Platforms

Adobe is expanding integrations with major technology providers, embedding its capabilities into widely used enterprise environments. Its marketing-focused AI agent is now available in tools such as Microsoft 365 Copilot and is in beta across platforms including Claude Enterprise, ChatGPT Enterprise, Gemini, and IBM watsonx.

These integrations allow teams to access Adobe’s customer experience intelligence directly within their existing workflows, reducing the need to switch between tools. The system uses first-party data from Adobe Experience Platform to deliver insights, optimize campaigns, and flag issues in real time.

Adobe is also working with Nvidia to build the CX Enterprise Coworker using Nvidia’s AI infrastructure, enabling deployment in both cloud and on-premises environments with enterprise-grade security controls.

Building an Open Agent Ecosystem

A central focus of CX Enterprise is extensibility. Adobe is connecting its platform to a wide range of partners across payments, customer data, and engagement tools. Integrations with companies such as PayPal and Stripe aim to enable seamless transaction flows within AI-driven experiences.

The company is also expanding its ecosystem for conversational AI through partnerships with firms like Algolia and Netomi, supporting more personalized and consistent customer interactions.

On the services side, major agencies including WPP and Publicis Groupe, along with system integrators such as Accenture and Deloitte, are adopting CX Enterprise to build industry-specific solutions.

From Tools to Orchestrated Experiences

Adobe’s broader strategy is to shift from standalone tools to orchestrated, multi-agent systems that manage complex workflows across the enterprise. By automating repetitive tasks and embedding AI insights into everyday processes, the company aims to improve efficiency while enabling more personalized customer experiences.

The launch underscores a wider industry trend toward “agentic” AI systems that can coordinate across platforms and execute multi-step tasks. As businesses adopt these systems, the ability to integrate across ecosystems and maintain control over data and brand identity is becoming a key differentiator.

With CX Enterprise, Adobe is positioning itself as a central platform for this new model of enterprise AI, where agents, rather than users, increasingly drive execution across marketing and customer experience operations.

Retail Traders Gain OpenAI Exposure via Robinhood Fund

Retail investors can now gain exposure to OpenAI through Robinhood’s venture fund, which has taken a $75 million stake in the AI company.

By Samantha Reed | Edited by Maria Konash
Robinhood fund invests $75M in OpenAI, offering retail investors indirect exposure to AI. Image: PiggyBank / Unsplash

Robinhood is giving retail traders a new way to gain exposure to OpenAI after its publicly traded Robinhood Ventures Fund I invested $75 million in the artificial intelligence firm.

The move allows everyday investors to effectively “go long” on OpenAI through the fund, which aggregates stakes in private technology companies and trades on the New York Stock Exchange. It marks one of the most direct avenues yet for retail participants to access the fast-growing AI sector, where many leading firms remain privately held.

Robinhood said the investment is among the largest in the fund’s portfolio and reflects strong demand for exposure to frontier AI companies.

Opening the Door to Private AI Giants

Access to companies like OpenAI has traditionally been limited to institutional investors and venture capital firms. However, as top AI startups delay IPOs and raise massive private funding rounds, retail investors have increasingly looked for alternative entry points.

Robinhood’s fund structure offers one such pathway, bundling private company stakes into a tradable vehicle. Alongside OpenAI, the fund includes positions in firms such as Databricks, Revolut, and Oura.

The $75 million investment is relatively small compared to OpenAI’s overall valuation but significant in terms of expanding retail participation in private markets.

From Conflict to Collaboration

The investment comes after tensions between the two companies last year. OpenAI and its CEO Sam Altman publicly criticized Robinhood’s earlier attempt to offer tokenized shares tied to private companies, arguing that such instruments do not represent actual equity ownership.

Despite that dispute, Robinhood’s latest move suggests a shift toward more conventional investment structures, aligning with regulatory expectations while still broadening access.

Rising Demand for AI Exposure

The development reflects a broader trend in financial markets, where demand for AI-related investments continues to surge. Companies such as Anthropic and xAI have attracted significant capital, reinforcing investor interest in the sector.

At the same time, the growing gap between public and private markets has made early-stage access more valuable. By offering indirect exposure through a listed fund, Robinhood is positioning itself at the center of that shift.

For retail investors, the opportunity comes with trade-offs, including less transparency and liquidity compared to direct equity ownership. Still, the ability to participate in the growth of companies like OpenAI marks a notable evolution in how access to high-growth technology is distributed.


Nvidia Partners with Google Cloud to Launch New AI Infrastructure and Agent Tools

Nvidia and Google Cloud unveiled new AI infrastructure and agentic AI capabilities at Google Cloud Next, targeting large-scale enterprise and industrial deployments.

By Maria Konash
Nvidia and Google Cloud expand AI partnership with new infrastructure, Blackwell GPUs, and agentic tools for enterprise. Image: Google Cloud

NVIDIA and Google Cloud have expanded their long-standing partnership with a new set of AI infrastructure and platform updates unveiled at the Google Cloud Next conference in Las Vegas. The announcements focus on scaling “AI factories” and enabling enterprise deployment of agentic and physical AI systems.

The collaboration introduces new infrastructure, including A5X bare-metal instances powered by NVIDIA’s next-generation Vera Rubin architecture, alongside expanded support for Gemini models running on NVIDIA Blackwell GPUs. The companies aim to provide a fully integrated stack, from chips and networking to software and cloud services, designed for high-performance AI workloads.

The updates reflect growing demand for infrastructure capable of supporting advanced AI systems that can operate autonomously and interact with real-world environments.

Next-Generation AI Infrastructure

At the core of the announcement is the A5X platform, built on NVIDIA’s Vera Rubin NVL72 systems. Google said the new infrastructure delivers up to 10 times lower inference cost per token and 10 times higher throughput compared to previous generations.

The system is designed to scale to massive clusters, supporting up to 80,000 GPUs in a single site and nearly one million GPUs across multiple locations. This enables enterprises to train and deploy large-scale AI models, including multimodal and reasoning systems.

Google Cloud’s broader Blackwell portfolio also includes a range of virtual machine configurations, allowing customers to scale from fractional GPU usage to full rack-scale deployments depending on workload requirements.

Secure and Distributed AI Deployment

The partnership also emphasizes security and flexibility. Gemini models can now run on Google Distributed Cloud with NVIDIA Blackwell GPUs, allowing organizations to deploy AI closer to sensitive data environments.

Confidential computing capabilities ensure that prompts, training data, and model outputs remain encrypted, even from infrastructure operators. This is particularly relevant for regulated industries such as finance, healthcare, and government.

New confidential virtual machines extend these protections to public cloud environments, offering secure access to high-performance AI resources without compromising data privacy.

Advancing Agentic and Physical AI

NVIDIA and Google Cloud are also targeting the next wave of AI applications, including autonomous agents and physical systems such as robots and digital twins. The platform supports a wide range of models, from Google’s Gemini family to NVIDIA’s open Nemotron models, enabling developers to build systems that can reason, plan, and act.

Integration with tools like NVIDIA Omniverse and Isaac Sim allows developers to simulate real-world environments and train robotics systems before deployment. This opens up use cases in manufacturing, logistics, and industrial automation.

Companies including OpenAI, Salesforce, and Snap are already using the infrastructure for tasks ranging from large-scale inference to data processing and simulation.

From Experimentation to Production

The expanded platform is designed to help organizations move AI projects from experimentation to production. Startups and enterprises are using the combined infrastructure to build applications in areas such as software development, drug discovery, and real-time analytics.

With more than 90,000 developers already participating in the joint ecosystem, the partnership highlights the scale at which AI infrastructure is evolving. As demand for compute and advanced models continues to grow, collaborations like this are shaping the foundation for the next generation of AI systems.

SpaceX May Acquire Cursor for $60B Later This Year

SpaceX has secured rights to acquire AI coding startup Cursor for up to $60 billion, deepening its push into AI alongside xAI ahead of a potential IPO.

By Samantha Reed | Edited by Maria Konash
SpaceX secures option to buy Cursor for $60B, signaling major AI push into coding tools. Image: SpaceX

SpaceX has struck a deal with AI coding startup Cursor that gives it the option to acquire the company for up to $60 billion later this year. Alternatively, SpaceX can pay $10 billion tied to an ongoing collaboration between the two firms, according to a statement posted on X.

The agreement highlights SpaceX’s growing ambitions in artificial intelligence, following Elon Musk’s earlier move to merge the company with his AI venture xAI in a deal valued at $1.25 trillion. The combined entity is expected to pursue a public listing, potentially becoming one of the largest IPOs in technology history.

Cursor CEO Michael Truell said the partnership will focus on scaling the company’s AI systems, including its “Composer” model, as part of a broader effort to build advanced coding and knowledge work tools.

Strategic Push Into AI Development Tools

Cursor develops AI tools designed to assist software engineers with tasks such as testing code, tracking changes, and documenting workflows through logs, screenshots, and video. The company has gained traction as part of a growing wave of startups building AI-powered coding agents.

The partnership with SpaceX signals an effort to compete more directly with offerings from OpenAI and Anthropic, which provide similar tools through products like Codex and Claude.

SpaceX said the collaboration aims to create “the world’s best coding and knowledge work AI,” suggesting a broader ambition beyond software development into general productivity applications.

Deal Comes Amid Fundraising and Industry Competition

The announcement comes as Cursor is reportedly in talks to raise $2 billion at a valuation exceeding $50 billion. Investors expected to participate include Andreessen Horowitz, Nvidia, and Thrive Capital, all of which have backed AI companies across the sector.

The structure of the SpaceX deal gives the company flexibility, allowing it to deepen collaboration before committing to a full acquisition. It also positions SpaceX to secure a strategic asset in a rapidly evolving market where AI coding tools are becoming central to software development.

Broader Implications for Musk’s AI Strategy

The move reflects Musk’s broader effort to build a vertically integrated AI ecosystem spanning infrastructure, models, and applications. His previous acquisition of X (formerly Twitter) through xAI and ongoing hiring from Cursor indicate a strategy focused on consolidating talent and capabilities.

The timing is notable, coming just days before a high-profile legal case involving Musk and Sam Altman, further underscoring tensions between leading players in the AI industry.

If completed, the Cursor deal would rank among the largest acquisitions in the AI sector, reinforcing the growing importance of coding agents and developer tools as a battleground for next-generation software platforms.


Recursive Superintelligence Raises $500M to Build Self-Improving AI

AI startup Recursive Superintelligence has raised $500 million from Nvidia and GV to pursue self-improving AI systems, despite having no public product.

By Laura Bennett | Edited by Maria Konash

A new artificial intelligence startup, Recursive Superintelligence, has raised $500 million in fresh funding, reaching a $4 billion valuation despite not yet releasing a public product. The round was backed by Nvidia and GV, underscoring continued investor appetite for next-generation AI systems.

The company was founded by former researchers from Google DeepMind and OpenAI, and is focused on developing AI models capable of recursive self-improvement. The concept aims to move beyond current approaches that rely heavily on human-labeled data and manual fine-tuning.

Instead, Recursive Superintelligence is building systems that can design, evaluate, and refine their own architectures with minimal human input, potentially accelerating the pace of AI development.

Toward Self-Teaching AI Systems

At the core of the company’s strategy is the idea that human involvement has become a bottleneck in AI progress. As models grow more complex, the need for human supervision slows iteration cycles.

Recursive’s approach seeks to create a “closed-loop” system where AI models continuously improve themselves. This includes generating hypotheses, testing them, and integrating successful changes into future versions without external intervention.

If successful, this could significantly reduce development timelines. Instead of requiring months or years between major model upgrades, new iterations could emerge in hours or days.
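The "closed-loop" pattern described here, generate a candidate change, test it, and integrate it only if it improves the system, can be illustrated with a toy hill-climbing sketch. The fitness function and mutation step below are stand-ins for illustration only; they are assumptions, not Recursive Superintelligence's actual method.

```python
import random

random.seed(0)  # deterministic for the example

def evaluate(params: dict) -> float:
    """Toy fitness score (higher is better); a real system would run model benchmarks."""
    return -(params["x"] - 3.0) ** 2

def propose(params: dict) -> dict:
    """Generate a hypothesis: a small random perturbation of the current design."""
    return {"x": params["x"] + random.uniform(-0.5, 0.5)}

def self_improve(params: dict, iterations: int = 200) -> dict:
    """Closed loop: propose a change, test it, integrate only improvements."""
    best_score = evaluate(params)
    for _ in range(iterations):
        candidate = propose(params)
        score = evaluate(candidate)
        if score > best_score:  # keep the change only if it helps
            params, best_score = candidate, score
    return params

result = self_improve({"x": 0.0})
```

The key property is that no human reviews the candidates: the loop's own evaluation decides what gets integrated, which is what would compress iteration cycles from months to hours.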

The company is also exploring deeper integration between software and hardware, working closely with Nvidia to optimize AI systems alongside the chips they run on. This could enable more efficient training and faster experimentation cycles.

A High-Risk, High-Reward Bet

The funding comes at a time of intense competition and consolidation in the AI sector. While some startups face pressure to demonstrate clear revenue models, companies focused on foundational AI technologies continue to attract large investments.

Recent funding activity across the industry, including major rounds for infrastructure and model developers, suggests that investors are prioritizing long-term breakthroughs over short-term returns.

However, the valuation has raised questions. Critics warn that the company’s $4 billion price tag, achieved without a commercial product, reflects broader concerns about a potential AI investment bubble.

Building Toward First Autonomous Training Run

Recursive Superintelligence plans to use the funding to recruit top AI talent and build the large-scale compute infrastructure required for its first autonomous training cycle, referred to internally as a “Level 1” run. This milestone is expected later this year.

The outcome of that effort will be closely watched. Demonstrating meaningful self-improvement without human intervention would represent a major shift in how AI systems are developed.

For now, the company embodies a growing trend in the industry: betting that the next leap in AI will come not just from bigger models, but from systems that can redesign themselves.


OpenAI Launches ‘Chronicle’ Screen Memory Feature in Codex

OpenAI has introduced Chronicle, a new Codex feature that tracks screen activity and builds context automatically. The tool raises privacy and security concerns.

By Daniel Mercer | Edited by Maria Konash
OpenAI launches Chronicle for Codex, adding screen-aware memory and automation while raising privacy concerns. Image: OpenAI

OpenAI has introduced Chronicle, a new experimental feature for its Codex app that allows the AI to observe a user’s screen activity and build contextual memory automatically. The feature, now available in preview for ChatGPT Pro users on macOS, represents a significant step toward more autonomous and context-aware AI assistants.

Chronicle operates in the background by periodically capturing screenshots, analyzing them, and converting them into structured text summaries. These summaries are stored locally and used to provide context for future interactions, allowing Codex to understand ongoing tasks without requiring users to repeatedly explain their work.
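The capture-summarize-store loop can be sketched roughly as follows. This is a minimal illustration based only on the behavior described above; the function names, storage path, and capture interval are assumptions, not OpenAI's implementation, and the capture and summarization steps are left as placeholders.

```python
import time
from datetime import datetime
from pathlib import Path

MEMORY_DIR = Path.home() / ".chronicle_memory"  # hypothetical local store

def capture_screenshot() -> bytes:
    """Placeholder: grab the current screen via a platform API."""
    raise NotImplementedError

def summarize(image: bytes) -> str:
    """Placeholder: send the capture to a vision model, get back a text summary."""
    raise NotImplementedError

def snapshot_cycle(interval_s: int = 300) -> None:
    """Background loop: capture, summarize, persist locally, wait."""
    MEMORY_DIR.mkdir(exist_ok=True)
    while True:
        summary = summarize(capture_screenshot())
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        # Per the article, summaries land on disk as plain (unencrypted) Markdown.
        (MEMORY_DIR / f"{stamp}.md").write_text(summary)
        time.sleep(interval_s)
```

Note that in this shape the raw screenshots are transient while the derived summaries persist on disk, which is exactly the trade-off behind the privacy concerns discussed later in the article.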

OpenAI president Greg Brockman described the feature as giving the assistant the ability to “see and remember” recent activity, enabling a more seamless and responsive workflow.

Turning Activity Into Context

The core goal of Chronicle is to reduce friction in AI-assisted work. By tracking what users are doing across applications, Codex can infer project context, tools in use, and recent actions, making interactions more efficient.

This approach aligns with a broader trend in AI development toward persistent memory and agent-like behavior, where systems can operate continuously and build knowledge over time. Instead of responding to isolated prompts, Codex can maintain continuity across sessions and tasks.

However, this deeper integration also introduces technical and operational trade-offs, particularly around data handling and system performance.

Privacy and Security Concerns

Chronicle’s architecture has raised concerns about user privacy and security. Screenshots captured by the system are sent to OpenAI servers for processing and are deleted within six hours. However, the generated summaries are stored locally as unencrypted Markdown files, potentially accessible to other applications.

OpenAI has acknowledged the risks, noting that the feature could increase exposure to prompt injection attacks and accidental leakage of sensitive information visible on screen. The company advises users to disable Chronicle when working with confidential data.

The feature may also increase usage costs, as continuous background processing consumes more request capacity within subscription limits.

Echoes of Industry Challenges

The launch draws comparisons to Microsoft’s earlier attempt to introduce a similar feature, Recall, in Windows. That tool also captured user activity for AI processing but faced strong backlash over privacy concerns, leading Microsoft to delay its rollout and make it optional.

Chronicle reflects the same tension facing the industry: balancing the benefits of highly contextual AI systems with the risks of continuous data capture. As AI tools become more integrated into daily workflows, managing that balance will be critical for user trust and adoption.

The feature signals OpenAI’s push toward more proactive, agent-like assistants, but its long-term success may depend on how effectively the company addresses privacy and security challenges.
