ElevenLabs Surpasses $500M ARR, Adds BlackRock, Nvidia Investors

ElevenLabs has crossed $500 million in annual recurring revenue and added major investors including BlackRock and Nvidia. The funding reflects rising enterprise demand for AI voice agents.

By Samantha Reed Edited by Maria Konash
ElevenLabs hits $500M ARR, signaling rapid growth in AI voice agents backed by major investors. Image: ElevenLabs

ElevenLabs has surpassed $500 million in annual recurring revenue and announced new investors as part of an expanded Series D funding round. The latest backers include BlackRock, Nvidia through its NVentures arm, and Santander, alongside creative figures such as Jamie Foxx and Eva Longoria. The announcement follows strong revenue growth, with the company reporting $350 million ARR at the end of 2025 and adding $150 million more in the first four months of 2026.

The growth is tied to increasing enterprise adoption of AI-powered voice agents across functions such as customer support, sales, hiring, and marketing. ElevenLabs said large organizations are deploying its tools to enable natural, human-like interactions at scale, often across multiple languages and channels. Institutional investors including Wellington Management and Schroders joined the round, signaling confidence in conversational AI as a core business infrastructure.

Several enterprise customers are also participating as investors. Companies such as Salesforce, Deutsche Telekom, and KPN are already using ElevenLabs’ platform for applications ranging from advertising and product demos to real-time translation and AI-driven customer support. Deutsche Telekom, for example, has deployed voice agents within its network to assist users during live calls, highlighting how telecom providers are integrating AI deeper into core services.

The company is also expanding its reach beyond enterprise and into the creative economy. More than 30 high-profile figures, including filmmaker Hwang Dong-hyuk and actor Matthew McConaughey, are backing the platform. ElevenLabs said these partnerships reflect growing interest among creators in tools that allow them to scale and localize their voices for new audiences and formats.

Enterprise Demand Fuels Voice AI Expansion

The rapid rise in revenue underscores how voice-based AI is becoming a key interface for businesses. Unlike text-based systems, voice interactions require low latency and high realism, making them technically demanding but valuable for customer-facing roles. ElevenLabs’ focus on generating natural-sounding speech positions it within a segment where quality directly affects user trust and engagement.

For enterprises, the appeal lies in automation without sacrificing personalization. AI voice agents can handle high volumes of interactions while maintaining consistent tone and language, which is particularly useful for global companies. The company’s expansion into multilingual capabilities and real-time applications suggests that voice is evolving into a primary communication layer across digital services.

The involvement of financial institutions and strategic investors also indicates that conversational AI is increasingly viewed as long-term infrastructure rather than an experimental technology. This could accelerate adoption across sectors such as banking, telecom, and retail.

From Voice Models to Full-Stack AI Communication

ElevenLabs began by developing high-quality AI voice models, but it is now positioning itself as a broader communication platform. The company plans to integrate audio with image and video generation tools, allowing businesses and creators to produce complete marketing and media assets within a single system.

On the enterprise side, the roadmap includes expanding AI agents beyond voice into chat, email, and other channels. This reflects a shift toward unified AI systems that manage customer interactions across touchpoints. The company is also investing in international expansion, building local teams to tailor deployments to regional markets.

Alongside its funding round, ElevenLabs completed a $100 million tender offer, its second within a year, providing liquidity to employees as the company scales. It currently has more than 500 employees across 50 countries.


Anthropic Launches Finance AI Agents and Microsoft 365 Integration

Anthropic has released ten AI agent templates for financial services and expanded Claude into Microsoft 365 apps. The update targets faster enterprise deployment of AI in finance workflows.

By Daniel Mercer Edited by Maria Konash

Anthropic has introduced a suite of AI tools tailored for financial services, including ten ready-to-deploy agent templates and deeper integration of its Claude assistant into Microsoft 365 applications. The release is designed to help financial institutions automate complex workflows such as building pitchbooks, screening compliance documents, and managing month-end close processes. The company said teams can deploy these agents within days, rather than months, as enterprise demand for applied AI continues to grow.

The new AI agent templates function as pre-built systems combining domain-specific instructions, secure data connectors, and subagents that handle specialized tasks. These templates cover a wide range of financial activities, including research, valuation checks, financial modeling, and compliance screening. For example, a pitch-building agent can generate comparable company analysis in Excel, draft presentation materials in PowerPoint, and prepare communications for clients. The agents can be deployed either as plugins within Claude Cowork and Claude Code or as autonomous systems through Claude Managed Agents.

Anthropic also expanded Claude’s functionality across Microsoft 365 tools, including Excel, PowerPoint, and Word, with Outlook integration expected soon. The add-ins allow Claude to operate directly within these applications, carrying context between them so users do not need to restate information. This enables workflows where financial models created in Excel can automatically feed into presentation decks or written reports without manual transfer.

The update is supported by Claude Opus 4.7, which Anthropic said leads industry benchmarks for financial task performance, including a top score on the Vals AI Finance Agent benchmark. The company is also expanding its data ecosystem through new connectors to providers such as S&P Capital IQ, MSCI, and Morningstar, along with an MCP app from Moody’s that embeds proprietary credit data directly into Claude.

Automation Moves Into Core Financial Workflows

The release reflects a shift from general-purpose AI tools toward specialized systems built for industry-specific tasks. In financial services, where accuracy, compliance, and auditability are critical, pre-configured agents can reduce the time required to deploy AI while maintaining governance controls.

For firms, the ability to automate tasks like financial modeling, reconciliation, and compliance screening could improve efficiency across front, middle, and back office operations. At the same time, Anthropic emphasizes that users remain in control, reviewing and approving outputs before they are used in decision-making or client communications.

The integration with Microsoft 365 also signals a strategy focused on embedding AI directly into existing workflows rather than requiring separate platforms. This approach could lower adoption barriers, as employees can use AI tools within familiar software environments.


OpenAI and Anthropic Eye Acquisitions to Scale Enterprise AI Deployment

OpenAI and Anthropic are pursuing acquisitions of consulting and engineering firms to accelerate enterprise AI deployment. The moves highlight growing demand for skilled implementation services.

By Samantha Reed Edited by Maria Konash
OpenAI and Anthropic target AI deployment firms in acquisitions, scaling enterprise adoption amid talent shortages. Image: SIMON LEE / Unsplash

OpenAI and Anthropic are moving beyond model development and into services, with both companies exploring acquisitions of consulting and engineering firms that help businesses deploy artificial intelligence. According to a Reuters report, OpenAI’s newly formed joint venture is already in advanced talks on three deals, while Anthropic is pursuing a similar strategy through its own investment vehicle. The shift reflects a growing need to bridge the gap between powerful AI systems and real-world enterprise implementation.

OpenAI is raising roughly $4 billion from 19 investors, including TPG, Bain Capital, and Brookfield Asset Management, for a new entity called The Deployment Company. The venture is expected to be formally announced soon and will focus largely on acquiring firms that provide engineering and consulting services. Meanwhile, Anthropic is reportedly raising about $1.5 billion from backers including Blackstone, Hellman & Friedman, and Goldman Sachs to fund similar efforts.

The goal is to bring in hundreds of engineers and consultants who can customize AI models for enterprise clients. While large language models and generative AI tools have advanced rapidly, companies still require hands-on expertise to integrate them into existing systems, workflows, and data environments. This includes adapting models to specific use cases and maintaining them as business needs evolve.

The approach mirrors strategies used by Palantir Technologies, which embeds engineers directly within customer organizations to implement and refine its software. By acquiring service providers, OpenAI and Anthropic could consolidate a fragmented market of smaller firms while building in-house deployment capabilities.

Closing The Implementation Gap

The expansion into services highlights a key constraint in enterprise AI adoption: the shortage of skilled professionals who can operationalize AI systems. Despite the perception of AI as a scalable software business, successful deployment often depends on labor-intensive work carried out by specialists.

For businesses, this means that adopting AI is not simply a matter of licensing software. It requires ongoing collaboration with engineers who can tailor models to specific needs and ensure reliability in production environments. By acquiring consulting firms, OpenAI and Anthropic aim to reduce this bottleneck and accelerate adoption across industries.

The move could also reshape competition in the AI sector. Companies that combine advanced models with strong deployment capabilities may gain an advantage, particularly in enterprise markets where implementation complexity is high.

From Models To Managed Services

The strategy marks a broader shift in how AI companies position themselves. Rather than focusing solely on developing more powerful models, they are increasingly building end-to-end platforms that include deployment, customization, and support.

This evolution aligns with growing enterprise demand for integrated solutions rather than standalone tools. It also suggests a consolidation trend, as larger AI players acquire smaller service providers to expand their capabilities and customer reach.

Meta Expands AI Age Checks and Teen Protections Across Platforms

Meta is expanding its AI-driven age detection systems and Teen Account protections across Instagram and Facebook to better identify underage users and enforce safety measures. The move broadens geographic coverage and adds new visual analysis tools.

By Samantha Reed Edited by Maria Konash
Meta expands AI age detection on Instagram and Facebook, adding visual analysis and stronger teen protections. Image: Meta

Meta has introduced a new set of updates aimed at improving how it identifies and protects younger users across its platforms, including Instagram and Facebook. The company is expanding its use of artificial intelligence to detect underage accounts and enforce age-based protections, while also rolling out Teen Account safeguards to more regions. The changes come as regulatory scrutiny and public concern over youth safety online continue to increase globally.

A key part of the update is the use of more advanced AI systems to identify users under 13, who are not permitted on Meta’s platforms. The company said it is now using visual analysis alongside text-based signals to assess whether an account belongs to a minor. This includes scanning posts, captions, and images for contextual clues such as school references or birthday celebrations. If an account is flagged, it may be deactivated unless the user verifies their age through official checks.

Meta emphasized that the visual analysis does not rely on facial recognition. Instead, it evaluates general characteristics such as physical features and contextual elements in photos and videos to estimate age. The company is also deploying AI tools to assist with reviewing reports of underage users, aiming to improve both speed and accuracy compared with human moderation alone.

Alongside enforcement, Meta is expanding its system that automatically places suspected teens into restricted Teen Accounts. These accounts include built-in protections such as limits on who can contact users and what content they can see. After initial rollouts in the United States, Canada, the United Kingdom, and Australia, the company is extending these measures to 27 European Union countries and Brazil on Instagram. Facebook will also begin using the system in the US, with further expansion planned in the UK and EU.

Tighter Controls and Broader Coverage

The updates reflect a broader shift toward proactive age verification as platforms face increased pressure to protect younger users. By combining AI detection with automatic safeguards, Meta is reducing reliance on self-reported age, which has historically been unreliable.

For teens, this means more accounts will be placed into restricted environments by default, even if users attempt to bypass safeguards. Parents may also see more prompts and tools designed to encourage age transparency. For the wider industry, the rollout signals a move toward automated enforcement at scale, which competitors may need to match as expectations rise.

Meta’s call for app store-level age verification also points to a potential structural change in how age assurance is handled. A centralized system could simplify compliance and create more consistent protections across apps, though it would require cooperation from platform operators.

Policy Pressure and Platform Evolution

Meta’s latest measures build on years of investment in youth safety features, including Teen Accounts across Instagram, Facebook, and Messenger. These systems automatically apply stricter privacy settings and content limitations for users under 18.

The company has increasingly turned to AI to address moderation challenges, including detecting harmful content and identifying policy violations. Age detection remains one of the most complex problems, as users can misrepresent their identity and behavior varies widely across regions.

Similar approaches are emerging across the industry. OpenAI introduced age prediction tools within ChatGPT to adjust safety settings for younger users, while allowing adults to verify their age and lift those restrictions. The parallel efforts highlight a growing consensus among major platforms that automated age assurance will play a central role in meeting regulatory demands and improving online safety for teens.


Unity Rolls Out AI Game Development Assistant in Open Beta

Unity Technologies has introduced an open beta of Unity AI, an in-editor assistant designed to speed up game development through generative tools. The release signals deeper integration of AI into game creation workflows.

By Daniel Mercer Edited by Maria Konash

Unity Technologies has launched the open beta of Unity AI, a new set of artificial intelligence tools embedded directly within its game development editor. The company says the assistant is designed to streamline production by helping developers generate assets, audio, and even playable scenes from simple text prompts or visual references. The move reflects growing demand for faster development cycles as studios seek to reduce repetitive tasks and accelerate iteration.

Unity AI is integrated into the Unity Editor and tailored to the specific workflows of the engine. Unlike general-purpose AI tools, the system is designed to understand project structure, game logic, and the broader creative context of development. This allows it to deliver more relevant outputs, from generating environment assets to assembling interactive scenes that can be tested immediately. Developers can also connect external AI tools through an AI Gateway or integrate workflows via their preferred development environments.

The company emphasized that control remains with developers. Changes introduced by Unity AI can be reviewed, modified, or fully undone, and teams can set permissions to limit how autonomously the AI operates. Generated assets can also be tagged for easier tracking and iteration. Unity says this approach ensures that AI acts as an assistant rather than replacing creative decision making.

Unity AI is available to all developers using Unity 6 and newer versions of the engine. The rollout comes as competitors across the gaming and software industries increasingly embed generative AI into creative tools. By focusing on engine-specific context, Unity aims to differentiate its offering from standalone AI platforms that may lack awareness of game development pipelines.

What It Means

The introduction of Unity AI highlights how generative AI is becoming a core component of development environments rather than an external add-on. For studios, this could reduce production time and costs, particularly for smaller teams that lack dedicated resources for asset creation or prototyping. Faster iteration may also lead to more experimentation and shorter development cycles.

For the broader industry, Unity’s move adds pressure on competing engines and tool providers to integrate similar capabilities. It also raises questions about workflow changes, skill requirements, and how developers balance automation with creative control. For end users, the impact may appear in the form of more frequent game releases and increasingly diverse content.

Industry Backdrop

Unity has long been one of the most widely used game engines, particularly among indie developers and mobile studios. In recent years, the company has expanded its focus beyond core engine tools to include services for monetization, analytics, and now AI-driven development.

The rise of generative AI has already influenced areas such as art creation, code generation, and design prototyping. Major technology companies and startups are racing to embed these capabilities into existing platforms. Unity AI represents a continuation of that trend, aiming to bring AI directly into the day-to-day workflows of game developers while maintaining compatibility with established production pipelines.


OpenAI Releases Its Version of Events in Musk Dispute

OpenAI released its version of events detailing its split with Elon Musk and the origins of their ongoing legal conflict. The company argues Musk sought control and later turned to lawsuits after leaving.

By Maria Konash
OpenAI details Musk dispute history, donations, and lawsuits amid ongoing battle over control and mission. Image: AIstify team

OpenAI has released a detailed account of its dispute with Elon Musk, outlining why the billionaire left the organization and why he is now pursuing legal and public challenges against it. The company claims that as early as 2017, internal discussions acknowledged the need for a for-profit structure to fund advanced AI development. According to OpenAI, Musk pushed for full control of the organization and proposed merging it with Tesla. When those terms were rejected, he exited and reportedly predicted the company had no chance of success.

OpenAI also addressed financial contributions made by Musk, stating he donated $38 million to its nonprofit arm. The company says those funds were used as intended to support its mission. It alleges that Musk is now attempting to reframe that donation as an investment in court, seeking equity in the organization. The dispute has escalated into legal action, with Musk targeting the nonprofit foundation that governs OpenAI.

The company further claims that Musk has engaged in sustained public criticism and legal pressure over several years. It cites reports suggesting a coordinated effort involving intermediaries and even references alleged cooperation with Mark Zuckerberg to undermine OpenAI’s mission. Musk has not confirmed these claims publicly. OpenAI positions these actions as part of a broader attempt to disrupt a competing AI entity, particularly as Musk has since launched his own AI venture, xAI, which is tied to his broader business ecosystem including SpaceX.

OpenAI emphasized that its governance remains rooted in a nonprofit structure, even as it operates a public benefit corporation to scale its technologies. The organization says its foundation is now valued at over $180 billion and has secured a $25 billion commitment aimed at advancing research in areas such as life sciences and disease treatment. It expects to invest at least $1 billion this year alone to accelerate scientific discovery using AI tools.

The Stakes

The dispute underscores growing friction in the AI industry over control, funding models, and long-term governance. As AI development becomes more capital-intensive, hybrid structures combining nonprofit oversight with commercial operations are becoming more common. Legal challenges like Musk’s could influence how these models evolve, particularly around donor rights and ownership claims.

For businesses and developers, the outcome may shape competitive dynamics between major AI players. Musk’s parallel efforts through xAI highlight increasing fragmentation in the market, with multiple well-funded entities pursuing similar goals. For users, including the more than 900 million weekly users of ChatGPT, the dispute is unlikely to affect short-term access but could impact future product direction and safety priorities.

Market Context

OpenAI was founded as a nonprofit with the stated goal of developing artificial general intelligence for the benefit of humanity. As costs grew, it introduced a capped-profit model to attract investment while maintaining oversight. Musk, an early supporter, departed before this transition was fully implemented.

Since then, the AI sector has seen rapid expansion, with major technology companies and startups investing heavily in large language models and infrastructure. Governance and safety have become central concerns, particularly as AI systems gain broader adoption. OpenAI’s emphasis on nonprofit control and safety measures, including youth protections and research initiatives, reflects ongoing efforts to balance innovation with accountability.
