Anthropic Partners with Allianz on Enterprise AI

Anthropic has partnered with Allianz to deploy its Claude AI models and custom agents across the insurer’s operations, emphasizing responsible, transparent AI for multi-step workflows and regulatory compliance.

By Samantha Reed | Edited by Maria Konash
Anthropic partners with Allianz to deploy responsible AI across enterprise operations. Photo: Samuel Isaacs / Unsplash

Munich-based global insurer Allianz has announced a partnership with Anthropic to bring the AI company’s large language models and tools into its enterprise operations. Financial terms of the deal were not disclosed. The collaboration emphasizes responsible AI, transparency, and workflow automation for insurance operations.

Scope of the Partnership

The deal includes three primary initiatives. First, Anthropic’s Claude Code, an AI-powered coding tool, will be made available to all Allianz employees. Second, the companies will co-develop custom AI agents capable of executing multi-step workflows with human oversight. Third, an AI logging system will record interactions to ensure transparency and provide regulatory visibility where required.

Allianz CEO Oliver Bäte highlighted the partnership as a step toward addressing critical AI challenges in insurance. “Anthropic’s focus on safety and transparency complements our strong dedication to customer excellence and stakeholder trust,” Bäte said.

Anthropic’s Enterprise Momentum

The Allianz deal follows a string of major enterprise partnerships. In December 2025, Anthropic signed a $200 million deal to integrate its AI models into Snowflake’s platform and make them available to Snowflake’s customers. Earlier, it formed a multi-year collaboration with consulting firm Accenture. In October 2025, the company partnered with Deloitte to deploy its Claude chatbot to 500,000 Deloitte employees, and with IBM to integrate its models into IBM products.

Anthropic currently holds 40% of the enterprise AI market and 54% of the AI coding market, according to a December survey by Menlo Ventures, a venture firm and Anthropic investor. Its enterprise share has grown steadily over the past year, rising from 32% in July 2025.

Competitors are also expanding their enterprise AI offerings. Google launched Gemini Enterprise in October 2025, with early customers including Klarna, Figma, and Virgin Voyages. OpenAI’s ChatGPT Enterprise, launched in 2023, reportedly saw an eightfold increase in enterprise adoption over the past year. Investor surveys indicate 2026 may be the year enterprises begin to realize substantial returns from AI deployments.

Anthropic’s recent deals, including the Allianz partnership, position the company as a leading provider of enterprise AI, with a strong focus on safety, regulatory compliance, and workflow automation. In related news, Anthropic is reportedly preparing to raise $10 billion at a $350 billion valuation, with major backers including GIC, Coatue, Microsoft, and Nvidia; the funds would further support cloud expansion and AI model development.


E*Trade Emerges as Key Retail Partner in SpaceX IPO

Morgan Stanley’s E*Trade is in talks to lead retail distribution for SpaceX’s record IPO, potentially sidelining platforms like Robinhood and SoFi.

By Samantha Reed | Edited by Maria Konash
E*Trade may lead retail share sales in SpaceX IPO, edging out Robinhood and SoFi in one of the largest listings ever. Image: E*Trade

Morgan Stanley’s E*Trade is in discussions to take a leading role in distributing shares to retail investors in SpaceX’s upcoming initial public offering, according to a Reuters report citing people familiar with the matter.

The move could give E*Trade a significant advantage over rival platforms such as Robinhood and SoFi, which have also sought participation in the deal. SpaceX is reportedly considering limiting or excluding those firms from the retail allocation, an unusual step given their growing presence in major IPOs in recent years.

The SpaceX listing is expected to be the largest in history, with strong demand anticipated from both institutional and individual investors.

Retail Access Becomes Strategic Battleground

Retail investors are expected to play a larger role than usual in the SpaceX IPO. The company is reportedly considering allocating up to 30% of shares to individual investors, well above the typical 5% to 10% seen in most public offerings.

Morgan Stanley, a lead underwriter on the deal, is expected to channel a significant portion of that allocation through its E*Trade platform. This strategy would allow the bank to capture more of the retail order flow internally, rather than relying on third-party brokerages.

Robinhood and SoFi remain in discussions but may receive a smaller share of the offering, if any. Fidelity is also reportedly seeking a role in distributing shares through its platform.

The plans are still under discussion and could change as the IPO approaches.

Morgan Stanley’s Push Into Retail

A prominent role in the SpaceX IPO would mark a major win for E*Trade, which Morgan Stanley acquired for $13 billion in 2020. The bank has since expanded its focus on retail trading as part of a broader strategy to diversify revenue beyond traditional investment banking and wealth management.

Securing a central position in a high-profile IPO could strengthen E*Trade’s competitive standing against platforms such as Charles Schwab and Interactive Brokers, particularly as retail participation in equity markets continues to grow.

The allocation strategy also reflects a broader shift in how IPOs are structured, with increased emphasis on engaging individual investors alongside institutional buyers.

Implications for the IPO Market

The SpaceX IPO is shaping up to be a landmark event, not only because of its scale but also due to its unconventional structure and strong retail focus. The involvement of platforms like E*Trade highlights how distribution strategies are evolving in response to changing investor dynamics.

The outcome could influence how future large-scale listings are structured, particularly for technology and AI-driven companies seeking to tap both institutional capital and retail enthusiasm.

As SpaceX moves closer to going public, decisions around share allocation and distribution will play a critical role in shaping demand and setting precedents for the next wave of high-profile IPOs.


Why SpaceX’s IPO Could Be Unlike Anything Before

SpaceX is preparing a record-breaking IPO that could raise $75 billion, combining its space business with AI ambitions through xAI integration.

By Maria Konash
SpaceX eyes $75B IPO at $1.75T valuation, blending space and AI via xAI. Image: SpaceX

SpaceX is preparing what could become the largest initial public offering in history, with plans to raise as much as $75 billion at a valuation of approximately $1.75 trillion.

If successful, the offering would surpass the total capital raised across all U.S. IPOs in recent years and mark the first time a company debuts publicly with a valuation exceeding $1 trillion. The scale of the deal is expected to test investor appetite for high-growth, capital-intensive technology companies, particularly those tied to artificial intelligence.

The IPO also comes at a pivotal moment for the broader AI industry, as companies such as OpenAI and Anthropic consider their own paths to public markets.

A New Hybrid of Space and AI

Unlike traditional IPO candidates, SpaceX is no longer just a space company. Elon Musk has consolidated multiple ventures, including artificial intelligence firm xAI, into a single structure ahead of the listing.

The integration reflects a broader strategy to combine space infrastructure with AI capabilities. Musk has outlined ambitions that include deploying large-scale computing capacity in orbit, potentially enabling new forms of data processing and AI services.

However, the combined entity presents challenges for investors. The newly formed structure lacks a long, unified financial history, making it difficult to evaluate performance using conventional metrics. Much of the company’s future value is tied to projects that remain in early or conceptual stages.

Unconventional IPO Structure

The offering is expected to break with several norms. Reports suggest that up to 30% of shares could be allocated to retail investors, significantly higher than typical IPO allocations.

At the same time, the company’s financial profile may include substantial losses tied to its AI operations. Like other firms developing large-scale AI systems, xAI is believed to be spending heavily on computing infrastructure and model development.

Despite this, investor demand may remain strong, driven by continued enthusiasm for AI and the potential long-term value of integrated space and computing platforms.

Implications for the AI Market

The SpaceX IPO could serve as a key benchmark for how public markets evaluate next-generation AI companies. Its outcome may influence the timing and structure of potential listings by OpenAI and Anthropic, both of which are scaling enterprise offerings and infrastructure investments ahead of possible IPOs.

The deal also highlights the growing convergence between AI and physical infrastructure, as companies invest heavily in data centers, chips, and new computing environments to support advanced models.

Ultimately, SpaceX’s public debut represents more than a fundraising event. It is a test of whether investors are willing to back a combined vision of space and AI at unprecedented scale, despite limited financial transparency and significant execution risks.


Microsoft Adds Multi-Model AI Workflows to Copilot

Microsoft has introduced multi-model capabilities in Copilot, allowing GPT and Claude to collaborate on responses to improve accuracy and reliability.

By Daniel Mercer | Edited by Maria Konash
Microsoft upgrades Copilot with GPT and Claude workflows, boosting accuracy and reducing hallucinations. Image: Microsoft

Microsoft has introduced new multi-model capabilities to its Copilot assistant, enabling users to leverage multiple artificial intelligence systems within a single workflow as competition intensifies in the enterprise AI market.

The update allows Copilot’s Researcher agent to combine outputs from OpenAI’s GPT models and Anthropic’s Claude, marking a shift from relying on a single model to a collaborative AI approach. The company said the feature is designed to improve accuracy, reduce errors, and enhance overall productivity.

The move reflects a broader industry trend toward integrating multiple AI systems to balance strengths and mitigate weaknesses, particularly as businesses increasingly depend on AI for critical workflows.

AI Models Collaborate on Responses

At the core of the update is a new feature called “Critique.” In this workflow, GPT generates an initial response, which is then reviewed by Claude for quality and accuracy before being delivered to the user.

Microsoft said this layered approach helps address one of the key challenges in generative AI: hallucinations, where models produce incorrect or misleading information. By introducing a second model as a reviewer, the system aims to deliver more reliable outputs.

The company plans to expand this capability further by making the process bi-directional, allowing GPT to also review Claude-generated responses. This would create a feedback loop between models, potentially improving performance over time.
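Copilot’s internal orchestration is not public, but the generate-review-revise loop described above can be sketched in a few lines. In the sketch below, `generator` and `reviewer` are hypothetical stand-ins for GPT and Claude clients, assumed to be simple text-in, text-out callables.

```python
def critique_pipeline(prompt, generator, reviewer, max_rounds=2):
    """Draft with one model, have a second review it, revise until approved.

    `generator` and `reviewer` are any callables that take a prompt string
    and return a response string (e.g., thin wrappers around chat APIs).
    """
    draft = generator(prompt)
    for _ in range(max_rounds):
        review = reviewer(
            "Review the answer below for factual errors and unsupported "
            "claims. Reply APPROVE if none are found; otherwise list them.\n\n"
            f"Question: {prompt}\n\nAnswer: {draft}"
        )
        if review.strip().upper().startswith("APPROVE"):
            break  # the reviewer found no issues; ship the draft
        # Feed the critique back to the generator for a revised draft.
        draft = generator(
            f"Question: {prompt}\n\nPrevious answer: {draft}\n\n"
            f"Reviewer feedback: {review}\n\nWrite a corrected answer."
        )
    return draft
```

Making the loop bi-directional, as Microsoft plans, would amount to swapping which model drafts and which reviews on alternate rounds.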

Toward Multi-Model AI Systems

Microsoft is also launching a feature called “Model Council,” which allows users to compare outputs from different AI models side by side. This gives users greater visibility into how different systems interpret the same query and enables more informed decision-making.
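A side-by-side comparison in the spirit of Model Council is simpler still to express. The sketch below is again illustrative rather than Microsoft’s implementation; `models` maps display names to the same kind of hypothetical chat callables used above.

```python
def model_council(prompt, models):
    """Send one prompt to several models and collect the answers by name."""
    return {name: ask(prompt) for name, ask in models.items()}

# Hypothetical usage, where gpt and claude are text-in, text-out wrappers:
# answers = model_council("Summarize Q3 risks", {"GPT": gpt, "Claude": claude})
```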

The updates are part of Microsoft’s broader effort to evolve Copilot into a more advanced agentic system capable of handling complex, multi-step tasks. The company has been expanding access to Copilot Cowork, an AI agent designed to assist with collaborative workflows across enterprise environments.

The introduction of multi-model functionality highlights a shift in strategy, where AI tools are no longer tied to a single provider or architecture. Instead, platforms are increasingly designed to orchestrate multiple models to deliver better results.

Microsoft faces growing competition from other AI providers, including Google’s Gemini and Anthropic’s enterprise-focused tools. By enabling collaboration between leading models, the company is positioning Copilot as a flexible platform that can integrate capabilities from across the AI ecosystem.

The latest updates underscore the importance of reliability and interoperability in enterprise AI adoption, as organizations seek systems that can deliver consistent and trustworthy results at scale. The expansion also aligns with Microsoft’s broader push into applied AI, including the launch of Copilot Health, a secure assistant designed to analyze medical records, wearable data, and health history to deliver personalized health insights.

OpenAI Launches Codex Plugins With Slack and Notion Integrations

OpenAI has launched plugin support for Codex, enabling integrations with tools like Slack, Notion, and Gmail as it builds an ecosystem for AI-driven workflows.

By Daniel Mercer | Edited by Maria Konash
OpenAI launches Codex plugins with Slack, Notion, Figma, and Gmail, expanding into a full workflow ecosystem. Image: Joshua Reddekopp / Unsplash

OpenAI has introduced plugin support for Codex, expanding its development tool into a broader platform for AI-driven workflows with integrations across popular workplace applications.

The new feature allows users to connect Codex with services including Slack, Notion, Figma, Gmail, and Google Drive. Through these integrations, Codex can access external data, automate tasks, and execute workflows that extend beyond traditional code generation.

The launch also marks the beginning of a plugin marketplace strategy, where reusable AI workflows can be distributed and adopted across teams with minimal setup.

Building an AI Workflow Ecosystem

Plugins in Codex are designed as bundled units that combine predefined workflows, integrations with external applications, and support for Model Context Protocol (MCP) servers. This structure allows developers and teams to create reusable configurations tailored to specific tasks.

For example, Codex can be used to summarize Slack channels, manage documents in Google Drive, or generate and modify designs through Figma integrations. These capabilities position the tool as more than a coding assistant, enabling it to function as a general-purpose productivity layer across enterprise environments.
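OpenAI has not published a schema for these bundles, so the sketch below is purely illustrative: the field names are hypothetical, and the point is only to show how a plugin might package named workflows, app integrations, and MCP server references into one installable unit.

```python
from dataclasses import dataclass, field

@dataclass
class PluginBundle:
    """Hypothetical shape of a plugin bundle, not OpenAI's actual schema."""
    name: str
    workflows: list[str]        # named multi-step task definitions
    integrations: list[str]     # external apps the workflows may touch
    mcp_servers: list[str] = field(default_factory=list)  # MCP endpoints

# Illustrative bundle: summarize Slack channels and file the result in Drive.
weekly_digest = PluginBundle(
    name="weekly-digest",
    workflows=["summarize-channels", "file-summary"],
    integrations=["slack", "google-drive"],
    mcp_servers=["https://mcp.example.com/slack"],  # placeholder URL
)
```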

Previously, similar workflows required manual configuration and technical expertise. With the introduction of plugins, users can install and deploy these capabilities through a centralized directory, lowering the barrier to adoption.

The approach aligns with a broader shift in the AI sector toward agent-based systems that can execute multi-step tasks across different tools and services.

Competing in the AI Platform Race

The expansion of Codex into a plugin-enabled platform reflects increasing competition among AI providers to build extensible ecosystems. Rivals have already emphasized integrations and modular architectures, particularly for enterprise use cases.

By launching a plugin marketplace, OpenAI is aiming to create a network effect around Codex, where third-party developers can contribute tools and workflows that enhance the platform’s capabilities. This model mirrors strategies seen in cloud software and developer platforms, where ecosystems play a key role in driving adoption.

The inclusion of widely used services such as Slack, Notion, and Gmail highlights a focus on real-world productivity use cases. It also signals a move toward embedding AI more deeply into everyday workflows, rather than limiting it to isolated development tasks.

As organizations increasingly adopt AI agents to automate complex processes, tools like Codex are evolving to serve as coordination layers across software environments. The addition of plugins positions OpenAI to capture a larger share of this emerging market for AI-powered work platforms.


DDR5 Prices Fall as Google TurboQuant Reshapes AI Memory Demand

DDR5 memory prices are showing early signs of decline after Google’s TurboQuant algorithm reduced AI memory requirements, easing pressure on global DRAM supply.

By Olivia Grant | Edited by Maria Konash
DDR5 prices dip as TurboQuant cuts AI memory demand, easing post-OpenAI surge. Image: Liam Briese / Unsplash

DDR5 memory prices are beginning to show signs of easing after a prolonged surge driven by artificial intelligence demand, with analysts pointing to a recent breakthrough in AI efficiency as a key turning point.

The shift follows the introduction of TurboQuant, a compression algorithm unveiled by Google that significantly reduces the memory requirements of large AI models. By lowering demand for high-bandwidth memory and DRAM, the development is starting to rebalance a market that had been under intense pressure from AI infrastructure expansion.

The price movement marks a rare reversal after a sharp increase in 2025, when expectations around AI-driven demand pushed memory costs to record levels.

AI Demand Fueled Price Surge

Last year, the market reacted strongly to reports that OpenAI had signed preliminary agreements with major memory manufacturers Samsung and SK Hynix for up to 40% of global DRAM output. Although the agreements were non-binding letters of intent rather than firm purchase commitments, they were widely interpreted as indicative of massive future demand.

That perception drove DDR5 prices up by as much as 171%, with high-capacity memory kits becoming significantly more expensive. The surge also reflected broader investment in AI data centers, where memory is a critical component for training and running large-scale models.

However, some large infrastructure projects later faced delays or cancellations amid uncertainty over actual demand, contributing to growing volatility in the memory market.

TurboQuant Shifts Market Dynamics

The release of Google’s TurboQuant algorithm has introduced a new variable into the equation. The technology reduces key-value cache memory requirements by up to six times while maintaining performance, potentially lowering the amount of DRAM needed for AI workloads.
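To put that figure in context, a back-of-the-envelope sizing shows what a six-fold reduction means for a single long-context sequence. The model dimensions below are illustrative, roughly a 70B-class model with grouped-query attention, not figures from Google’s announcement.

```python
def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_elem):
    """Memory held by a transformer's key-value cache for one sequence."""
    # Both keys and values are cached, hence the factor of 2.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_elem

# Illustrative config: 80 layers, 8 KV heads, head dim 128, 32K context, fp16.
fp16 = kv_cache_bytes(layers=80, kv_heads=8, head_dim=128,
                      seq_len=32_768, bytes_per_elem=2)
print(f"fp16 KV cache per sequence: {fp16 / 2**30:.1f} GiB")      # 10.0 GiB
print(f"after ~6x compression:      {fp16 / 6 / 2**30:.1f} GiB")  # ~1.7 GiB
```

On assumptions like these, the per-sequence cache drops from about 10 GiB to under 2 GiB, the kind of headroom that changes how much DRAM a serving node needs and, at scale, how much the AI build-out pulls from the wider memory market.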

This improvement could have a direct impact on data center design, enabling operators to run large models with fewer memory modules. As a result, some supply may shift back toward consumer markets, including gaming and personal computing.

Early signs of this shift are emerging. In the United States, certain DDR5 modules, including Corsair Vengeance kits, have seen modest price declines at major retailers. Similar trends have been reported in parts of Europe, suggesting a broader stabilization in pricing.

Limited Relief for Consumers

Despite these developments, the overall memory market remains constrained. Most DRAM supply continues to be prioritized for enterprise customers, particularly hyperscalers building AI infrastructure.

Industry trackers indicate that while prices are leveling off, widespread declines have yet to materialize across all products. Analysts caution that improvements in efficiency could paradoxically drive further AI adoption, sustaining long-term demand for memory.

The broader impact of TurboQuant may depend on how quickly it is adopted and whether it leads to a net reduction in hardware requirements or enables even larger and more complex AI systems.

For now, the easing of DDR5 prices reflects an early adjustment in a market that has been heavily influenced by AI-driven expectations. It also highlights how advances in software efficiency can have immediate ripple effects across hardware supply chains.
