OpenAI Launches Codex Plugins With Slack and Notion Integrations

OpenAI has launched plugin support for Codex, enabling integrations with tools like Slack, Notion, and Gmail as it builds an ecosystem for AI-driven workflows.

By Daniel Mercer, edited by Maria Konash

OpenAI has introduced plugin support for Codex, expanding its development tool into a broader platform for AI-driven workflows with integrations across popular workplace applications.

The new feature allows users to connect Codex with services including Slack, Notion, Figma, Gmail, and Google Drive. Through these integrations, Codex can access external data, automate tasks, and execute workflows that extend beyond traditional code generation.

The launch also marks the beginning of a plugin marketplace strategy, where reusable AI workflows can be distributed and adopted across teams with minimal setup.

Building an AI Workflow Ecosystem

Plugins in Codex are designed as bundled units that combine predefined workflows, integrations with external applications, and support for Model Context Protocol servers. This structure allows developers and teams to create reusable configurations tailored to specific tasks.

For example, Codex can be used to summarize Slack channels, manage documents in Google Drive, or generate and modify designs through Figma integrations. These capabilities position the tool as more than a coding assistant, enabling it to function as a general-purpose productivity layer across enterprise environments.

Previously, similar workflows required manual configuration and technical expertise. With the introduction of plugins, users can install and deploy these capabilities through a centralized directory, lowering the barrier to adoption.

The approach aligns with a broader shift in the AI sector toward agent-based systems that can execute multi-step tasks across different tools and services.

Competing in the AI Platform Race

The expansion of Codex into a plugin-enabled platform reflects increasing competition among AI providers to build extensible ecosystems. Rivals have already emphasized integrations and modular architectures, particularly for enterprise use cases.

By launching a plugin marketplace, OpenAI is aiming to create a network effect around Codex, where third-party developers can contribute tools and workflows that enhance the platform’s capabilities. This model mirrors strategies seen in cloud software and developer platforms, where ecosystems play a key role in driving adoption.

The inclusion of widely used services such as Slack, Notion, and Gmail highlights a focus on real-world productivity use cases. It also signals a move toward embedding AI more deeply into everyday workflows, rather than limiting it to isolated development tasks.

As organizations increasingly adopt AI agents to automate complex processes, tools like Codex are evolving to serve as coordination layers across software environments. The addition of plugins positions OpenAI to capture a larger share of this emerging market for AI-powered work platforms.


Why SpaceX’s IPO Could Be Unlike Anything Before

SpaceX is preparing a record-breaking IPO that could raise $75 billion, combining its space business with AI ambitions through xAI integration.

By Maria Konash

SpaceX is preparing what could become the largest initial public offering in history, with plans to raise as much as $75 billion at a valuation of approximately $1.75 trillion.

If successful, the offering would surpass the total capital raised across all U.S. IPOs in recent years and would be the first public debut at a valuation above $1 trillion. The scale of the deal is expected to test investor appetite for high-growth, capital-intensive technology companies, particularly those tied to artificial intelligence.

The IPO also comes at a pivotal moment for the broader AI industry, as companies such as OpenAI and Anthropic consider their own paths to public markets.

A New Hybrid of Space and AI

Unlike traditional IPO candidates, SpaceX is no longer just a space company. Elon Musk has consolidated multiple ventures, including artificial intelligence firm xAI, into a single structure ahead of the listing.

The integration reflects a broader strategy to combine space infrastructure with AI capabilities. Musk has outlined ambitions that include deploying large-scale computing capacity in orbit, potentially enabling new forms of data processing and AI services.

However, the combined entity presents challenges for investors. The newly formed structure lacks a long, unified financial history, making it difficult to evaluate performance using conventional metrics. Much of the company’s future value is tied to projects that remain in early or conceptual stages.

Unconventional IPO Structure

The offering is expected to break with several norms. Reports suggest that up to 30% of shares could be allocated to retail investors, significantly higher than typical IPO allocations.

At the same time, the company’s financial profile may include substantial losses tied to its AI operations. Like other firms developing large-scale AI systems, xAI is believed to be spending heavily on computing infrastructure and model development.

Despite this, investor demand may remain strong, driven by continued enthusiasm for AI and the potential long-term value of integrated space and computing platforms.

Implications for the AI Market

The SpaceX IPO could serve as a key benchmark for how public markets evaluate next-generation AI companies. Its outcome may influence the timing and structure of potential listings by OpenAI and Anthropic, both of which are scaling enterprise offerings and infrastructure investments ahead of possible IPOs.

The deal also highlights the growing convergence between AI and physical infrastructure, as companies invest heavily in data centers, chips, and new computing environments to support advanced models.

Ultimately, SpaceX’s public debut represents more than a fundraising event. It is a test of whether investors are willing to back a combined vision of space and AI at unprecedented scale, despite limited financial transparency and significant execution risks.


Microsoft Adds Multi-Model AI Workflows to Copilot

Microsoft has introduced multi-model capabilities in Copilot, allowing GPT and Claude to collaborate on responses to improve accuracy and reliability.

By Daniel Mercer, edited by Maria Konash

Microsoft has introduced new multi-model capabilities to its Copilot assistant, enabling users to leverage multiple artificial intelligence systems within a single workflow as competition intensifies in the enterprise AI market.

The update allows Copilot’s Researcher agent to combine outputs from OpenAI’s GPT models and Anthropic’s Claude, marking a shift from relying on a single model to a collaborative AI approach. The company said the feature is designed to improve accuracy, reduce errors, and enhance overall productivity.

The move reflects a broader industry trend toward integrating multiple AI systems to balance strengths and mitigate weaknesses, particularly as businesses increasingly depend on AI for critical workflows.

AI Models Collaborate on Responses

At the core of the update is a new feature called “Critique.” In this workflow, GPT generates an initial response, which is then reviewed by Claude for quality and accuracy before being delivered to the user.

Microsoft said this layered approach helps address hallucinations, one of the key challenges in generative AI, in which models produce incorrect or misleading information. By introducing a second model as a reviewer, the system aims to deliver more reliable outputs.
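
Microsoft has not published the internals of Critique, but the generate-then-review pattern it describes can be sketched generically. In the snippet below the two model calls are plain stand-in functions rather than real API clients, and the function names and prompt wording are illustrative assumptions, not Microsoft's implementation; a production version would wrap GPT and Claude SDK calls and add a revision step driven by the critique.

```python
def critique_pipeline(prompt, generator, reviewer):
    """Generate-then-review workflow: one model drafts a response,
    a second model critiques the draft, and both are returned."""
    draft = generator(prompt)
    critique = reviewer(
        "Review the following answer for factual accuracy and clarity.\n"
        f"Question: {prompt}\nAnswer: {draft}"
    )
    # A production system would feed the critique back into a revision
    # step; here we simply return both pieces for inspection.
    return {"draft": draft, "critique": critique}

# Stand-in callables; a real deployment would wrap GPT and Claude clients.
gpt_stub = lambda p: f"[GPT draft for: {p}]"
claude_stub = lambda p: f"[Claude review of: {p[:40]}...]"

result = critique_pipeline("What causes inflation?", gpt_stub, claude_stub)
```

Making the loop bi-directional, as Microsoft plans, would amount to calling the same pipeline a second time with the generator and reviewer roles swapped.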

The company plans to expand this capability further by making the process bi-directional, allowing GPT to also review Claude-generated responses. This would create a feedback loop between models, potentially improving performance over time.

Toward Multi-Model AI Systems

Microsoft is also launching a feature called “Model Council,” which allows users to compare outputs from different AI models side by side. This gives users greater visibility into how different systems interpret the same query and enables more informed decision-making.

The updates are part of Microsoft’s broader effort to evolve Copilot into a more advanced agentic system capable of handling complex, multi-step tasks. The company has been expanding access to Copilot Cowork, an AI agent designed to assist with collaborative workflows across enterprise environments.

The introduction of multi-model functionality highlights a shift in strategy, where AI tools are no longer tied to a single provider or architecture. Instead, platforms are increasingly designed to orchestrate multiple models to deliver better results.

Microsoft faces growing competition from other AI providers, including Google’s Gemini and Anthropic’s enterprise-focused tools. By enabling collaboration between leading models, the company is positioning Copilot as a flexible platform that can integrate capabilities from across the AI ecosystem.

The latest updates underscore the importance of reliability and interoperability in enterprise AI adoption, as organizations seek systems that can deliver consistent and trustworthy results at scale. The expansion also aligns with Microsoft’s broader push into applied AI, including the launch of Copilot Health, a secure assistant designed to analyze medical records, wearable data, and health history to deliver personalized health insights.

DDR5 Prices Fall as Google TurboQuant Reshapes AI Memory Demand

DDR5 memory prices are showing early signs of decline after Google’s TurboQuant algorithm reduced AI memory requirements, easing pressure on global DRAM supply.

By Olivia Grant, edited by Maria Konash

DDR5 memory prices are beginning to show signs of easing after a prolonged surge driven by artificial intelligence demand, with analysts pointing to a recent breakthrough in AI efficiency as a key turning point.

The shift follows the introduction of TurboQuant, a compression algorithm unveiled by Google that significantly reduces the memory requirements of large AI models. By lowering demand for high-bandwidth memory and DRAM, the development is starting to rebalance a market that had been under intense pressure from AI infrastructure expansion.

The price movement marks a rare reversal after a sharp increase in 2025, when expectations around AI-driven demand pushed memory costs to record levels.

AI Demand Fueled Price Surge

Last year, the market reacted strongly to reports that OpenAI had signed preliminary agreements with major memory manufacturers Samsung and SK Hynix for up to 40% of global DRAM output. Although the agreements were non-binding letters of intent rather than firm purchase commitments, they were widely interpreted as indicative of massive future demand.

That perception drove DDR5 prices up by as much as 171%, with high-capacity memory kits becoming significantly more expensive. The surge also reflected broader investment in AI data centers, where memory is a critical component for training and running large-scale models.

However, some large infrastructure projects later faced delays or cancellations amid uncertainty over actual demand, contributing to growing volatility in the memory market.

TurboQuant Shifts Market Dynamics

The release of Google’s TurboQuant algorithm has introduced a new variable into the equation. The technology reduces key-value cache memory requirements by as much as a factor of six while maintaining performance, potentially lowering the amount of DRAM needed for AI workloads.

This improvement could have a direct impact on data center design, enabling operators to run large models with fewer memory modules. As a result, some supply may shift back toward consumer markets, including gaming and personal computing.

Early signs of this shift are emerging. In the United States, certain DDR5 modules, including Corsair Vengeance kits, have seen modest price declines at major retailers. Similar trends have been reported in parts of Europe, suggesting a broader stabilization in pricing.

Limited Relief for Consumers

Despite these developments, the overall memory market remains constrained. Most DRAM supply continues to be prioritized for enterprise customers, particularly hyperscalers building AI infrastructure.

Industry trackers indicate that while prices are leveling off, widespread declines have yet to materialize across all products. Analysts caution that improvements in efficiency could paradoxically drive further AI adoption, sustaining long-term demand for memory.

The broader impact of TurboQuant may depend on how quickly it is adopted and whether it leads to a net reduction in hardware requirements or enables even larger and more complex AI systems.

For now, the easing of DDR5 prices reflects an early adjustment in a market that has been heavily influenced by AI-driven expectations. It also highlights how advances in software efficiency can have immediate ripple effects across hardware supply chains.


Anthropic Leak Reveals New Claude Mythos Model

A data leak at Anthropic exposed details of its upcoming Claude Mythos model, described as a major leap in AI capabilities, along with internal documents.

By Samantha Reed, edited by Maria Konash

Anthropic has confirmed details of a forthcoming AI model after a security lapse exposed internal documents, revealing what the company describes as a significant advancement in its Claude family of systems.

The leak, caused by a configuration error in Anthropic’s content management system, made nearly 3,000 unpublished assets publicly accessible. The exposed data included draft blog posts, images, and internal PDFs. Security researchers identified the issue and alerted the company, which then restricted access.

Anthropic said the incident resulted from “human error” and described the materials as early drafts intended for future publication.

New Model Tier Above Opus

Among the leaked documents was information about a new model referred to as Claude Mythos, internally codenamed “Capybara.” The model is expected to introduce a new tier above Anthropic’s current lineup, which includes Opus, Sonnet, and Haiku.

According to the draft materials, the new system is designed to be more capable than the existing Opus models, particularly in areas such as coding, academic reasoning, and cybersecurity. Anthropic confirmed it is developing a next-generation general-purpose model and described it as a “step change” in capability.

The addition of a higher-tier model suggests Anthropic is continuing to scale its systems in response to growing competition in advanced AI, particularly in enterprise and technical domains.

Cybersecurity Concerns and Controlled Release

The leaked documents highlighted cybersecurity as a key area of focus for the new model. Anthropic reportedly considers its capabilities in this domain to be significantly ahead of existing systems, raising concerns about potential misuse.

To address these risks, the company plans to limit early access to organizations focused on cybersecurity defense. This approach is intended to allow institutions to strengthen protections before broader deployment.

Anthropic has previously taken steps to mitigate misuse of its models, including blocking attempts to use its tools for cybercrime. The enhanced capabilities described in the leaked materials indicate a growing emphasis on both offensive and defensive implications of AI systems.

Broader Implications and Internal Exposure

In addition to model details, the leak revealed plans for internal events, including an invite-only gathering for European business leaders. The exposure of such materials underscores the risks associated with managing sensitive information in rapidly evolving AI organizations.

The incident comes at a time when Anthropic is expanding its influence in the AI sector, with increased enterprise adoption and ongoing infrastructure investments. It also aligns with broader strategic developments, as the company is reportedly targeting an IPO as early as October while intensifying its enterprise push and scaling infrastructure to compete more directly with OpenAI.

While Anthropic has moved quickly to secure the exposed data, the leak provides an early look at its next-generation model strategy. It also illustrates how operational vulnerabilities can expose critical information in an industry where technological advances are closely watched.

SoftBank Secures $40B Loan to Expand OpenAI Investment

SoftBank has secured a $40 billion bridge loan to deepen its investment in OpenAI and accelerate its broader AI strategy.

By Samantha Reed, edited by Maria Konash

SoftBank Group has secured a $40 billion bridge loan to fund its growing investments in artificial intelligence, including a deeper commitment to OpenAI, as competition intensifies across the sector.

The Japanese investment firm said the unsecured loan will be used to support its AI strategy and general corporate purposes. The financing, which matures in March 2027, was arranged by a group of major lenders including JPMorgan Chase, Goldman Sachs, Mizuho Bank, Sumitomo Mitsui Banking Corporation, and MUFG Bank.

The move marks one of SoftBank’s largest financing efforts in recent years and highlights founder Masayoshi Son’s renewed focus on AI following a period of volatility in the company’s Vision Fund performance.

Expanding Partnership With OpenAI

SoftBank has been steadily increasing its exposure to OpenAI, the developer of ChatGPT, as generative AI adoption accelerates globally. The company previously committed $30 billion to OpenAI through its Vision Fund 2, positioning itself among the largest investors in the space.

The new financing is expected to further strengthen that relationship, as SoftBank seeks to capitalize on the rapid growth of AI-driven applications and infrastructure. OpenAI, backed by Microsoft, has emerged as a central player in the industry, attracting significant enterprise demand and investor interest.

SoftBank and OpenAI have also collaborated on large-scale initiatives, including the Stargate Project, which aims to invest up to $500 billion in AI infrastructure in the United States over four years. The project reflects the increasing importance of computing capacity and data centers in supporting advanced AI systems.

Strategic Shift Toward AI Infrastructure

The loan underscores SoftBank’s broader strategy to position itself at the center of the AI ecosystem, spanning both software and infrastructure investments. The company has signaled plans to deploy substantial capital into AI-related projects, including a previously announced $100 billion investment in U.S. technology and infrastructure.

This approach aligns with a wider industry trend, where companies are investing heavily in data centers, chips, and cloud platforms to support the growing computational demands of AI models.

SoftBank’s renewed focus on AI comes after years of mixed performance from its Vision Fund, which saw both significant gains and losses across technology investments. By concentrating on AI, the firm is betting on a sector widely viewed as a key driver of future economic growth.

The scale of the financing also reflects the capital-intensive nature of AI development. As companies race to build more powerful systems, access to funding and infrastructure is becoming a critical competitive factor.
