OpenAI Launches Workspace Agents in ChatGPT for Business Teams

OpenAI has introduced workspace agents in ChatGPT, letting business teams build and share AI agents that autonomously handle multi-step workflows across tools like Slack and internal systems.

By Daniel Mercer, Edited by Maria Konash
OpenAI launches ChatGPT workspace agents, enabling teams to automate workflows and collaborate across tools. Image: OpenAI

OpenAI on Thursday unveiled workspace agents in ChatGPT, a new feature that allows business teams to build, share, and deploy AI agents capable of handling complex, multi-step workflows without constant human oversight. Available now in research preview for ChatGPT Business, Enterprise, Edu, and Teachers plan subscribers, the agents are powered by Codex and run in the cloud, meaning they continue working even when users are offline. The launch signals OpenAI’s push to move beyond single-user productivity tools and into organization-wide workflow automation.

Unlike the existing GPTs feature, which workspace agents are designed to eventually replace, these agents can operate across connected tools, retain memory between sessions, execute code, and take action inside third-party platforms. Teams can deploy agents directly within ChatGPT or integrate them into Slack, where they can respond to requests, answer questions, and file tickets autonomously. OpenAI cited its own internal use cases as examples: its sales team uses an agent to pull call notes, qualify leads, and draft follow-up emails, while its accounting team uses one to assist with month-end close processes including journal entries and variance analysis.

Setup is designed to be accessible to non-technical users. Team members describe a workflow in plain language and ChatGPT guides them through building the agent, connecting relevant tools, and testing the output. Pre-built templates are available for finance, sales, and marketing use cases. Agents can be scheduled to run at set intervals or triggered by incoming messages. For sensitive actions, such as sending emails or editing files, administrators can require human approval before the agent proceeds.
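Purely as an illustration of that setup flow, the pieces of an agent definition (plain-language instructions, connected tools, a trigger, and an approval gate for sensitive actions) could be sketched as a small data structure. OpenAI has not published a configuration format for workspace agents, so every field name and value below is an assumption:

```python
# Hypothetical sketch only: OpenAI has not documented a public
# workspace-agent configuration format, so all field names here
# are invented for illustration.
agent = {
    "name": "lead-qualifier",
    "instructions": "Pull call notes, qualify leads, draft follow-up emails.",
    "connected_tools": ["slack", "crm"],               # assumed tool names
    "trigger": {"type": "schedule", "interval": "daily"},
    "sensitive_actions": ["send_email", "edit_file"],  # actions gated by admins
    "require_human_approval": True,
}

def needs_approval(agent_cfg: dict, action: str) -> bool:
    """True if the action must wait for human sign-off before running."""
    return (agent_cfg["require_human_approval"]
            and action in agent_cfg["sensitive_actions"])
```

In this sketch, sending an email would pause for a human approval, while a read-only action would proceed unattended.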

The Bigger Picture

Workspace agents represent a meaningful shift in how AI fits into business operations. Until now, most enterprise AI tools have focused on helping individual employees work faster. This feature targets the layer above that: the handoffs, approvals, and coordination work that spans teams and systems. For businesses, the practical implication is that routine but high-effort processes (lead qualification, vendor screening, weekly reporting) can be delegated to an agent that follows set policies, logs its activity, and improves with use. Early tester Rippling reported that a workflow previously taking sales reps five to six hours per week now runs automatically, built by a single sales consultant without engineering support.

What Came Before

OpenAI introduced GPTs in late 2023 as customizable versions of ChatGPT for specific tasks, but they were largely limited to single-session, single-user interactions. Workspace agents extend that foundation with persistent memory, cloud execution, and enterprise controls including role-based access, a Compliance API for admin oversight, and built-in defenses against prompt injection attacks.

The move puts OpenAI in more direct competition with enterprise automation platforms such as Microsoft Copilot, Salesforce Agentforce, and ServiceNow, all of which have been building agent-based workflow tools for business customers. Workspace agents will be free through May 6, 2026, after which credit-based pricing takes effect.

Meta Expands AWS Partnership With Massive Graviton Deployment

Meta will deploy tens of millions of AWS Graviton cores to power its next generation of AI systems. The deal highlights rising demand for CPU-driven infrastructure alongside GPUs.

By Olivia Grant, Edited by Maria Konash

Meta has signed an agreement with Amazon Web Services to deploy tens of millions of Graviton processor cores, marking a major expansion of their long-standing partnership. The deployment will support Meta’s next generation of artificial intelligence systems and is expected to scale further over time. The move positions Meta as one of the largest customers of AWS’s custom-designed Graviton chips. It comes as demand for AI infrastructure grows rapidly, particularly for systems that require real-time processing and coordination.

The chips will power a range of workloads across Meta’s platforms, including AI systems that handle billions of user interactions. While graphics processing units remain central to training large AI models, the rise of agentic AI has increased demand for CPU-based computing. These workloads include real-time reasoning, code generation, and orchestrating multi-step processes. AWS’s Graviton5 is designed for such tasks, featuring 192 cores and significantly expanded cache to improve data flow and reduce latency.

Graviton processors are built on the AWS Nitro System, which combines dedicated hardware and software to deliver high performance and security. The infrastructure also supports features such as low-latency communication between compute instances, enabling distributed AI workloads to run efficiently. Meta has previously relied on AWS services, including large-scale use of its AI tools, and this agreement deepens that relationship. The deployment also aligns with Meta’s strategy to diversify its compute resources as it scales AI capabilities.

Infrastructure Shift

The deal reflects a broader shift in how AI infrastructure is designed. While GPUs dominate model training, many emerging AI applications require sustained, high-volume processing that is better suited to CPUs. Agentic AI systems, which can plan and execute multi-step tasks autonomously, rely heavily on this type of compute. By investing in purpose-built chips like Graviton, companies can optimize performance while managing costs more effectively.

For businesses, this trend signals a more complex infrastructure landscape, where different types of processors are used for specific workloads. It may also influence how cloud providers package and price AI services, as demand grows for specialized compute resources. For end users, improved infrastructure can enable faster and more responsive AI-driven features across platforms.

The Road Ahead

The expansion underscores the increasing importance of custom silicon in the AI race. AWS designs Graviton chips to be more energy efficient and cost-effective than traditional processors, with the latest generation delivering up to 25% performance gains. Built on advanced manufacturing processes, these chips help address both cost pressures and sustainability goals as AI workloads scale.

As AI adoption accelerates, infrastructure efficiency is becoming a key competitive factor. Companies like Meta are balancing performance, cost, and energy use while building systems capable of supporting billions of interactions. The partnership with AWS suggests that purpose-built processors will play a larger role in future AI deployments, shaping how large-scale systems are developed and operated.

Cohere Plans Aleph Alpha Acquisition to Expand Europe Presence

Cohere plans to acquire Aleph Alpha and secure a $600 million investment to expand its footprint in Europe. The deal targets demand for sovereign AI in regulated industries.

By Samantha Reed, Edited by Maria Konash
Cohere plans to acquire Aleph Alpha and raise $600M, expanding sovereign AI in Europe. Image: Cohere

Cohere said it plans to acquire Aleph Alpha as part of a push to expand its presence in Europe. The proposed deal, which has not yet closed and remains subject to regulatory approval, would also bring new funding. Schwarz Group, a key backer of Aleph Alpha, intends to invest $600 million into Cohere’s upcoming Series E round. The company expects to complete that funding round in 2026, according to a source familiar with the matter.

Cohere, founded in 2019, has raised about $1.6 billion to date from investors including Nvidia and AMD, and was valued at $7 billion in 2025. The acquisition would strengthen its ability to deliver customized AI systems tailored for regulated sectors such as government, finance, defense, and healthcare. Aleph Alpha brings established relationships with German public sector clients, including work with federal and regional authorities, which could accelerate Cohere’s entry into Europe’s largest economy.

The deal also reflects a strategic shift toward so-called sovereign AI, where organizations retain greater control over data, infrastructure, and deployment. Aleph Alpha, founded in 2019, initially focused on building large language models before pivoting to enterprise applications. It has raised more than $600 million in funding, including grants, and built a presence in Europe’s public sector. Cohere said combining capabilities would enhance its ability to meet demand for secure and compliant AI systems.

Strategic Implications

The planned acquisition highlights growing demand for AI solutions that meet strict regulatory and data sovereignty requirements. Governments and enterprises are increasingly seeking alternatives to global cloud providers that allow greater control over sensitive data. By integrating Aleph Alpha’s regional expertise, Cohere could position itself as a provider of localized AI infrastructure tailored to European standards.

The additional $600 million investment from Schwarz Group also signals continued investor confidence in enterprise-focused AI companies. Access to capital will be critical as firms compete to build infrastructure, scale operations, and meet rising demand for customized AI deployments. For customers, the deal could expand access to AI systems designed specifically for compliance-heavy industries.

European Market Dynamics

Europe has become a focal point for AI development shaped by regulation, including strict data protection and emerging AI governance frameworks. This environment has encouraged the growth of providers that emphasize transparency, security, and local control. Aleph Alpha’s existing contracts with German government entities highlight the importance of trusted domestic partnerships in this market.

For Cohere, the acquisition offers a faster route into Europe compared with building operations from scratch. It also positions the company against both U.S. and regional competitors seeking to capture enterprise AI demand. As governments and corporations prioritize sovereignty and compliance, partnerships like this may become more common, reshaping how AI services are delivered across regions.


DeepSeek Unveils V4 Model With Lower AI Costs

DeepSeek has released a preview of its V4 language model, highlighting lower inference costs and strong performance. The launch intensifies competition in China’s fast-growing AI market.

By Daniel Mercer, Edited by Maria Konash
DeepSeek V4 debuts with lower inference costs and strong performance, intensifying China’s AI race. Image: Solen Feyissa / Unsplash

DeepSeek has released a preview version of its V4 large language model, offering developers early access to its latest capabilities. The Hangzhou-based company said the model is available in both “pro” and “flash” versions, designed for different performance and size requirements. Like its earlier releases, V4 is open-source, allowing users to download, modify, and run the model locally. The launch follows more than a year after DeepSeek introduced its R1 reasoning model, which drew global attention for its performance and low development cost.

DeepSeek claims that V4 delivers strong results in agent-based tasks, knowledge processing, and inference, the process of generating outputs from a trained model. Analysts at Counterpoint Research said the model shows improved efficiency, with lower inference costs compared to earlier versions. The system has also been optimized to work with agent tools such as Claude Code from Anthropic and OpenClaw. These integrations point to a growing focus on AI agents, which automate multi-step tasks using language models.

The release comes as competition in China’s AI sector accelerates. Companies including Alibaba and ByteDance have launched new models this year, intensifying rivalry in both enterprise and open-source segments. Market reactions reflected the shifting landscape, with shares of firms such as MiniMax and Zhipu declining, while chip manufacturers like SMIC and Hua Hong Semiconductor rose following the announcement.

Cost and Capability Shift

DeepSeek’s V4 reinforces a key trend in AI development: improving performance while reducing costs. Lower inference costs make AI tools more accessible to businesses, particularly for applications that require continuous or large-scale usage. For developers, open-source availability allows customization and deployment without relying on centralized providers, potentially accelerating innovation.

The model’s reported ability to run on domestic chips could also shift the balance in the global AI supply chain. If widely adopted, this capability may reduce dependence on U.S.-based hardware providers such as Nvidia. That would support China’s push for greater technological self-sufficiency and reshape how AI infrastructure is built and deployed worldwide.

China’s AI Arms Race

DeepSeek first gained prominence with its V3 model in late 2024 and the R1 reasoning model in early 2025, which reportedly matched or exceeded leading systems at a fraction of the cost. The company said R1 was developed in about two months for under $6 million using lower-capacity chips, raising questions about the scale of spending by larger AI players.

Since then, the market has evolved, with investors and companies increasingly recognizing that Chinese AI developers can compete on both cost and capability. Analysts suggest V4 is unlikely to trigger the same market shock as R1, as expectations have adjusted. However, the model’s positioning against domestic rivals highlights how quickly competition within China has intensified, signaling a more crowded and mature AI ecosystem.


OpenAI Introduces GPT-5.5 as Its Most Capable Model for Real Work Yet

OpenAI has launched GPT-5.5, a new flagship model designed for coding, computer use, knowledge work, and scientific research, with stronger performance, lower token usage, and broader real-world autonomy than GPT-5.4.

By Maria Konash, Edited by AIstify Team
OpenAI has introduced GPT-5.5 as a new class of intelligence for real work, combining stronger coding, reasoning, and computer-use abilities with faster, more efficient performance. Photo: OpenAI

OpenAI has just launched GPT-5.5, a major new model release that the company describes as its most capable system yet for real-world work. The model is designed to move beyond traditional chatbot interactions and into sustained execution of complex, multi-step tasks across software development, research, business operations, and data analysis.

The release reflects a broader shift inside OpenAI toward building systems that act less like assistants and more like collaborators. “GPT-5.5 is built for real work,” the company said, emphasizing its ability to plan, execute, and refine tasks across long time horizons while maintaining coherence and accuracy.

At its core, GPT-5.5 is optimized for coding, computer use, knowledge work, and scientific reasoning: areas where the company says previous models still required significant human supervision. The goal, according to OpenAI, is to close the gap between what frontier models can theoretically do and what they can reliably deliver in practice.

A Leap in Coding, Reasoning, and Execution

GPT-5.5 shows measurable gains across major industry benchmarks. On Terminal-Bench 2.0, which evaluates command-line workflows requiring tool use and planning, the model achieves 82.7 percent, up from 75.1 percent in GPT-5.4. On SWE-Bench Pro, a widely used benchmark for real-world software engineering, it reaches 58.6 percent, again improving on its predecessor.

These improvements translate into tangible gains for developers. OpenAI says GPT-5.5 is better at understanding the structure of large codebases, identifying root causes of failures, and implementing fixes that work across multiple files and systems. Early testers described the model as more reliable in “end-to-end engineering tasks,” where success depends on coordinating multiple steps rather than producing isolated snippets.

One tester noted that GPT-5.5 “feels like it understands the system, not just the code,” highlighting a shift toward deeper reasoning and contextual awareness.

The model also advances OpenAI’s broader push toward agentic workflows, where AI systems can independently complete tasks across tools. On OSWorld-Verified, a benchmark that measures real-world computer use, GPT-5.5 scores 78.7 percent, demonstrating its ability to operate software environments with minimal human intervention.

From Productivity Tool to Economic Engine

The company says the biggest impact of GPT-5.5 may be in knowledge work, where it can generate presentations, build spreadsheets, analyze data, and produce structured outputs at scale. On GDPval, OpenAI’s benchmark covering 44 occupations, GPT-5.5 reaches 84.9 percent, outperforming GPT-5.4 and approaching expert-level performance across a wide range of tasks.

“GPT-5.5 is better at producing real work products, not just answers,” OpenAI said. “It can generate deliverables that are closer to what a professional would produce.”

The model is also more efficient. OpenAI says GPT-5.5 achieves higher quality results with fewer tokens, reducing the number of iterations needed to complete a task. This efficiency lowers the cost of reaching a given level of output quality, even as the model itself becomes more advanced.

Inside OpenAI, the shift is already visible. The company reports that more than 85 percent of employees now use Codex weekly, applying AI to tasks across engineering, finance, marketing, and communications. In one example, teams used GPT-5.5 to analyze speaking request data and generate structured reports, saving several hours per week per employee.

“This is where AI becomes infrastructure,” OpenAI said, describing the model as a system that supports entire workflows rather than isolated tasks.

Advancing Scientific Discovery

Beyond enterprise use, GPT-5.5 is also pushing into scientific research. On GeneBench, a benchmark focused on genetics and quantitative biology, the model shows significant improvement over previous versions. OpenAI says it is better at exploring hypotheses, interpreting ambiguous results, and iterating across complex research workflows.

In one internal experiment, a customized version of GPT-5.5 contributed to discovering a new proof related to Ramsey numbers, a core concept in combinatorics. The result was later verified by researchers, illustrating how AI can assist in advancing mathematical knowledge under human supervision.

“We’re beginning to see AI meaningfully accelerate science,” the company said, while noting that human oversight remains essential.

Safety, Security, and Deployment

OpenAI also highlighted improvements in factual reliability and safety. GPT-5.5 reduces error rates compared to GPT-5.4 and includes stricter safeguards for high-risk domains, particularly cybersecurity. The company says it is expanding controlled access to cyber-related capabilities through its Trusted Access for Cyber program while maintaining tighter usage controls.

“Security and alignment are core to how we deploy these systems,” OpenAI said, adding that stronger guardrails are necessary as models gain more autonomy.

GPT-5.5 is now rolling out to ChatGPT Plus, Pro, Business, and Enterprise users, with GPT-5.5 Pro available for higher-complexity tasks. API access is expected to follow, with pricing set at $5 per million input tokens and $30 per million output tokens, while Pro usage carries higher rates.
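At those quoted rates, estimating a request's cost is straightforward arithmetic. The helper below is an illustrative sketch: the function name and defaults are ours, while the per-million-token rates are the ones stated above.

```python
def gpt55_api_cost(input_tokens: int, output_tokens: int,
                   in_rate: float = 5.00, out_rate: float = 30.00) -> float:
    """Estimate API cost in USD at $5 per million input tokens
    and $30 per million output tokens (the rates quoted above)."""
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

# A task consuming 200k input and 50k output tokens:
# 0.2 * $5 + 0.05 * $30 = $1.00 + $1.50 = $2.50
print(gpt55_api_cost(200_000, 50_000))
```

The same arithmetic scales linearly, so doubling output tokens doubles only the larger $30-per-million component of the bill.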

The model was trained using infrastructure developed in collaboration with Microsoft and NVIDIA, leveraging Azure data centers and GPU systems including H100, H200, and next-generation architectures.

Toward AI That Can Work End-to-End

For OpenAI, GPT-5.5 represents more than another incremental release. It signals a transition toward AI systems capable of carrying real work from start to finish.

“Everything is controlled by code,” the company said. “The better an agent is at reasoning about and producing code, the more capable it becomes across all forms of work.”

With GPT-5.5, OpenAI is betting that the future of AI will be defined not just by intelligence, but by execution — systems that can plan, act, and deliver outcomes at scale.

Anthropic Surpasses OpenAI with $1 Trillion Valuation on Secondary Markets

Anthropic has reached a $1 trillion implied valuation on secondary markets, surpassing OpenAI amid rapid revenue growth. The surge highlights strong investor demand but raises questions ahead of a potential IPO.

By Samantha Reed, Edited by Maria Konash

Anthropic has reached an implied valuation of $1 trillion on secondary markets, overtaking OpenAI to become the most valuable private AI company by this measure. The pricing, reported on platforms such as Forge Global and cited by Business Insider, reflects strong demand for a limited supply of shares. Anthropic’s rise comes as investor interest in artificial intelligence companies intensifies globally. The valuation, however, is based on private share trades rather than a formal funding round or public listing.

A key driver behind the surge is the company’s rapid revenue expansion. According to Bloomberg, Anthropic’s annualized revenue increased from $9 billion at the end of 2025 to $30 billion by March 2026, representing a 233 percent jump in one quarter. Much of this growth has been attributed to demand for AI-powered coding tools, a segment seeing strong enterprise adoption. The sharp increase in revenue has strengthened investor confidence and pushed secondary market pricing higher.
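The quarter's growth figure is easy to verify: going from $9 billion to $30 billion annualized is a rise of (30 - 9) / 9, or roughly 233 percent, matching the reported jump. A one-line check:

```python
def pct_growth(start: float, end: float) -> float:
    """Percentage growth from start to end."""
    return (end - start) / start * 100

# Annualized revenue: $9B -> $30B in one quarter
print(round(pct_growth(9, 30)))  # prints 233
```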

Supply constraints have also played a significant role in inflating valuations. Shares in Anthropic remain tightly held, with employees and early investors having limited opportunities to sell. This scarcity has led to competitive bidding among buyers, pushing some individual offers as high as $1.15 trillion, above the roughly $1 trillion average on secondary platforms. Such dynamics are common in private markets, where pricing can be influenced by limited liquidity rather than broad investor consensus.

What It Means

The implied valuation underscores how quickly capital is flowing into leading AI companies, particularly those demonstrating strong revenue growth. For businesses, this signals intensifying competition in AI tools, especially in high-demand areas like software development automation. Investors may view Anthropic’s performance as a benchmark for the sector, potentially influencing valuations of other private AI firms.

At the same time, the gap between secondary market pricing and expected IPO valuation highlights uncertainty. Reports suggest Anthropic is targeting a public offering in the $400 billion to $500 billion range, significantly below current private market estimates. If accurate, this discrepancy could lead to repricing when shares become publicly traded, affecting investor expectations across the AI market.

The Bigger Picture

Secondary market valuations have historically diverged from eventual public market outcomes. During the 2021 market peak, many private technology companies traded at elevated valuations before experiencing corrections of 60 to 70 percent between 2022 and 2024. This precedent suggests caution in interpreting current pricing as a definitive measure of long-term value.

Anthropic is reportedly working with major banks including Goldman Sachs and JPMorgan on a potential IPO as early as October 2026. The company’s eventual S-1 filing will provide clearer insight into its financials and valuation framework. Until then, secondary market activity offers a snapshot of investor sentiment, but not a final verdict on the company’s worth.
