OpenAI Rolls Out Pilot Group Chat Feature for ChatGPT
OpenAI introduces group chat for ChatGPT, letting users collaborate in shared conversations with GPT‑5.1 Auto handling responses, image generation, and more.
By Daniel Mercer, edited by Maria Konash
OpenAI introduces group chat for ChatGPT, letting users collaborate in shared conversations.
Photo: openai.com
OpenAI on Thursday launched a group chat feature for ChatGPT, currently available in pilot regions including Japan, New Zealand, South Korea, and Taiwan. The feature allows users to collaborate directly in the app, offering a shared AI-assisted experience while keeping private chats and personal ChatGPT memory secure.
The pilot is open to Free, Plus, and Team users on both mobile and web. Groups can include 1 to 20 participants and are invitation-only, with members able to leave at any time. For users under 18, OpenAI has implemented extra safeguards and parental controls. Group chats are organized in a labeled sidebar for easy access, and starting one is as simple as tapping the people icon and adding participants directly or via a shared link.
Within the group, ChatGPT functions much as it does in a standard conversation but is optimized for multiple participants. Powered by GPT‑5.1 Auto, the assistant knows when to jump in and when to stay quiet, can generate images, and can respond with emojis, while the chat itself supports file uploads and dictation. Messages between human participants don’t count toward ChatGPT’s usage limits, which apply only when the AI responds.
This update is part of OpenAI’s broader push to evolve ChatGPT from a personal AI assistant into a more social and collaborative platform. It follows recent releases like Sora 2, a standalone AI-driven social media app with TikTok-style video feeds, parental controls, and messaging capabilities.
For a detailed overview of the GPT‑5.1 updates that power this and other ChatGPT improvements, including enhanced reasoning, conversational tone, and personalization features, see OpenAI GPT‑5.1 Update.
Anthropic Surpasses OpenAI In Verified Business Customer Adoption
Anthropic has surpassed OpenAI in verified business customer adoption for the first time, according to new data from fintech firm Ramp. The shift highlights Anthropic’s growing traction among enterprise and technical customers.
By Samantha Reed, edited by Maria Konash
Anthropic surpasses OpenAI in verified business adoption as enterprise demand for Claude accelerates. Image: Anthropic
Anthropic has overtaken OpenAI in verified business customer adoption for the first time, according to new data from fintech platform Ramp.
Ramp’s latest AI Index, which analyzes expense data from more than 50,000 companies using its payment and finance platform, found that 34.4% of participating businesses are now paying for Anthropic services, compared with 32.3% for OpenAI. It marks the first time Anthropic has held the top position in the survey.
According to Ramp economist Ara Kharazian, Anthropic had already established a lead among highly technical industries including finance, technology, and professional services before expanding into broader enterprise categories.
The data also illustrates how rapidly Anthropic has grown over the past year. In May 2025, only 9% of surveyed businesses were paying for Anthropic products; the figure has since risen by more than 25 percentage points. Over the same period, OpenAI’s business adoption share declined by roughly 1 percentage point, while overall enterprise adoption of AI products across the survey rose by 9 percentage points.
Kharazian said Anthropic’s strategy of initially focusing on technical customers and developer-oriented use cases helped establish stronger traction within enterprise environments before broader expansion through products such as Claude Cowork.
The findings align with broader market signals suggesting growing enterprise adoption of Claude models. OpenRouter usage rankings, which track another segment of AI users and developers, last showed OpenAI ahead of Anthropic in December 2025.
Ramp noted that the index is not a complete representation of the overall AI market because it only reflects companies using its platform. However, the dataset remains one of the largest publicly discussed indicators of verified commercial AI spending activity.
Enterprise AI Competition Shifts Toward Deployment And Reliability
The report highlights how competition between leading AI companies is increasingly being shaped by enterprise deployment rather than consumer visibility alone.
Anthropic has spent much of the past year expanding its presence in regulated industries and enterprise workflows, particularly in finance, cybersecurity, operations, and software development. Its Claude family of models has gained traction among businesses seeking longer context handling, coding assistance, and AI agents designed for workplace tasks.
The company has also aggressively expanded enterprise infrastructure and partnerships in recent months, including new deployment initiatives, financial services AI agents, cybersecurity tools, and large-scale compute agreements aimed at supporting business demand.
Meanwhile, OpenAI continues to maintain a dominant consumer footprint through ChatGPT while simultaneously pushing deeper into enterprise deployments through consulting partnerships, deployment services, and productivity integrations.
Amazon Launches Alexa for Shopping AI Assistant Across Its Store
Amazon has launched Alexa for Shopping, a generative AI assistant that combines Alexa+, Rufus, and customer shopping history to deliver personalized product recommendations, price tracking, and automated purchasing. The assistant is available across Amazon’s app, website, and Echo Show devices.
By Samantha Reed, edited by Maria Konash
Amazon launches Alexa for Shopping with AI search, price tracking, automated purchases, and personalized guides. Image: Rubaitul Azad / Unsplash
Amazon has introduced Alexa for Shopping, a new AI-powered shopping assistant designed to combine conversational AI, product expertise, and customer shopping history into a unified retail experience across Amazon’s app, website, and Echo Show devices.
The launch merges capabilities from Alexa+ and Amazon’s Rufus shopping assistant, which the company said helped more than 300 million customers research and compare products in 2025. Alexa for Shopping is now integrated directly into Amazon’s main search bar, allowing users to ask conversational questions, compare products, track orders, generate shopping guides, and automate purchases using natural language.
Amazon said the assistant continuously personalizes recommendations using browsing activity, purchase history, preferences, and conversations across Alexa-enabled devices. The company described the system as a persistent shopping layer that carries context between devices and sessions instead of resetting interactions each time a customer searches.
The assistant can create AI-generated category overviews, compare products side-by-side from search results, surface one-year price history charts, and automatically monitor products for price drops. Customers can also create “Scheduled Actions” that automate recurring shopping tasks such as replenishing household items, tracking book releases, or adding products to carts when prices reach specific targets.
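The price-triggered behavior described above can be sketched in a few lines; this is a hypothetical illustration of how such a "Scheduled Action" might work, not Amazon's actual API, and all function and variable names are invented for the example.

```python
# Hypothetical sketch of a price-triggered "Scheduled Action":
# watch a product's observed prices and add it to the cart the
# first time the price falls to or below the user's target.

def target_reached(current_price: float, target_price: float) -> bool:
    """Return True once the watched price meets the user's target."""
    return current_price <= target_price

def run_price_watch(price_history, target_price, cart):
    """Scan observed prices; add the item to the cart on first trigger."""
    for price in price_history:
        if target_reached(price, target_price):
            cart.append(("watched-item", price))
            return True  # the action fires once, then the watch completes
    return False

cart = []
triggered = run_price_watch([129.99, 124.50, 118.00], target_price=120.00, cart=cart)
print(triggered, cart)  # True [('watched-item', 118.0)]
```

In a real agentic system, the price feed and cart call would be service APIs rather than in-memory lists, but the trigger logic stays this simple.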
Amazon is also expanding agentic shopping capabilities through Shop Direct and its “Buy for Me” feature. The system can discover products from external retailers and, for eligible items, complete purchases automatically using stored payment and shipping information.
The company said Alexa for Shopping can also generate personalized shopping guides for complex purchases such as laptops, TVs, or appliances by summarizing reviews, features, pricing differences, and category insights across Amazon and the broader web.
In addition to mobile and desktop support, Amazon is bringing the full Amazon storefront experience to Echo Show devices for the first time. Customers can browse and purchase products using voice commands, touch controls, or a combination of both.
Alexa for Shopping is rolling out to all U.S. customers this week and does not require a Prime membership, Alexa app subscription, or Echo device.
Amazon Pushes AI Deeper Into Commerce Automation
The launch marks one of Amazon’s most aggressive attempts yet to transform e-commerce from search-based navigation into AI-assisted decision making and task automation.
Rather than relying on keyword searches and static filters, Alexa for Shopping is designed to function as a persistent shopping assistant that remembers preferences, previous conversations, recurring purchases, family information, and shopping behavior across Amazon’s ecosystem.
Features such as automated cart-building, conversational product research, and price-triggered purchases move Amazon closer to agentic commerce systems where AI actively manages portions of the shopping process on behalf of users.
The integration of Rufus product intelligence with Alexa+ personalization also gives Amazon a broader contextual data advantage across retail, smart home devices, and media services.
Google Introduces Googlebook Laptops Built Around Gemini AI
Google has unveiled Googlebook, a new laptop category combining Android and ChromeOS technologies with Gemini AI integrated throughout the system. The devices introduce features such as Magic Pointer contextual actions and AI-generated desktop widgets.
Google has introduced Googlebook, a new category of laptops designed around Gemini AI and built using a combination of Android and ChromeOS technologies. The company said the devices are intended to shift laptops “from an operating system to an intelligence system,” with AI integrated directly into navigation, multitasking, and desktop interaction.
Googlebook devices run on Android 17 with a redesigned laptop-style interface while retaining integration with Google services and Chrome browsing capabilities. The company described the platform as a fusion of Android’s application ecosystem and ChromeOS infrastructure, optimized for Gemini-powered workflows and cross-device continuity.
One of the central features is Magic Pointer, a new cursor system developed with Google DeepMind. When users move the cursor over content, Gemini can suggest contextual actions automatically. For example, pointing at a date inside an email can trigger meeting creation, while selecting multiple images can generate AI-assisted visual compositions such as virtual furniture placement or outfit previews.
Google is also introducing “Create your Widget,” a system that lets users generate desktop widgets through natural language prompts. Gemini can pull information from Gmail, Calendar, search, reservations, reminders, and other Google services to build personalized dashboards dynamically.
The company said Googlebook is designed to function more fluidly across phones and laptops. Features such as Quick Access allow users to browse and use files stored on Android smartphones directly from the laptop without transferring files manually. Mobile apps can also run inside the desktop environment while preserving workflow continuity.
Googlebook hardware will be manufactured through partnerships with Acer, ASUS, Dell Technologies, HP Inc., and Lenovo. Google said the devices will feature premium materials and a new “glowbar” design element intended to visually distinguish Googlebook laptops.
The first Googlebook devices are scheduled to launch this fall.
Google Pushes Gemini Beyond Apps Into Operating Systems
The launch represents one of Google’s clearest attempts so far to position Gemini not simply as an assistant, but as the core interaction layer for future computing devices.
Rather than opening separate AI applications or chat interfaces, Googlebook integrates Gemini directly into the operating system itself through cursor interactions, contextual actions, dynamic widgets, and continuous multitasking support.
The Magic Pointer feature is especially notable because it changes the cursor from a passive navigation tool into an AI-aware interaction system capable of interpreting onscreen context in real time. That approach mirrors a broader industry shift toward embedding AI directly into operating systems and interface layers rather than treating it as an isolated chatbot.
Google also appears to be using Googlebook to unify parts of Android and ChromeOS development into a more integrated AI-first platform strategy.
AI Becomes Central To Personal Computing Competition
The announcement arrives as major technology companies increasingly compete to redesign personal computing around AI-native interfaces.
Laptop and desktop operating systems are evolving from application-centric environments toward systems where AI continuously interprets user context, predicts intent, and automates actions across workflows.
Googlebook positions Google more directly against AI-integrated computing initiatives from companies including Microsoft and Apple, both of which are also embedding generative AI deeper into operating systems and productivity ecosystems.
By combining Gemini with Android’s application ecosystem and Chrome’s browser dominance, Google is attempting to create a tightly integrated AI computing environment spanning phones, laptops, cloud services, and productivity tools. Meanwhile, OpenAI is reportedly accelerating development of its own AI-focused smartphone, which analyst Ming-Chi Kuo said could enter mass production as early as 2027.
Google Explores SpaceX Deal For Orbital Data Centers
Google is reportedly in talks with SpaceX and other launch providers as it explores deploying orbital data centers under its Project Suncatcher initiative. The discussions reflect growing interest in space-based AI infrastructure and computing capacity.
By Olivia Grant, edited by Maria Konash
Google explores SpaceX launch deal for orbital AI data centers as Project Suncatcher targets 2027 prototypes. Image: ActionVance / Unsplash
Google is reportedly in talks with SpaceX over a potential rocket launch agreement tied to the company’s efforts to develop orbital data centers, according to a Wall Street Journal report citing people familiar with the discussions.
The report said Google is also holding conversations with other rocket-launch providers as it evaluates infrastructure options for deploying computing systems in space. The initiative is connected to Google’s previously disclosed Project Suncatcher program, which aims to research space-based data center technology and launch two prototype satellites by early 2027.
Project Suncatcher was first revealed in November as part of Google’s long-term exploration of alternative AI infrastructure systems. The project focuses on whether orbital computing platforms could eventually help address growing energy, cooling, and land constraints associated with terrestrial AI data centers.
A partnership with SpaceX would mark another instance of Elon Musk cooperating commercially with AI rivals he has publicly criticized in the past. Musk has repeatedly attacked Google’s AI strategy while simultaneously expanding his own AI infrastructure ambitions through xAI and SpaceX.
Space-Based Computing Gains Attention In AI Industry
The idea of orbital data centers has shifted from theoretical research toward early-stage infrastructure planning as AI companies search for ways to overcome physical limitations facing existing compute expansion.
Space-based infrastructure offers several potential advantages, including access to uninterrupted solar energy, reduced land and cooling constraints, and theoretically massive long-term compute scalability if launch costs continue declining.
However, major technical challenges remain, including radiation exposure, hardware reliability, maintenance logistics, latency management, and the economics of deploying large-scale compute systems into orbit.
AI Infrastructure Race Expands Beyond Earth
Competition in artificial intelligence is expanding into infrastructure ownership and compute deployment strategy rather than focusing solely on model development.
Last week, Anthropic signed an agreement to access the full compute capacity of SpaceXAI’s Colossus 1 supercomputer facility in Memphis, adding more than 220,000 NVIDIA GPUs to support Claude training and inference workloads. The partnership also included discussions around developing multiple gigawatts of orbital compute infrastructure.
The move followed Musk’s decision to merge xAI directly into SpaceX under a new SpaceXAI structure combining AI models, compute infrastructure, and aerospace operations into a single organization. Analysts said the consolidation could give SpaceXAI a unique advantage if orbital AI infrastructure becomes commercially feasible in the coming years.
U.S. Banks Rush To Fix Vulnerabilities Found By Anthropic Mythos
Major U.S. banks are rapidly patching software vulnerabilities uncovered by Anthropic’s Mythos AI model as concerns grow over AI-driven cybersecurity risks. The system is reportedly identifying weaknesses and attack chains at speeds beyond traditional security workflows.
By Maria Konash
U.S. banks speed up software patching after Anthropic’s Mythos AI uncovers widespread cybersecurity vulnerabilities. Image: David Vincent / Unsplash
Major U.S. banks are racing to patch IT system vulnerabilities identified by Anthropic’s powerful Mythos AI model, triggering urgent software upgrades and faster cybersecurity remediation processes across the banking sector.
According to sources familiar with the matter, several of the country’s largest financial institutions currently have access to Claude Mythos Preview through Anthropic’s Project Glasswing initiative. As banks analyze the findings, they are reportedly uncovering large numbers of previously low- or moderate-priority weaknesses that the AI system can chain together into higher-risk attack paths.
The vulnerabilities span both proprietary and open-source software, with older legacy systems drawing particular scrutiny because of outdated software support and slower patching cycles. Multiple sources said banks are now fixing vulnerabilities within days that previously may have remained unresolved for weeks.
The accelerated remediation effort is also creating operational pressure inside financial institutions. Sources said some banks may need to temporarily take systems offline more frequently to implement updates and security fixes, though institutions are attempting to minimize disruption for customers.
“This is a wake-up call because cyber risk is moving to machine speed, while much of bank defense still operates at human speed,” said Nitin Seth, co-founder and CEO of data and AI services firm Incedo.
Mythos has reportedly proven especially effective at identifying complex attack chains by linking together multiple seemingly minor weaknesses into broader exploitable vulnerabilities. One banking source described the system as forcing institutions into remediation timelines “never previously contemplated.”
Access to Mythos remains limited because of both safety concerns and infrastructure costs. Anthropic initially restricted availability to Project Glasswing partners and a small group of additional organizations. Banks reportedly using the system include JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley.
The rapid adoption of Mythos highlights how advanced AI systems are beginning to reshape cybersecurity operations inside highly regulated industries.
Unlike conventional vulnerability scanners, Mythos reportedly demonstrates stronger reasoning capabilities capable of connecting isolated weaknesses into realistic attack scenarios. Regulators and cybersecurity experts have increasingly warned that frontier AI systems could dramatically accelerate both cyber defense and cyber offense.
A senior banking regulatory official told Reuters the model had proven “as powerful as anticipated,” particularly in its ability to connect vulnerabilities that human analysts might take far longer to identify.
The pressure is especially acute for banks because financial systems often rely on decades-old infrastructure, proprietary software stacks, and interconnected legacy environments that are difficult to modernize quickly without operational risk.
High Costs Create Uneven Access To Frontier Cyber AI
One major challenge for smaller banks is the cost and infrastructure required to use frontier cybersecurity models effectively.
Anthropic prices Mythos at $25 per million input tokens and $125 per million output tokens, making it significantly more expensive than its widely available Claude Opus 4.7 model. Anthropic has said it will provide $100 million in credits to Project Glasswing participants and Mythos customers to support research-preview usage.
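At those quoted rates, per-run costs are easy to estimate. The short sketch below applies the $25/$125 per-million-token pricing to an illustrative workload; the token counts are invented for the example, not figures from Anthropic.

```python
# Estimate the dollar cost of a single Mythos analysis run at the
# quoted rates: $25 per million input tokens, $125 per million output tokens.
INPUT_RATE = 25 / 1_000_000    # dollars per input token
OUTPUT_RATE = 125 / 1_000_000  # dollars per output token

def mythos_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request given input and output token counts."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Illustrative run: 200k tokens of code and context in, 20k tokens of findings out.
cost = mythos_cost(200_000, 20_000)
print(f"${cost:.2f}")  # $7.50
```

Scanning a large legacy codebase can require many such runs, which is why the per-run math matters for smaller institutions weighing access costs.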
Cybersecurity firms involved in Project Glasswing said the model requires entirely new workflows and methodologies to operate effectively. Adam Meyers of CrowdStrike said his team spent an entire weekend developing processes for using Mythos before actively searching for vulnerabilities.
Anthropic has separately attempted to broaden defensive access through Claude Security and published recommendations for organizations without direct Mythos access. The company has also expanded enterprise cybersecurity offerings through its recently announced financial services AI platform and a separate $1.5 billion AI deployment venture backed by firms including Blackstone and Goldman Sachs aimed at helping organizations operationalize Claude-based systems.