Google Makes Holiday Shopping Easier with New AI Chat and Smart Checkout
Google is introducing new AI-powered shopping experiences, including conversational search, agentic checkout, and Duplex-based store calls, designed to make online shopping faster, easier, and more natural ahead of the holiday season.
By Daniel Mercer, Edited by Maria Konash
Google adds conversational shopping and agentic checkout to Search and Gemini ahead of holidays.
Photo: Shutter Speed / unsplash.com
Just in time for the holiday rush, Google is rolling out a suite of AI-powered shopping tools aimed at modernizing the e-commerce experience. The update introduces conversational shopping in Google Search, Gemini app shopping features, agentic checkout, and even an AI calling tool that can contact local stores to check product availability.
“Shopping shouldn’t feel tedious — it should be natural, easy, and even fun,” said Vidhya Srinivasan, VP and GM of Ads and Commerce at Google. “We want to hold onto the discovery and browsing aspects, but skip the hard parts.”
With the new update, users can type or speak natural-language queries directly into AI Mode in Search — for instance, “Show me cozy sweaters in fall colors” or “Compare budget skincare brands.” Google’s Shopping Graph, which tracks 50 billion products (2 billion updated hourly), powers these responses, surfacing accurate prices, images, and reviews.
Gemini Becomes a Shopping Assistant
The Gemini app now supports richer, visual responses to shopping-related prompts, turning a simple question like “What’s a good budget home office setup?” into a visual shopping list with product suggestions.
For now, this enhancement is exclusive to U.S. users.
Google confirmed that sponsored listings appear within AI Mode in Search, but not yet in the Gemini mobile app while the feature remains experimental.
Agentic Checkout Comes to Google Search
Google’s agentic checkout is another step toward fully automated shopping. It lets users buy products directly through Google Search, handling everything from item selection to payment.
Currently, this feature is rolling out in the U.S. with partners like Wayfair, Chewy, Quince, and select Shopify stores.
AI That Calls Stores for You
One of the most futuristic updates is a Duplex-powered AI tool that can call local stores to check whether a product is in stock, its price, and any ongoing promotions.
Built on Google’s Duplex, Shopping Graph, and payments infrastructure, the tool operates like a personal assistant: you search for a product “near me,” tap “Let Google Call,” and the AI handles the rest.
It’s currently live in the U.S., starting with categories like toys, health & beauty, and electronics. Retailers can opt out of these calls, and Google ensures the assistant clearly identifies itself as AI before proceeding.
Behind the Scenes: Cloud and AI Infrastructure
These new consumer-facing AI capabilities rely heavily on Google’s Private AI Compute infrastructure, announced earlier this month.
The same Titanium Intelligence Enclaves (TIE) and TPU-based systems that power Gemini models enable secure, real-time personalization and privacy-protected computation — a clear example of Google fusing cloud infrastructure with AI commerce.
This shopping upgrade builds on Google’s broader shift toward conversational and agentic AI across its ecosystem, including the rollout of Gemini for TV, which replaces the former Assistant experience on Google TV platforms and brings more natural, dialogue-based interactions. Likewise, the introduction of Gemini for Home brings the next-generation assistant to Nest speakers and smart displays, underpinning the shopping tools with deeper contextual understanding and voice-based automation.
Daniel Mercer covers foundation models, generative AI systems, and applied machine learning deployments across enterprise and consumer platforms. He reports on model launches, performance benchmarks, inference pricing, and the competitive dynamics shaping leading AI labs. His work evaluates compute efficiency, data sourcing strategies, and integration pathways that determine scalability. Daniel takes a systems-level approach, connecting technical documentation and release notes to business outcomes and adoption signals. He frequently analyzes safety mechanisms, model limitations, and reliability claims through testing data and operational evidence. Based in San Francisco, he spends his free time studying urban design and cycling.
Google Explores SpaceX Deal For Orbital Data Centers
Google is reportedly in talks with SpaceX and other launch providers as it explores deploying orbital data centers under its Project Suncatcher initiative. The discussions reflect growing interest in space-based AI infrastructure and computing capacity.
By Olivia Grant, Edited by Maria Konash
Google explores SpaceX launch deal for orbital AI data centers as Project Suncatcher targets 2027 prototypes. Image: ActionVance / Unsplash
Google is reportedly in talks with SpaceX over a potential rocket launch agreement tied to the company’s efforts to develop orbital data centers, according to a Wall Street Journal report citing people familiar with the discussions.
The report said Google is also holding conversations with other rocket-launch providers as it evaluates infrastructure options for deploying computing systems in space. The initiative is connected to Google’s previously disclosed Project Suncatcher program, which aims to research space-based data center technology and launch two prototype satellites by early 2027.
Project Suncatcher was first revealed in November as part of Google’s long-term exploration of alternative AI infrastructure systems. The project focuses on whether orbital computing platforms could eventually help address growing energy, cooling, and land constraints associated with terrestrial AI data centers.
A partnership with SpaceX would mark another instance of Elon Musk cooperating commercially with AI rivals he has publicly criticized in the past. Musk has repeatedly attacked Google’s AI strategy while simultaneously expanding his own AI infrastructure ambitions through xAI and SpaceX.
Space-Based Computing Gains Attention In AI Industry
The idea of orbital data centers has shifted from theoretical research toward early-stage infrastructure planning as AI companies search for ways to overcome physical limitations facing existing compute expansion.
Space-based infrastructure offers several potential advantages, including access to uninterrupted solar energy, reduced land and cooling constraints, and theoretically massive long-term compute scalability if launch costs continue declining.
However, major technical challenges remain, including radiation exposure, hardware reliability, maintenance logistics, latency management, and the economics of deploying large-scale compute systems into orbit.
AI Infrastructure Race Expands Beyond Earth
Competition in artificial intelligence is expanding into infrastructure ownership and compute deployment strategy rather than focusing solely on model development.
Last week, Anthropic signed an agreement to access the full compute capacity of SpaceXAI’s Colossus 1 supercomputer facility in Memphis, adding more than 220,000 NVIDIA GPUs to support Claude training and inference workloads. The partnership also included discussions around developing multiple gigawatts of orbital compute infrastructure.
The move followed Musk’s decision to merge xAI directly into SpaceX under a new SpaceXAI structure combining AI models, compute infrastructure, and aerospace operations into a single organization. Analysts said the consolidation could give SpaceXAI a unique advantage if orbital AI infrastructure becomes commercially feasible in the coming years.
U.S. Banks Rush To Fix Vulnerabilities Found By Anthropic Mythos
Major U.S. banks are rapidly patching software vulnerabilities uncovered by Anthropic’s Mythos AI model as concerns grow over AI-driven cybersecurity risks. The system is reportedly identifying weaknesses and attack chains at speeds beyond traditional security workflows.
By Maria Konash
U.S. banks speed up software patching after Anthropic’s Mythos AI uncovers widespread cybersecurity vulnerabilities. Image: David Vincent / Unsplash
Major U.S. banks are racing to patch IT system vulnerabilities identified by Anthropic’s powerful Mythos AI model, triggering urgent software upgrades and faster cybersecurity remediation processes across the banking sector.
According to sources familiar with the matter, several of the country’s largest financial institutions currently have access to Claude Mythos Preview through Anthropic’s Project Glasswing initiative. As banks analyze the findings, they are reportedly uncovering large numbers of previously low- or moderate-priority weaknesses that the AI system can chain together into higher-risk attack paths.
The vulnerabilities span both proprietary and open-source software, with older legacy systems drawing particular scrutiny because of outdated software support and slower patching cycles. Multiple sources said banks are now fixing vulnerabilities within days that previously may have remained unresolved for weeks.
The accelerated remediation effort is also creating operational pressure inside financial institutions. Sources said some banks may need to temporarily take systems offline more frequently to implement updates and security fixes, though institutions are attempting to minimize disruption for customers.
“This is a wake-up call because cyber risk is moving to machine speed, while much of bank defense still operates at human speed,” said Nitin Seth, co-founder and CEO of data and AI services firm Incedo.
Mythos has reportedly proven especially effective at identifying complex attack chains by linking together multiple seemingly minor weaknesses into broader exploitable vulnerabilities. One banking source described the system as forcing institutions into remediation timelines “never previously contemplated.”
Access to Mythos remains limited because of both safety concerns and infrastructure costs. Anthropic initially restricted availability to Project Glasswing partners and a small group of additional organizations. Banks reportedly using the system include JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley.
The rapid adoption of Mythos highlights how advanced AI systems are beginning to reshape cybersecurity operations inside highly regulated industries.
Unlike conventional vulnerability scanners, Mythos reportedly demonstrates stronger reasoning capabilities capable of connecting isolated weaknesses into realistic attack scenarios. Regulators and cybersecurity experts have increasingly warned that frontier AI systems could dramatically accelerate both cyber defense and cyber offense.
A senior banking regulatory official told Reuters the model had proven “as powerful as anticipated,” particularly in its ability to connect vulnerabilities that human analysts might take far longer to identify.
The pressure is especially acute for banks because financial systems often rely on decades-old infrastructure, proprietary software stacks, and interconnected legacy environments that are difficult to modernize quickly without operational risk.
High Costs Create Uneven Access To Frontier Cyber AI
One major challenge for smaller banks is the cost and infrastructure required to use frontier cybersecurity models effectively.
Anthropic prices Mythos at $25 per million input tokens and $125 per million output tokens, making it significantly more expensive than its widely available Claude Opus 4.7 model. Anthropic has said it will provide $100 million in credits to Project Glasswing participants and Mythos customers to support research-preview usage.
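At those rates, the cost gap is easy to quantify. The sketch below is purely illustrative arithmetic based on the per-token prices reported above; the function name and example token counts are hypothetical, not part of any Anthropic API.

```python
# Illustrative cost math at the reported Mythos rates:
# $25 per million input tokens, $125 per million output tokens.
INPUT_RATE = 25 / 1_000_000    # dollars per input token
OUTPUT_RATE = 125 / 1_000_000  # dollars per output token

def run_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single analysis run at the published rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Example: a scan that consumes 2M input tokens of code and
# produces 400k output tokens of findings.
print(round(run_cost(2_000_000, 400_000), 2))  # 100.0
```

Runs at that scale add up quickly across a large codebase, which helps explain why Anthropic is offsetting usage with $100 million in credits.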
Cybersecurity firms involved in Project Glasswing said the model requires entirely new workflows and methodologies to operate effectively. Adam Meyers of CrowdStrike said his team spent an entire weekend developing processes for using Mythos before actively searching for vulnerabilities.
Anthropic has separately attempted to broaden defensive access through Claude Security and published recommendations for organizations without direct Mythos access. The company has also expanded enterprise cybersecurity offerings through its recently announced financial services AI platform and a separate $1.5 billion AI deployment venture backed by firms including Blackstone and Goldman Sachs aimed at helping organizations operationalize Claude-based systems.
OpenAI Introduces Daybreak in Response to Anthropic’s Mythos Push
OpenAI has introduced Daybreak, a cybersecurity initiative designed to integrate AI-driven defense directly into software development workflows. The platform combines GPT-5.5 models, Codex Security, and partnerships with major security firms to automate vulnerability analysis and remediation.
By Marcus Lee, Edited by Maria Konash
OpenAI launches Daybreak with GPT-5.5 and Codex Security to automate vulnerability detection and patching. Image: OpenAI
OpenAI has launched Daybreak, a cybersecurity initiative aimed at embedding AI-driven defense directly into software development and security operations workflows. The company said the platform combines its GPT-5.5 models, the Codex Security agent framework, and partnerships with major cybersecurity firms to help organizations identify, validate, and remediate vulnerabilities faster.
OpenAI described Daybreak as a system designed to move cybersecurity “from discovery to remediation” while integrating defensive intelligence into the software development process itself. Rather than focusing solely on finding vulnerabilities after deployment, the initiative aims to make software “resilient by design.”
The platform uses multiple AI models depending on workflow sensitivity. GPT-5.5 will support general development and analysis tasks, while GPT-5.5 with Trusted Access for Cyber is intended for verified defensive security operations such as secure code review, malware analysis, vulnerability triage, patch validation, and detection engineering.
OpenAI also introduced GPT-5.5-Cyber, a more permissive version intended for specialized authorized workflows including penetration testing, controlled validation, and red teaming activities under stricter verification and account-level controls.
At the center of the initiative is Codex Security, an agentic cybersecurity system capable of scanning repositories, building editable threat models, identifying realistic attack paths, validating high-risk findings, generating patches, and testing fixes directly inside codebases.
In one demonstration, OpenAI showed Codex Security scanning a software repository, prioritizing exploitable vulnerabilities, generating remediation patches, and returning audit-ready evidence documenting the fixes.
The company said Daybreak is designed to reduce vulnerability analysis workflows from hours to minutes while improving prioritization of high-impact security issues and lowering token usage costs during large-scale code analysis.
OpenAI Expands Its Cybersecurity Push
The launch positions OpenAI more directly against Anthropic in the growing market for AI-driven cybersecurity systems.
Anthropic’s Claude Mythos Preview model previously drew attention after reportedly helping identify and patch 271 vulnerabilities in the Firefox browser alone. That announcement intensified concerns in Washington and across the cybersecurity industry about increasingly capable AI systems discovering exploitable software weaknesses faster than organizations can fix them.
Unlike some AI-assisted security tools focused primarily on vulnerability detection, OpenAI said Daybreak is intended to integrate remediation directly into development pipelines through continuous patch validation, secure code review, and automated remediation workflows.
The company emphasized that stronger cyber capabilities also require stricter safeguards. OpenAI said Daybreak combines expanded defensive capabilities with verification systems, monitoring controls, proportional safeguards, and accountability mechanisms intended to limit misuse.
Security Firms And Governments Prepare For AI-Native Defense
OpenAI is launching Daybreak alongside partnerships with several major cybersecurity and infrastructure companies, including Cloudflare, Cisco, CrowdStrike, Palo Alto Networks, Oracle, Akamai Technologies, Fortinet, and Zscaler.
“We’re excited about the potential of OpenAI’s cyber capabilities to bring stronger reasoning and more agentic execution into security workflows,” said Cloudflare CTO Dane Knecht. “It’s a big step forward for teams to be able to leverage frontier models not only to accelerate velocity, but also to improve their security posture.”
The initiative also comes as governments and regulators increasingly focus on AI-powered cyber capabilities following warnings around advanced systems such as Anthropic’s Mythos. Earlier this year, OpenAI separately announced plans to provide European institutions with access to GPT-5.5-Cyber under its broader EU Cyber Action Plan as policymakers intensify oversight of frontier AI security models.
SoftBank Injects $457 Million Into British AI Chipmaker
SoftBank has invested more than $450 million into Graphcore as the Japanese technology group expands its AI infrastructure and semiconductor ambitions. The funding follows SoftBank’s acquisition of the British AI chip company in 2024.
By Olivia Grant, Edited by Maria Konash
SoftBank invests $457M in Graphcore to expand AI chip and infrastructure efforts. Image: Vishnu Mohanan / Unsplash
SoftBank Group has injected more than $450 million into British AI chip company Graphcore as the Japanese technology conglomerate accelerates investments in artificial intelligence infrastructure and semiconductor development.
According to a filing with the UK’s Companies House, Graphcore issued a single share valued at approximately $457 million on April 10. A Graphcore spokesperson confirmed the funding came from SoftBank. Sources familiar with the arrangement told CNBC the investment represents only part of the capital Graphcore is expected to receive from SoftBank this year.
SoftBank acquired Graphcore in 2024 after the UK startup struggled to compete commercially against dominant AI chip suppliers such as Nvidia. Before the acquisition, Graphcore had raised hundreds of millions of dollars and was once positioned as a potential challenger in the rapidly expanding AI accelerator market.
At the time of the acquisition, SoftBank said Graphcore would help support its broader ambitions around artificial general intelligence development. The company has since become part of SoftBank’s growing portfolio of AI infrastructure and semiconductor assets.
The new funding comes as SoftBank sharply increases spending across AI hardware, compute infrastructure, and data center projects. The company is involved in the $500 billion Stargate AI infrastructure initiative alongside OpenAI and Oracle, while also pursuing additional semiconductor and robotics investments globally.
SoftBank founder and CEO Masayoshi Son previously described Graphcore as “a company with deep expertise in chip design,” adding that the acquisition strengthened SoftBank’s semiconductor strategy alongside chip architecture company Arm Holdings.
Graphcore has also expanded internationally since the acquisition. In October, the company announced plans to invest up to £1 billion into a new AI campus in Bengaluru, India, focused on AI, silicon engineering, software, and systems development.
SoftBank Expands Its AI Infrastructure Strategy
The Graphcore funding highlights SoftBank’s broader effort to build an integrated AI infrastructure ecosystem spanning semiconductors, compute, robotics, and large-scale data centers.
Over the past two years, SoftBank has aggressively repositioned itself around AI after previously focusing heavily on venture capital investments through the Vision Fund. The company has since shifted toward owning strategic infrastructure assets directly involved in AI model training and deployment.
In addition to Graphcore and Arm, SoftBank also acquired silicon design company Ampere Computing in 2025. Reports have additionally indicated the company is exploring major AI data center projects in Europe, including a potential $100 billion investment in AI infrastructure in France following discussions with Emmanuel Macron, while also considering a standalone AI and robotics business listing in the United States.
Competition For AI Chips Intensifies
The investment also reflects increasing competition in AI semiconductors as companies seek alternatives to Nvidia’s dominant position in the market for AI accelerators.
While Graphcore struggled to achieve broad commercial adoption independently, SoftBank appears to view the company’s chip architecture and engineering expertise as strategically valuable for future AI systems and infrastructure deployments.
Demand for AI compute hardware has surged globally alongside the rapid expansion of generative AI models and large-scale enterprise AI workloads. That growth has pushed technology companies and investors to secure access not only to chips, but also to energy, networking infrastructure, manufacturing capacity, and advanced semiconductor design talent.
For SoftBank, strengthening Graphcore may provide another pathway to participate directly in the long-term buildout of AI infrastructure rather than relying solely on minority investments in external AI companies.
Thinking Machines Introduces AI Models for Live Multimodal Collaboration
Thinking Machines Labs introduced a research preview of “interaction models” designed for continuous real-time collaboration across audio, video, and text. The system combines live multimodal interaction with asynchronous reasoning and tool use.
Thinking Machines Labs introduced a research preview of what it calls “interaction models,” a new class of AI systems designed to collaborate with users continuously across audio, video, and text rather than through traditional turn-based prompts.
The company said the models are trained from scratch to support real-time interaction, allowing users and AI systems to speak, interrupt, observe, respond, and work simultaneously. The architecture is built around “micro-turns” that process roughly 200 milliseconds of input and output at a time, enabling continuous two-way interaction instead of waiting for users to finish speaking or typing before responding.
According to Thinking Machines, the system combines a real-time interaction model with a separate asynchronous background model responsible for longer reasoning tasks, tool use, browsing, and workflow execution. The interaction layer remains active throughout the process while integrating results from the background model as they arrive.
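The two-model split can be sketched with ordinary concurrency primitives. This is a minimal illustration of the pattern as described, not Thinking Machines' implementation: every name is hypothetical, a thread stands in for the background reasoning model, and text strings stand in for audio; only the ~200 ms micro-turn budget comes from the article.

```python
import queue
import threading
import time

MICRO_TURN_SECONDS = 0.2  # ~200 ms of input/output per micro-turn, per the article

def background_worker(tasks: queue.Queue, results: queue.Queue) -> None:
    """Stand-in for the asynchronous model: slower reasoning off the hot path."""
    while True:
        task = tasks.get()
        if task is None:
            break
        time.sleep(0.05)  # simulate tool use / longer reasoning
        results.put(f"result for {task!r}")

def interaction_loop(chunks, tasks: queue.Queue, results: queue.Queue):
    """Stand-in for the real-time model: responds every micro-turn, folding in
    background results when they arrive instead of blocking on them."""
    responses = []
    for chunk in chunks:
        tasks.put(chunk)  # hand long-running work to the background model
        deadline = time.monotonic() + MICRO_TURN_SECONDS
        reply = f"ack {chunk!r}"
        try:
            # Wait only up to the micro-turn budget for a background result.
            ready = results.get(timeout=max(0.0, deadline - time.monotonic()))
            reply += f" + {ready}"
        except queue.Empty:
            pass  # not ready yet; integrate it on a later micro-turn
        responses.append(reply)
    return responses

tasks, results = queue.Queue(), queue.Queue()
threading.Thread(target=background_worker, args=(tasks, results), daemon=True).start()
out = interaction_loop(["hello", "translate this"], tasks, results)
tasks.put(None)
print(len(out))  # one response per micro-turn
```

The key property the sketch shows is that the interaction layer never stalls: it produces a reply within each micro-turn budget whether or not the background model has finished.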
The company argued that current AI systems create a “collaboration bottleneck” because most models operate through rigid turn-taking interfaces that limit human involvement during reasoning and execution. Thinking Machines said its approach aims to make AI collaboration function more like natural human conversation.
The research preview demonstrates several capabilities that are difficult to achieve in standard voice assistants or multimodal chat systems. These include simultaneous speech between user and model, proactive verbal and visual interjections, continuous visual monitoring, real-time translation, concurrent tool use during conversations, and direct awareness of elapsed time.
For example, the company showed scenarios where the model corrected spoken language mistakes while users continued speaking, counted physical exercises through live video streams, reacted to coding errors as they appeared onscreen, and performed live multilingual translation without pausing conversations.
Interaction Becomes A Core AI Capability
The announcement reflects a broader shift in AI development toward systems optimized for continuous collaboration rather than isolated prompt-response exchanges.
Most current real-time AI products rely on external orchestration layers such as voice activity detection systems and separate dialogue managers to simulate interactivity. Thinking Machines argues those approaches create limitations because the intelligence governing interruptions, timing, and conversational flow exists outside the model itself.
Instead, the company embedded interaction directly into model training and architecture. That allows responsiveness, interruption handling, simultaneous speaking, and multimodal awareness to improve alongside overall model capability as systems scale.
The architecture also differs from many multimodal systems by minimizing reliance on large standalone audio or video encoders. Audio, video, and text are processed together through shared transformer infrastructure using lightweight embedding layers and early fusion techniques.
Benchmarks Highlight Speed And Responsiveness
Thinking Machines said its TML-Interaction-Small model achieved stronger combined responsiveness and interaction quality than several existing commercial real-time AI systems across internal and public benchmarks.
The company highlighted improvements in latency, interruption handling, simultaneous conversation, proactive responses, and continuous multimodal awareness. Internal evaluations also tested capabilities that many current voice models cannot reliably perform, including reacting to visual changes without explicit prompts and speaking concurrently with users during live tasks.
The released model is currently a 276-billion-parameter mixture-of-experts system with 12 billion active parameters at runtime. Thinking Machines said larger interaction models are already pretrained but remain too computationally expensive for low-latency deployment today.
The company added that future work will focus on longer session memory management, infrastructure optimization, safety research for real-time multimodal interaction, and deeper coordination between interactive and background reasoning systems.
The announcement also follows a recently expanded partnership between NVIDIA and Thinking Machines Labs to deploy next-generation Vera Rubin AI systems for frontier model training.