OpenAI Revamps ChatGPT Shopping After Checkout Pivot

OpenAI has overhauled ChatGPT’s shopping features, focusing on product discovery after scaling back its Instant Checkout initiative. The update introduces visual browsing and merchant integrations.

By Samantha Reed | Edited by Maria Konash
OpenAI revamps ChatGPT shopping with visual discovery and deeper merchant integrations. Image: OpenAI

OpenAI is rolling out a redesigned shopping experience in ChatGPT, shifting its strategy toward product discovery after scaling back its earlier Instant Checkout feature.

The update allows users to search for products by describing what they need or uploading images for reference. ChatGPT then generates visual results that can be compared side by side, with details such as pricing, features, and reviews integrated into the interface.

OpenAI said the system has been improved in terms of speed, relevance, and product coverage, enabling more accurate and up-to-date results. The goal is to simplify a process that often requires users to navigate multiple websites and sources before making a decision.

Pivot Away From In-Chat Transactions

The redesign follows OpenAI’s decision to move away from Instant Checkout, a feature launched last year that enabled users to complete purchases directly within ChatGPT. The company initially positioned the tool as a key step toward AI-driven commerce.

However, the feature faced challenges. Analysts noted difficulties in onboarding merchants, maintaining accurate product data, and supporting common e-commerce functions such as multi-item carts and loyalty integrations.

OpenAI acknowledged these limitations, stating that the checkout model did not provide the flexibility required for a broad retail ecosystem. Instead, the company is now focusing on helping users discover products while allowing merchants to retain control over transactions.

Under the new approach, purchases are completed through external merchant platforms, often via in-app browsers or dedicated integrations.

Expanding Merchant Ecosystem

The updated experience is supported by deeper retailer integrations: merchants can now provide product feeds and promotional data directly to ChatGPT, helping ensure their offerings are accurately represented in search results.

Major brands including Target, Sephora, and Nordstrom have already adopted the new system. Additional integrations allow companies to build custom applications within ChatGPT, giving them more control over the user experience and transaction flow.

Walmart has introduced a dedicated in-ChatGPT shopping interface that supports account linking, loyalty programs, and payments. Meanwhile, Shopify is expanding its role by enabling merchants to connect their storefronts to ChatGPT’s product catalog and complete purchases through embedded browsing experiences.

Shopify is also launching a new service called Agentic Plan, designed to help merchants without existing storefronts surface their products across AI platforms, including ChatGPT and Google Gemini.

The shift reflects a broader trend toward “agentic commerce,” where AI systems assist users in navigating complex purchasing decisions rather than handling transactions directly.

OpenAI’s updated strategy positions ChatGPT as a discovery layer within the retail ecosystem, focusing on intent generation and product comparison. As AI becomes more integrated into shopping workflows, the balance between platform control and merchant autonomy is likely to remain a key area of development.

AI & Machine Learning, Consumer Tech, News

Anthropic’s Claude Can Now Complete Tasks on Your Computer

Anthropic has upgraded Claude with the ability to control a user’s computer and complete tasks autonomously. The move intensifies competition in the fast-growing AI agent market.

By Daniel Mercer | Edited by Maria Konash
Anthropic gives Claude computer control, enabling autonomous tasks amid rising AI competition. Image: Andras Vas / Unsplash

Anthropic has introduced new capabilities for its Claude AI model that allow it to control a user’s computer and execute tasks autonomously, marking a significant step toward fully agentic systems.

The update enables users to assign tasks to Claude remotely, including from a mobile device, with the AI carrying out actions directly on a connected computer. According to the company, Claude can open applications, navigate web browsers, and interact with files such as spreadsheets.

In one demonstration, a user asked Claude to export a presentation as a PDF and attach it to a meeting invitation. The system completed the multi-step workflow without manual intervention, highlighting the model’s ability to operate across software environments.

Push Toward Always-On AI Agents

The release reflects a broader shift in the AI industry toward building agents capable of acting independently on behalf of users. These systems are designed to handle complex, multi-step tasks and remain active even when users are not directly engaged.

Anthropic’s update follows growing interest in agent-based tools such as OpenClaw, which gained traction for enabling users to interact with AI through messaging platforms like WhatsApp and Telegram. OpenClaw operates locally on devices, allowing access to files and applications, a model that Anthropic is now partially replicating.

Competition in this space is intensifying. Nvidia recently introduced an enterprise-focused version of OpenClaw, while OpenAI has recruited its creator as part of efforts to develop next-generation personal AI agents.

Anthropic’s new feature builds on its broader ecosystem, including tools like Dispatch, which allows continuous interaction with Claude across devices and supports task assignment in an ongoing workflow.

Safeguards and Early Limitations

Anthropic emphasized that the computer control capability is still less mature than Claude’s coding and text-based capabilities. The company acknowledged that the system can make mistakes and that security risks remain a concern.

To address these issues, Claude requires user permission before accessing new applications or executing certain actions. The company said it has implemented safeguards to reduce potential misuse, though it noted that threats continue to evolve.

The introduction of computer control highlights both the potential and the challenges of agentic AI. While the technology promises increased productivity and automation, it also raises questions about reliability, security, and user oversight.

AI & Machine Learning, News

Alibaba Unveils New AI Chip Built for the Next Generation of AI Agents

Alibaba has unveiled a new CPU designed for AI agents, focusing on inference and customizable workloads. The chip reflects China’s push to build domestic AI infrastructure.

By Olivia Grant | Edited by Maria Konash
Alibaba launches XuanTie C950 CPU to power AI agents and boost inference performance. Image: Ban Daisy / Unsplash

Alibaba has introduced a new processor aimed at supporting artificial intelligence agents, marking the latest step in the company’s effort to expand its semiconductor capabilities and reduce reliance on foreign technology.

The chip, called XuanTie C950, is a central processing unit designed to handle inference workloads, the stage where trained AI models are deployed to perform real-world tasks. Unlike graphics processing units, which dominate AI model training, CPUs are critical for executing sequential operations and managing multi-step processes typical of agent-based systems.

Alibaba said the XuanTie C950 is optimized for “agentic” AI, referring to systems capable of autonomously completing tasks on behalf of users. These systems often require coordination across multiple steps, making CPU performance and flexibility a key factor.

Focus on Inference and Customization

The chip was developed by Alibaba’s DAMO Academy and is built on the open-source RISC-V architecture. Unlike proprietary designs such as those from Arm, RISC-V allows companies to customize processor designs without paying licensing fees, offering greater flexibility and potential cost advantages.

Alibaba said the XuanTie C950 can be tailored for specific inference patterns, enabling customers to optimize performance for particular use cases. According to the company, this customization delivers more than a 30% performance improvement compared to some mainstream CPU products.

The focus on inference reflects a broader shift in AI infrastructure. As models move from training to deployment, demand is increasing for hardware that can efficiently run AI applications at scale. CPUs play a central role in orchestrating these workloads, particularly in enterprise and cloud environments.

Alibaba plans to deploy the chip within its own data centers rather than sell it directly. The company’s strategy is to integrate the hardware into its cloud services, offering AI capabilities to customers through its platform.

Strategic Push for Semiconductor Independence

The launch is part of Alibaba’s broader effort to strengthen its semiconductor ecosystem through its T-Head division. Earlier this year, the company introduced another AI-focused chip, the Zhenwu 810E, as it continues to build out its hardware stack.

China’s technology sector has increasingly prioritized domestic chip development amid ongoing U.S. export restrictions that limit access to advanced semiconductors, particularly high-performance GPUs. These constraints have accelerated investment in alternative architectures and in-house designs.

Analysts note that while the XuanTie C950 may not immediately drive significant revenue growth, it plays an important role in improving supply chain resilience and reducing dependency on external suppliers.

The chip also highlights a growing diversification in AI hardware. While GPUs remain dominant for training large models, CPUs and other specialized processors are becoming more important as AI systems evolve toward agent-based workflows.

Alibaba’s latest release underscores how major technology companies are expanding beyond software to develop the infrastructure needed to support the next generation of AI applications.

AI That Thinks Like a Human Might Already Be Here, Says Nvidia CEO

Nvidia CEO Jensen Huang claims AGI may already exist, reigniting debate over how artificial general intelligence is defined and measured across the industry.

By Samantha Reed | Edited by Maria Konash
Jensen Huang says AGI may already exist, fueling debate over what defines human-level AI. Image: Steve Johnson / Unsplash

Debate over the arrival of artificial general intelligence, or AGI, is intensifying as industry leaders offer increasingly divergent definitions of the milestone.

Nvidia CEO Jensen Huang recently said he believes AGI has already been achieved, a claim that underscores how flexible interpretations of the concept have become. His comments came during a discussion about the future of AI systems and their capabilities.

AGI is generally understood as a form of artificial intelligence that can perform tasks at a level comparable to human intelligence across a wide range of domains. However, there is no universally accepted definition, allowing companies and researchers to apply different benchmarks.

Competing Definitions of AGI

Huang has previously defined AGI as software capable of passing tests that approximate general human intelligence at a competitive level. Under that framework, he suggested in 2023 that such systems could emerge within five years.

In more recent remarks, Huang appeared to adopt a broader interpretation. When asked whether AI could build and run a billion-dollar company, he suggested that the threshold for AGI may already have been met.

His argument rests on a narrower interpretation of success, focusing on the ability of AI systems to generate significant economic value, even if only temporarily. This contrasts with more traditional definitions that emphasize sustained autonomy, reasoning, and adaptability across complex real-world scenarios.

The discussion reflects a broader trend in the AI industry, where the definition of AGI is often shaped by context, incentives, and technological progress.

Industry Pressures and Expectations

The debate comes at a time when leading AI companies are investing heavily in infrastructure, research, and product development. These efforts have driven rising costs and increased pressure to demonstrate tangible progress toward advanced capabilities.

AGI has become a central concept in this narrative, often used to frame long-term goals and justify large-scale investment. However, the lack of a clear definition makes it difficult to measure progress or compare claims across organizations.

Some analysts argue that current AI systems, including agent-based tools capable of executing complex workflows, represent meaningful steps toward general intelligence. Others maintain that these systems remain specialized and dependent on human oversight.

The emergence of autonomous agents has added complexity to the discussion. These systems can perform multi-step tasks and interact with software environments, but they do not yet exhibit the full range of cognitive abilities associated with human intelligence.

As a result, the question of whether AGI has been achieved remains largely philosophical. The answer depends less on technological breakthroughs and more on how the concept itself is defined.

The ongoing debate highlights a key challenge for the AI industry: aligning expectations, definitions, and technical reality as development accelerates. Until a consensus emerges, claims about AGI are likely to remain contested, reflecting both genuine progress and differing interpretations of what constitutes general intelligence.

AI & Machine Learning, News

Nvidia’s New AI Model Can Generate Human and Robot Movement from Text

Nvidia has unveiled Kimodo, a motion diffusion model that generates high-quality human and robot movements from text and constraints using large-scale motion capture data.

By Ethan Caldwell | Edited by Maria Konash
Nvidia unveils Kimodo, a motion diffusion AI model generating 3D human and robot motion from text. Image: Possessed Photography / Unsplash

Nvidia has introduced Kimodo, a new artificial intelligence model designed to generate high-quality 3D motion for humans and robots using text prompts and kinematic constraints. The system represents a step forward in motion synthesis, an area increasingly important for robotics, simulation, and digital content creation.

The model, trained on approximately 700 hours of optical motion capture data, reflects a broader push to scale training datasets in order to improve realism and control. Publicly available motion capture datasets have historically been limited in size, constraining the performance of earlier generative models.

Kimodo builds on this by enabling motion generation directly from natural language descriptions. Users can input prompts to create animations of human movement, reducing the need for manual animation or motion capture sessions. The system can also generate motion for robotic platforms, including the Unitree G1 humanoid robot, allowing developers to produce motion instructions for machines without relying on human operators.

Flexible Control Through Text and Constraints

In addition to text prompts, Kimodo supports a wide range of kinematic constraints. These include full-body keyframes, joint-level positioning and rotation, as well as two-dimensional waypoints and motion paths.

This flexibility allows developers to guide motion generation at different levels of detail, from general behavioral descriptions to precise physical positioning. The model’s architecture incorporates a two-stage denoising process, separating root motion from body movement, which helps reduce artifacts and improve consistency.

The system’s motion representation is designed to handle diverse input types, enabling it to adapt across use cases in both digital and physical environments. Nvidia said its experiments show that scaling both dataset size and model complexity leads to measurable improvements in motion quality and control accuracy.

Applications Across Robotics and Media

High-quality motion generation has applications across robotics, gaming, film production, and simulation. In robotics, it can accelerate training and deployment by providing synthetic motion data and control instructions. In media, it can streamline animation workflows and reduce production costs.

Kimodo’s ability to generate both human-like motion and robot-specific movement highlights the convergence between AI-driven simulation and real-world automation. By bridging these domains, the model could support more advanced human-robot interaction and autonomous systems.

Nvidia has made a demo of Kimodo available through a public interface, though access may be limited due to demand. The release underscores the company’s continued investment in applying generative AI to physical systems, extending beyond text and images into movement and control.

AI & Machine Learning, News, Robotics & Automation

Tesla and SpaceX Join Forces on ‘Terafab’ AI Chip Factory

Elon Musk announced plans for Terafab, a dual chip factory project by Tesla and SpaceX to produce AI chips for vehicles, robots, and space-based data centers.

By Olivia Grant | Edited by Maria Konash
Elon Musk plans Terafab AI chip factories in Texas to power Tesla, SpaceX, and future space computing. Image: Manuel / Unsplash

Elon Musk has announced plans for a new semiconductor manufacturing initiative called Terafab, a large-scale facility in Austin, Texas, that will produce advanced chips for Tesla and SpaceX.

The project will consist of two dedicated fabrication plants, each focused on a single chip design. One factory will produce chips for Tesla’s electric vehicles and its Optimus humanoid robots, while the second will develop specialized processors for artificial intelligence systems operating in space.

Musk said the initiative is driven by growing demand for computing power across his companies. He noted that existing global chip production is insufficient to meet future requirements, particularly as AI applications expand.

“We either build the Terafab or we don’t have the chips,” Musk said during a presentation in Austin, emphasizing the strategic importance of vertical integration in semiconductor supply.

Expanding AI Infrastructure Beyond Earth

A key aspect of the project is the development of chips designed specifically for space-based AI systems. These processors would be used in satellites and other orbital infrastructure, where environmental conditions such as temperature and radiation differ significantly from terrestrial data centers.

Musk said the space-focused chips will need to operate reliably under harsher conditions, including higher temperatures. The effort aligns with SpaceX’s broader ambitions to expand computing capabilities beyond Earth, potentially supporting AI-driven services in orbit.

The Terafab facility is expected to eventually produce chips representing one terawatt of computing capacity per year. By comparison, Musk estimated current total U.S. computing capacity at roughly half that level.

The announcement also marks a closer integration between Tesla, SpaceX, and Musk’s artificial intelligence company xAI, which recently merged with SpaceX. The collaboration suggests a coordinated strategy to build end-to-end AI infrastructure spanning hardware, software, and deployment environments.

Supply Chain Pressures and Industry Context

Musk acknowledged existing semiconductor partners, including Samsung, TSMC, and Micron, but indicated that reliance on external suppliers may not be sufficient as demand for AI chips accelerates.

The move reflects a broader trend among technology companies seeking greater control over critical components. As AI workloads grow more complex, demand for specialized chips has surged, prompting firms to invest directly in design and manufacturing capabilities.

However, building semiconductor fabrication facilities is capital-intensive and technically challenging. Projects often require years of development and face risks related to cost overruns, supply chain constraints, and technological complexity.

Musk did not provide a timeline for Terafab, and his history of ambitious announcements has included delays in past initiatives. Still, the proposal underscores the increasing importance of custom silicon in AI development.

AI & Machine Learning, Cloud & Infrastructure, News