Amazon Launches Alexa for Shopping AI Assistant Across Its Store

Amazon has launched Alexa for Shopping, a generative AI assistant that combines Alexa+, Rufus, and customer shopping history to deliver personalized product recommendations, price tracking, and automated purchasing. The assistant is available across Amazon’s app, website, and Echo Show devices.

By Samantha Reed Edited by Maria Konash Published:
Amazon launches Alexa for Shopping with AI search, price tracking, automated purchases, and personalized guides. Image: Rubaitul Azad / Unsplash

Amazon has introduced Alexa for Shopping, a new AI-powered shopping assistant designed to combine conversational AI, product expertise, and customer shopping history into a unified retail experience across Amazon’s app, website, and Echo Show devices.

The launch merges capabilities from Alexa+ and Amazon’s Rufus shopping assistant, which the company said helped more than 300 million customers research and compare products in 2025. Alexa for Shopping is now integrated directly into Amazon’s main search bar, allowing users to ask conversational questions, compare products, track orders, generate shopping guides, and automate purchases using natural language.

Amazon said the assistant continuously personalizes recommendations using browsing activity, purchase history, preferences, and conversations across Alexa-enabled devices. The company described the system as a persistent shopping layer that carries context between devices and sessions instead of resetting interactions each time a customer searches.

The assistant can create AI-generated category overviews, compare products side-by-side from search results, surface one-year price history charts, and automatically monitor products for price drops. Customers can also create “Scheduled Actions” that automate recurring shopping tasks such as replenishing household items, tracking book releases, or adding products to carts when prices reach specific targets.
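The price-triggered Scheduled Actions described above reduce to a simple rule: watch an item, fire when its price reaches the target. The sketch below is purely illustrative and is not Amazon's actual API; all names (`PriceWatch`, `check_watches`) are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PriceWatch:
    item_id: str
    target_price: float  # trigger threshold in dollars

def check_watches(watches, current_prices):
    """Return IDs of watched items whose current price is at or below the target."""
    triggered = []
    for watch in watches:
        price = current_prices.get(watch.item_id)
        if price is not None and price <= watch.target_price:
            triggered.append(watch.item_id)
    return triggered
```

In a real system a rule like this would run on each price update and hand triggered items to the cart or purchase flow.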

Amazon is also expanding agentic shopping capabilities through Shop Direct and its “Buy for Me” feature. The system can discover products from external retailers and, for eligible items, complete purchases automatically using stored payment and shipping information.

The company said Alexa for Shopping can also generate personalized shopping guides for complex purchases such as laptops, TVs, or appliances by summarizing reviews, features, pricing differences, and category insights across Amazon and the broader web.

In addition to mobile and desktop support, Amazon is bringing the full Amazon storefront experience to Echo Show devices for the first time. Customers can browse and purchase products using voice commands, touch controls, or a combination of both.

Alexa for Shopping is rolling out to all U.S. customers this week and does not require a Prime membership, Alexa app subscription, or Echo device.

Amazon Pushes AI Deeper Into Commerce Automation

The launch marks one of Amazon’s most aggressive attempts yet to transform e-commerce from search-based navigation into AI-assisted decision making and task automation.

Rather than relying on keyword searches and static filters, Alexa for Shopping is designed to function as a persistent shopping assistant that remembers preferences, previous conversations, recurring purchases, family information, and shopping behavior across Amazon’s ecosystem.

Features such as automated cart-building, conversational product research, and price-triggered purchases move Amazon closer to agentic commerce systems where AI actively manages portions of the shopping process on behalf of users.

The integration of Rufus product intelligence with Alexa+ personalization also gives Amazon a broader contextual data advantage across retail, smart home devices, and media services.

AI & Machine Learning, Consumer Tech, News

Sam Altman Says Elon Musk Abandoned OpenAI During Trial

OpenAI CEO Sam Altman testified that Elon Musk abandoned the company during a critical funding period rather than being pushed out of a nonprofit mission. The testimony came during the ongoing Musk v. Altman trial over OpenAI’s corporate structure and commercialization.

By Samantha Reed Edited by Maria Konash Published:
Sam Altman says Elon Musk abandoned OpenAI as trial over its nonprofit origins and commercialization intensifies. Image: Sandra Dempsey / Unsplash

Sam Altman testified Tuesday that Elon Musk abandoned OpenAI during a crucial period in the company’s development, rejecting Musk’s claims that OpenAI improperly transformed itself away from its nonprofit mission.

Speaking for roughly four hours in federal court in Oakland, California, Altman told jurors that Musk failed to follow through on commitments to support the company financially as OpenAI struggled to secure the computing resources needed to compete in artificial intelligence research.

“We were kind of left for dead,” Altman testified.

Musk sued OpenAI, Altman, and OpenAI President Greg Brockman in 2024, alleging the company violated its founding principles by shifting toward commercial operations and pursuing profits rather than operating solely for charitable purposes. Musk argues that the roughly $38 million he contributed to OpenAI was used for unauthorized commercial expansion.

Altman disputed that claim in court, saying he never promised Musk that OpenAI would permanently maintain a nonprofit-only structure.

Much of the trial has focused on internal negotiations in 2017 and 2018 involving Altman, Musk, Brockman, and co-founder Ilya Sutskever over how to finance increasingly expensive AI development. According to testimony, OpenAI leaders debated several possible corporate structures, including for-profit models, as they sought billions of dollars in computing and infrastructure funding.

Those discussions ultimately collapsed, and Musk left OpenAI’s board in February 2018.

Altman testified that Musk’s departure created uncertainty inside OpenAI, with employees worrying about how the organization would survive financially. He also said some researchers, dissatisfied with Musk’s management style, saw his exit as a boost to morale.

“I don’t think Mr. Musk understood how to run a good research lab,” Altman told the court.

Court filings and testimony also revealed that Musk continued corresponding with OpenAI leadership after leaving the board. In one 2018 email presented during testimony, Musk wrote that OpenAI had “0%” chance of competing with Google DeepMind without dramatically increasing resources and spending billions annually.

Altman said the message remained “burned into my memory.”

OpenAI’s Origins Face Unprecedented Scrutiny

The trial has become one of the most consequential legal disputes in the AI industry because it directly examines how OpenAI evolved from a nonprofit research lab into one of the world’s most valuable private technology companies.

Musk argues OpenAI abandoned its original public-interest mission in favor of commercial expansion tied closely to Microsoft and large-scale investor funding. OpenAI has countered that evolving its structure was necessary to finance advanced AI development and compete against heavily funded rivals.

Testimony from current and former executives has exposed years of internal disagreements over governance, funding, safety priorities, and control of increasingly powerful AI systems.

Earlier in the trial, Sutskever testified that he had previously gathered evidence alleging Altman showed a “consistent pattern of lying” before Altman’s temporary removal as CEO in 2023. The court also heard testimony about previously undisclosed discussions involving a potential merger between OpenAI and rival Anthropic after Altman’s brief ouster.

AI Governance and Corporate Control Move into Public View

Beyond the personal conflict between Musk and Altman, the case highlights broader tensions shaping the AI industry as companies balance nonprofit ideals, investor demands, infrastructure costs, and control over frontier AI systems.

The enormous computing requirements associated with advanced AI development have pushed leading labs toward increasingly commercial models and deep partnerships with cloud providers and investors. At the same time, regulators, policymakers, and courts are beginning to examine how these organizations govern technologies that could have major economic and national security implications.

The outcome of the trial could influence how future AI companies structure governance, investor oversight, and nonprofit commitments as the industry continues consolidating around a small group of heavily capitalized firms.

AI & Machine Learning, News, Regulation & Policy

Anthropic Now Beats OpenAI in Enterprise Adoption

Anthropic has surpassed OpenAI in verified business customer adoption for the first time, according to new data from fintech firm Ramp. The shift highlights Anthropic’s growing traction among enterprise and technical customers.

By Samantha Reed Edited by Maria Konash Published:
Anthropic surpasses OpenAI in verified business adoption as enterprise demand for Claude accelerates. Image: Anthropic

Anthropic has overtaken OpenAI in verified business customer adoption for the first time, according to new data from fintech platform Ramp.

Ramp’s latest AI Index, which analyzes expense data from more than 50,000 companies using its payment and finance platform, found that 34.4% of participating businesses are now paying for Anthropic services, compared with 32.3% for OpenAI. It marks the first time Anthropic has held the top position in the survey.

According to Ramp economist Ara Kharazian, Anthropic had already established a lead among highly technical industries including finance, technology, and professional services before expanding into broader enterprise categories.

The data also illustrates how rapidly Anthropic has grown over the past year. In May 2025, only 9% of surveyed businesses were paying for Anthropic products; its share has since risen by roughly 25 percentage points over a 12-month period. During the same timeframe, OpenAI’s business adoption share declined by roughly one percentage point, while overall enterprise adoption of AI products across the survey increased by 9%.
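These figures are percentage-point differences in survey share, not relative growth. A quick check of the arithmetic, using the rounded shares quoted above:

```python
# Rounded Ramp AI Index shares quoted above (% of surveyed businesses paying each vendor)
anthropic_may_2025 = 9.0
anthropic_now = 34.4
openai_now = 32.3

# Adoption changes are percentage-point differences, not relative growth
gain_points = anthropic_now - anthropic_may_2025
assert round(gain_points, 1) == 25.4  # ~25-point gain from the rounded figures
assert anthropic_now > openai_now     # first survey period with Anthropic on top
```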

Kharazian said Anthropic’s strategy of initially focusing on technical customers and developer-oriented use cases helped establish stronger traction within enterprise environments before broader expansion through products such as Claude Cowork.

The findings align with broader market signals suggesting growing enterprise adoption of Claude models. OpenRouter usage rankings, which track another segment of AI users and developers, last showed OpenAI ahead of Anthropic in December 2025.

Ramp noted that the index is not a complete representation of the overall AI market because it only reflects companies using its platform. However, the dataset remains one of the largest publicly discussed indicators of verified commercial AI spending activity.

Enterprise AI Competition Shifts Toward Deployment And Reliability

The report highlights how competition between leading AI companies is increasingly being shaped by enterprise deployment rather than consumer visibility alone.

Anthropic has spent much of the past year expanding its presence in regulated industries and enterprise workflows, particularly in finance, cybersecurity, operations, and software development. Its Claude family of models has gained traction among businesses seeking longer context handling, coding assistance, and AI agents designed for workplace tasks.

The company has also aggressively expanded enterprise infrastructure and partnerships in recent months, including new deployment initiatives, financial services AI agents, cybersecurity tools, and large-scale compute agreements aimed at supporting business demand.

Meanwhile, OpenAI continues to maintain a dominant consumer footprint through ChatGPT while simultaneously pushing deeper into enterprise deployments through consulting partnerships, deployment services, and productivity integrations.

AI & Machine Learning, Enterprise Tech, News

Google Introduces Googlebook Laptops Built Around Gemini AI

Google has unveiled Googlebook, a new laptop category combining Android and ChromeOS technologies with Gemini AI integrated throughout the system. The devices introduce features such as Magic Pointer contextual actions and AI-generated desktop widgets.

By Daniel Mercer Edited by Maria Konash Published:

Google has introduced Googlebook, a new category of laptops designed around Gemini AI and built using a combination of Android and ChromeOS technologies. The company said the devices are intended to shift laptops “from an operating system to an intelligence system,” with AI integrated directly into navigation, multitasking, and desktop interaction.

Googlebook devices run on Android 17 with a redesigned laptop-style interface while retaining integration with Google services and Chrome browsing capabilities. The company described the platform as a fusion of Android’s application ecosystem and ChromeOS infrastructure, optimized for Gemini-powered workflows and cross-device continuity.

One of the central features is Magic Pointer, a new cursor system developed with Google DeepMind. When users move the cursor over content, Gemini can suggest contextual actions automatically. For example, pointing at a date inside an email can trigger meeting creation, while selecting multiple images can generate AI-assisted visual compositions such as virtual furniture placement or outfit previews.
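Conceptually, a feature like this maps recognized content under the cursor to a set of suggested actions. How Magic Pointer actually does this is not public; the toy dispatcher below, with hypothetical action names, only illustrates the pattern of matching hovered text to contextual suggestions.

```python
import re

def suggest_actions(hovered_text):
    """Toy contextual-action dispatcher: map hovered content to action suggestions."""
    suggestions = []
    # An ISO-style date suggests creating a calendar event
    if re.search(r"\b\d{4}-\d{2}-\d{2}\b", hovered_text):
        suggestions.append("create-meeting")
    # An email address suggests composing a reply
    if re.search(r"\b[\w.+-]+@[\w-]+\.\w+\b", hovered_text):
        suggestions.append("compose-email")
    return suggestions
```

A production version would draw on richer context (app, selection type, user history) rather than regex matches, but the interaction model is the same: content under the pointer drives the action menu.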

Google is also introducing “Create your Widget,” a system that lets users generate desktop widgets through natural language prompts. Gemini can pull information from Gmail, Calendar, search, reservations, reminders, and other Google services to build personalized dashboards dynamically.

The company said Googlebook is designed to function more fluidly across phones and laptops. Features such as Quick Access allow users to browse and use files stored on Android smartphones directly from the laptop without transferring files manually. Mobile apps can also run inside the desktop environment while preserving workflow continuity.

Googlebook hardware will be manufactured through partnerships with Acer, ASUS, Dell Technologies, HP Inc., and Lenovo. Google said the devices will feature premium materials and a new “glowbar” design element intended to visually distinguish Googlebook laptops.

The first Googlebook devices are scheduled to launch this fall.

Google Pushes Gemini Beyond Apps Into Operating Systems

The launch represents one of Google’s clearest attempts so far to position Gemini not simply as an assistant, but as the core interaction layer for future computing devices.

Rather than opening separate AI applications or chat interfaces, Googlebook integrates Gemini directly into the operating system itself through cursor interactions, contextual actions, dynamic widgets, and continuous multitasking support.

The Magic Pointer feature is especially notable because it changes the cursor from a passive navigation tool into an AI-aware interaction system capable of interpreting onscreen context in real time. That approach mirrors a broader industry shift toward embedding AI directly into operating systems and interface layers rather than treating it as an isolated chatbot.

Google also appears to be using Googlebook to unify parts of Android and ChromeOS development into a more integrated AI-first platform strategy.

AI Becomes Central To Personal Computing Competition

The announcement arrives as major technology companies increasingly compete to redesign personal computing around AI-native interfaces.

Laptop and desktop operating systems are evolving from application-centric environments toward systems where AI continuously interprets user context, predicts intent, and automates actions across workflows.

Googlebook positions Google more directly against AI-integrated computing initiatives from companies including Microsoft and Apple, both of which are also embedding generative AI deeper into operating systems and productivity ecosystems.

By combining Gemini with Android’s application ecosystem and Chrome’s browser dominance, Google is attempting to create a tightly integrated AI computing environment spanning phones, laptops, cloud services, and productivity tools. Meanwhile, OpenAI is reportedly accelerating development of its own AI-focused smartphone, which analyst Ming-Chi Kuo said could enter mass production as early as 2027.

AI & Machine Learning, Consumer Tech, News

Google Explores SpaceX Deal For Orbital Data Centers

Google is reportedly in talks with SpaceX and other launch providers as it explores deploying orbital data centers under its Project Suncatcher initiative. The discussions reflect growing interest in space-based AI infrastructure and computing capacity.

By Olivia Grant Edited by Maria Konash Published:
Google explores SpaceX launch deal for orbital AI data centers as Project Suncatcher targets 2027 prototypes. Image: ActionVance / Unsplash

Google is reportedly in talks with SpaceX over a potential rocket launch agreement tied to the company’s efforts to develop orbital data centers, according to a Wall Street Journal report citing people familiar with the discussions.

The report said Google is also holding conversations with other rocket-launch providers as it evaluates infrastructure options for deploying computing systems in space. The initiative is connected to Google’s previously disclosed Project Suncatcher program, which aims to research space-based data center technology and launch two prototype satellites by early 2027.

Project Suncatcher was first revealed in November as part of Google’s long-term exploration of alternative AI infrastructure systems. The project focuses on whether orbital computing platforms could eventually help address growing energy, cooling, and land constraints associated with terrestrial AI data centers.

A partnership with SpaceX would mark another instance of Elon Musk cooperating commercially with AI rivals he has publicly criticized in the past. Musk has repeatedly attacked Google’s AI strategy while simultaneously expanding his own AI infrastructure ambitions through xAI and SpaceX.

Space-Based Computing Gains Attention In AI Industry

The idea of orbital data centers has shifted from theoretical research toward early-stage infrastructure planning as AI companies search for ways to overcome physical limitations facing existing compute expansion.

Space-based infrastructure offers several potential advantages, including access to uninterrupted solar energy, reduced land and cooling constraints, and theoretically massive long-term compute scalability if launch costs continue declining.

However, major technical challenges remain, including radiation exposure, hardware reliability, maintenance logistics, latency management, and the economics of deploying large-scale compute systems into orbit.

AI Infrastructure Race Expands Beyond Earth

Competition in artificial intelligence is expanding into infrastructure ownership and compute deployment strategy rather than focusing solely on model development.

Last week, Anthropic signed an agreement to access the full compute capacity of SpaceXAI’s Colossus 1 supercomputer facility in Memphis, adding more than 220,000 NVIDIA GPUs to support Claude training and inference workloads. The partnership also included discussions around developing multiple gigawatts of orbital compute infrastructure.

The move followed Musk’s decision to merge xAI directly into SpaceX under a new SpaceXAI structure combining AI models, compute infrastructure, and aerospace operations into a single organization. Analysts said the consolidation could give SpaceXAI a unique advantage if orbital AI infrastructure becomes commercially feasible in the coming years.

AI & Machine Learning, Cloud & Infrastructure, News

U.S. Banks Rush To Fix Vulnerabilities Found By Anthropic Mythos

Major U.S. banks are rapidly patching software vulnerabilities uncovered by Anthropic’s Mythos AI model as concerns grow over AI-driven cybersecurity risks. The system is reportedly identifying weaknesses and attack chains at speeds beyond traditional security workflows.

By Maria Konash Published:
U.S. banks speed up software patching after Anthropic’s Mythos AI uncovers widespread cybersecurity vulnerabilities. Image: David Vincent / Unsplash

Major U.S. banks are racing to patch IT system vulnerabilities identified by Anthropic’s powerful Mythos AI model, triggering urgent software upgrades and faster cybersecurity remediation processes across the banking sector.

According to sources familiar with the matter, several of the country’s largest financial institutions currently have access to Claude Mythos Preview through Anthropic’s Project Glasswing initiative. As banks analyze the findings, they are reportedly uncovering large numbers of previously low- or moderate-priority weaknesses that the AI system can chain together into higher-risk attack paths.

The vulnerabilities span both proprietary and open-source software, with older legacy systems drawing particular scrutiny because of outdated software support and slower patching cycles. Multiple sources said banks are now fixing vulnerabilities within days that previously may have remained unresolved for weeks.

The accelerated remediation effort is also creating operational pressure inside financial institutions. Sources said some banks may need to temporarily take systems offline more frequently to implement updates and security fixes, though institutions are attempting to minimize disruption for customers.

“This is a wake-up call because cyber risk is moving to machine speed, while much of bank defense still operates at human speed,” said Nitin Seth, co-founder and CEO of data and AI services firm Incedo.

Mythos has reportedly proven especially effective at identifying complex attack chains by linking together multiple seemingly minor weaknesses into broader exploitable vulnerabilities. One banking source described the system as forcing institutions into remediation timelines “never previously contemplated.”

Access to Mythos remains limited because of both safety concerns and infrastructure costs. Anthropic initially restricted availability to Project Glasswing partners and a small group of additional organizations. Banks reportedly using the system include JPMorgan Chase, Goldman Sachs, Citigroup, Bank of America, and Morgan Stanley.

AI-Driven Cybersecurity Changes Banking Operations

The rapid adoption of Mythos highlights how advanced AI systems are beginning to reshape cybersecurity operations inside highly regulated industries.

Unlike conventional vulnerability scanners, Mythos reportedly demonstrates stronger reasoning capabilities capable of connecting isolated weaknesses into realistic attack scenarios. Regulators and cybersecurity experts have increasingly warned that frontier AI systems could dramatically accelerate both cyber defense and cyber offense.
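Chaining "seemingly minor" weaknesses into an exploitable path is, at its core, a reachability search over a graph whose edges are individual vulnerabilities. Mythos's internals are not public; the breadth-first sketch below, with hypothetical node and vulnerability names, only illustrates the concept.

```python
from collections import deque

def find_attack_chain(edges, start, goal):
    """BFS over an asset/privilege graph where each edge is one low-severity weakness.

    edges: dict mapping a node to a list of (next_node, vuln_id) pairs.
    Returns the vuln IDs forming the shortest chain from start to goal, or None.
    """
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, chain = queue.popleft()
        if node == goal:
            return chain
        for nxt, vuln in edges.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, chain + [vuln]))
    return None
```

The point of the illustration: each edge alone may be low priority, but a short path from an internet-facing node to a critical system makes the whole chain high risk.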

A senior banking regulatory official told Reuters the model had proven “as powerful as anticipated,” particularly in its ability to connect vulnerabilities that human analysts might take far longer to identify.

The pressure is especially acute for banks because financial systems often rely on decades-old infrastructure, proprietary software stacks, and interconnected legacy environments that are difficult to modernize quickly without operational risk.

High Costs Create Uneven Access To Frontier Cyber AI

One major challenge for smaller banks is the cost and infrastructure required to use frontier cybersecurity models effectively.

Anthropic prices Mythos at $25 per million input tokens and $125 per million output tokens, making it significantly more expensive than its widely available Claude Opus 4.7 model. Anthropic has said it will provide $100 million in credits to Project Glasswing participants and Mythos customers to support research-preview usage.
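At those rates, costs compound quickly for large codebase scans. A quick estimate using the per-token prices stated above (the workload figures are hypothetical, chosen only for illustration):

```python
INPUT_RATE = 25 / 1_000_000    # $ per input token, as stated above
OUTPUT_RATE = 125 / 1_000_000  # $ per output token

def job_cost(input_tokens, output_tokens):
    """Dollar cost of one job at the stated Mythos rates."""
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# Hypothetical scan: 40M input tokens of code/config, 2M output tokens of findings
cost = job_cost(40_000_000, 2_000_000)
assert cost == 1250.0  # $1,000 input + $250 output
```

A single scan of that size would run about $1,250, which helps explain both the credit program and why smaller banks find sustained use difficult.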

Cybersecurity firms involved in Project Glasswing said the model requires entirely new workflows and methodologies to operate effectively. Adam Meyers of CrowdStrike said his team spent an entire weekend developing processes for using Mythos before actively searching for vulnerabilities.

Anthropic has separately attempted to broaden defensive access through Claude Security and published recommendations for organizations without direct Mythos access. The company has also expanded enterprise cybersecurity offerings through its recently announced financial services AI platform and a separate $1.5 billion AI deployment venture backed by firms including Blackstone and Goldman Sachs aimed at helping organizations operationalize Claude-based systems.

AI & Machine Learning, Cybersecurity & Privacy, Enterprise Tech, News