Google Expands Personal Intelligence Across Search and Gemini to All US Users

Google is expanding Personal Intelligence in the U.S., enabling Gemini and AI Mode in Search to deliver personalized answers by connecting data across apps like Gmail and Photos.

By Samantha Reed, edited by Maria Konash
Google expands Personal Intelligence across Search and Gemini. Image: Google

Google is expanding its Personal Intelligence capabilities across AI Mode in Search, the Gemini app, and Gemini in Chrome in the United States, aiming to deliver more personalized and context-aware AI experiences.

The feature allows users to connect data across Google services such as Gmail and Google Photos, enabling AI systems to generate responses tailored to individual preferences, history, and behavior—without requiring users to manually provide context.

More Personalized, Context-Aware Responses

With Personal Intelligence enabled, users can receive highly customized recommendations and assistance. For example, the system can suggest products based on past purchases, troubleshoot devices using purchase history, or generate travel plans based on previous trips and bookings.

Google says the goal is to move beyond generic responses and provide answers that reflect each user’s unique context, such as preferred brands, habits, and schedules.

The feature also enables more dynamic use cases, including:

  • Personalized shopping recommendations aligned with style and past purchases
  • Context-aware tech support based on owned devices
  • Travel suggestions tailored to itineraries and preferences
  • Activity recommendations based on interests and behavior

Privacy and User Control

Google emphasized that Personal Intelligence is built with privacy, transparency, and user control at its core. Users must explicitly choose to connect their data sources and can disable access at any time.

The company also stated that its AI models do not directly train on personal data from Gmail or Photos. Instead, limited information, such as prompts and responses, is used to improve system performance.

Rolling Out in the U.S.

Personal Intelligence is now available in the U.S. within AI Mode in Search and is beginning to roll out in the Gemini app and Chrome integration for free-tier users.

The feature is currently limited to personal Google accounts and is not yet available for Workspace business or enterprise users.

The expansion reflects Google’s broader push to make AI more personal, proactive, and seamlessly integrated into everyday digital workflows.


OpenAI Shifts Strategy Toward Enterprise AI and Developer Tools

OpenAI is preparing a strategic pivot toward coding and enterprise users, as leadership evaluates which projects to scale back after a period of rapid expansion.

By Samantha Reed, edited by Maria Konash
OpenAI refocuses on coding, productivity, and enterprise, scaling back side projects. Image: Gavin Phillips / Unsplash

OpenAI is finalizing plans for a major strategic shift aimed at refocusing the company on coding, productivity, and enterprise customers, according to reporting from The Wall Street Journal.

At a recent all-hands meeting, Fidji Simo, OpenAI’s head of applications, told employees that leadership, including CEO Sam Altman and chief research officer Mark Chen, is actively reviewing which initiatives to deprioritize. Staff are expected to be informed of specific changes in the coming weeks.

The move signals a transition away from a broad, multi-product expansion toward a more concentrated focus on core business areas.

From Expansion to Focus

In recent months, OpenAI has launched a wide range of initiatives, including the Sora video generation model, new commercial tools, and hardware-related efforts. While these projects expanded the company’s reach, they also introduced challenges around prioritization and resource allocation.

The new strategy aims to streamline operations, reduce fragmentation, and concentrate efforts on areas where OpenAI has the strongest traction, particularly developer tools and enterprise AI solutions.

Some features, such as video capabilities, may be integrated directly into core products rather than developed as standalone offerings.

Rising Competition and IPO Pressure

The shift comes as competition intensifies, particularly from Anthropic, which has gained momentum in the enterprise market with tools focused on automation and business workflows.

OpenAI is now looking to strengthen its position among developers and business users, key segments that drive recurring revenue and long-term adoption.

The strategic refocus may also be tied to potential IPO plans, as the company works to sharpen its value proposition and improve operational efficiency ahead of a possible public offering.

Overall, the changes mark a shift from rapid experimentation to disciplined execution, as OpenAI aims to consolidate its leadership in enterprise AI and software development.


xAI Hires Bankers and Traders to Train Grok on Financial Markets

Elon Musk’s xAI is recruiting investment bankers, traders, and crypto experts to train its Grok chatbot on financial markets as it competes for enterprise customers.

By Samantha Reed, edited by Maria Konash
xAI hires bankers, traders, and crypto experts to train Grok on financial markets, targeting enterprise AI competition. Image: Anne Nygård / Unsplash

Elon Musk’s xAI is actively recruiting bankers, traders, and private credit specialists to train its chatbot Grok on financial markets, signaling a push into high-value enterprise use cases.

Job postings show the company is seeking candidates with deep experience in equity capital markets, including IPOs, underwriting, bookbuilding, and deal structuring. The roles span investment-banking levels from analyst and associate up to vice president and director.

The company is also hiring crypto specialists to teach Grok how to analyze blockchain data, model tokenomics, and navigate volatile digital asset markets.

Competing for Enterprise AI Customers

The hiring push reflects xAI’s broader effort to compete with rivals like OpenAI and Anthropic, which have already launched products tailored for financial workflows, including tools for data analysis, reporting, and market research.

As AI companies increasingly target enterprise users, who are willing to pay for advanced capabilities, domain-specific expertise has become critical for improving model performance in specialized fields like finance.

Elon Musk recently acknowledged that xAI’s initial product was not built optimally and said the company is now “being rebuilt from the foundations up.”

Training Bottlenecks and Strategy Shift

According to reports, xAI relies on teams of human trainers to refine Grok’s responses by feeding it structured knowledge and adjusting outputs. One of the key challenges has been limited training data, as Grok has largely relied on content from Musk’s social platform X.

To address this, xAI is prioritizing the hiring of AI tutors with real-world financial expertise to expand the model’s knowledge beyond social media data.

The effort underscores a broader trend across the AI industry: combining large language models with domain experts to improve accuracy, reliability, and usefulness in professional environments such as finance.


Nvidia Projects $1 Trillion in AI Chip Demand as Jensen Huang Unveils New Systems

Nvidia CEO Jensen Huang said demand for Blackwell and Vera Rubin systems could reach $1 trillion by 2027, as the company unveiled new chips, racks, and AI infrastructure at GTC.

By Samantha Reed, edited by Maria Konash
Nvidia projects $1T AI chip demand by 2027 as Jensen Huang unveils Vera Rubin, Groq 3 LPU, and new AI infrastructure at GTC. Image: Đào Hiếu / Unsplash

At Nvidia’s annual GTC developer conference, CEO Jensen Huang said the company expects purchase orders for its Blackwell and Vera Rubin systems to reach $1 trillion by 2027, doubling earlier projections of a $500 billion opportunity.

The updated forecast reflects surging demand for AI infrastructure as companies scale from chatbot deployments to agentic AI systems, which generate significantly more compute-intensive workloads.

“If they could just get more capacity, they could generate more tokens, their revenues would go up,” Huang said during his keynote in San Jose.

Nvidia, now valued at roughly $4.5 trillion, continues to benefit from explosive demand for its GPUs. The company expects 77% year-over-year revenue growth this quarter, extending a streak of strong performance driven by AI adoption.

New Chips, Systems, and Architecture Announced

Huang introduced several new technologies, including the upcoming Vera Rubin platform, expected to launch later this year. The system, made up of 1.3 million components, is designed to deliver 10x better performance per watt compared to the previous generation, an important advancement as energy consumption becomes a key constraint in AI infrastructure.

Nvidia also unveiled the Groq 3 Language Processing Unit (LPU), part of technology acquired through a $20 billion deal. The chip is designed to enhance inference performance and will ship in the third quarter.

A new Groq LPX rack, housing 256 LPUs, will work alongside Vera Rubin systems to significantly boost efficiency. According to Huang, the setup can improve tokens-per-watt performance by up to 35x.

Looking further ahead, Nvidia previewed Kyber, its next-generation rack architecture, which integrates 144 GPUs in a vertical configuration to increase density and reduce latency. Kyber is expected to debut in Vera Rubin Ultra systems in 2027.

Focus on Agentic AI and Developer Ecosystem

Huang highlighted the rapid rise of agent-based AI systems, pointing to the growing popularity of the open-source project OpenClaw. He introduced NemoClaw, a new reference stack designed to help developers build enterprise-ready AI agents using Nvidia infrastructure.

“It finds OpenClaw, it downloads it. It builds you an AI agent,” Huang said.

The announcements underscore Nvidia’s strategy to support the full AI stack: from chips and data center systems to developer tools and agent frameworks.

Expanding Into Autonomous Systems

Beyond data centers, Nvidia continues to push into autonomous systems. Huang said ride-hailing company Uber plans to deploy fleets powered by Nvidia’s Drive AV software across 28 cities globally by 2028, starting with Los Angeles and San Francisco next year.

Automakers including Nissan, BYD, Geely, Isuzu, and Hyundai are also developing Level 4 autonomous vehicles using Nvidia’s Drive Hyperion platform, while additional partners are building autonomous buses powered by Nvidia’s AGX Thor chip.

The keynote reflects Nvidia’s growing role at the center of the AI ecosystem, as demand accelerates across enterprise software, autonomous systems, and next-generation computing infrastructure.

Nvidia CEO Jensen Huang GTC 2026 Keynote

Nvidia CEO Jensen Huang will deliver the keynote at the GTC 2026 conference, where investors expect new AI product announcements and demand outlook updates.

By Samantha Reed, edited by Maria Konash

Nvidia CEO Jensen Huang is set to deliver the keynote address at the company’s GTC 2026 conference in San Jose, California, one of the technology industry’s largest events focused on artificial intelligence.

Investors and developers are expected to watch closely for announcements related to new AI chips, data center hardware, and potential partnerships. The conference has historically served as a platform for Nvidia to unveil major updates to its computing architecture and outline demand trends for AI infrastructure.

Often described as the “Super Bowl of AI,” the event draws thousands of developers, researchers, and technology executives. Nvidia’s graphics processing units are widely used to train and run advanced AI models across cloud platforms and enterprise data centers.

Beyond technical sessions, GTC has also gained a reputation as a cultural gathering for the AI community, featuring appearances from public figures across entertainment and technology sectors alongside product demonstrations and developer workshops.


ByteDance Reportedly Delays Global Launch of Seedance 2.0 AI Video Generator

ByteDance has reportedly paused the global rollout of its Seedance 2.0 AI video generator after viral clips sparked backlash from Hollywood studios over intellectual property concerns.

By Samantha Reed, edited by Maria Konash
ByteDance delays the global launch of Seedance 2.0 after viral clips and Hollywood legal threats over IP concerns. Image: Claudio Schwarz / Unsplash

ByteDance has reportedly paused plans to release its Seedance 2.0 AI video generator globally, following legal concerns raised by Hollywood studios, according to a report from The Information.

The model debuted in China in February, where short clips generated by the system quickly spread across social media. One widely shared video depicted Tom Cruise fighting Brad Pitt, drawing attention for its realism but also triggering criticism from the film industry.

Some screenwriters and filmmakers warned that tools like Seedance could threaten creative professions, while major studios moved quickly to challenge the technology’s potential use of copyrighted characters and likenesses.

Hollywood Pushback Over Intellectual Property

According to the report, several studios sent cease-and-desist letters to ByteDance after the viral clips appeared online. Lawyers representing Disney reportedly accused the company of carrying out a “virtual smash-and-grab” of the studio’s intellectual property.

In response to the criticism, ByteDance said it would implement stronger safeguards to protect intellectual property within the system.

The company had originally planned to launch Seedance 2.0 internationally in mid-March, but the rollout has now been delayed while engineers and legal teams work to address potential compliance and copyright issues.

ByteDance has not publicly confirmed the delay and did not immediately respond to requests for comment.

The situation highlights the growing tension between generative AI companies and the entertainment industry, as tools capable of producing realistic video content raise new questions about copyright, likeness rights, and creative ownership.