Gartner: AI, Emotion Analytics, and Digital Twins Drive Sales Transformation

Gartner identifies AI agents, emotion AI, and digital twins as key drivers of modern sales transformation. These technologies are reshaping how organizations engage and understand customers.

By Maria Konash

Chief sales officers (CSOs) are increasingly embracing AI and advanced technologies to transform traditional sales practices, according to Gartner, Inc. The firm’s 2025 Hype Cycle for Sales Transformation highlights innovations enabling sales organizations to adapt to evolving buyer behaviors and shifting purchasing trends. Gartner Hype Cycles track technology maturity and adoption, providing guidance on potential business impact and deployment strategies.

AI agents for sales, currently at the Peak of Inflated Expectations, are autonomous or semi-autonomous systems designed to perceive, decide, and act within digital sales environments. Powered by large language models (LLMs), these agents go beyond traditional AI assistants by autonomously planning and executing tasks with human-like reasoning. Organizations view them as proactive contributors to revenue generation. Major platform vendors and investors have accelerated development since 2024, though many anticipated capabilities still exceed current technological limits.

Emotion AI, positioned in the Trough of Disillusionment, uses AI to assess emotional states through computer vision, voice analysis, sensors, and software logic. The technology allows for personalized responses based on a customer’s mood. Adoption, however, is constrained by privacy concerns and regulatory barriers, such as the EU AI Act restrictions on computer vision-based emotion detection in education. Initial excitement has given way to caution as enterprises encounter implementation challenges.

Digital twins of a customer (DToCs) are in the Innovation Trigger phase. These virtual representations simulate, emulate, and anticipate customer behavior, enabling organizations to enhance data insights, personalize experiences, and improve strategic decision-making. While early use cases focus on monitoring product performance and optimizing customer interactions, adoption is still limited. Real-world deployment requires significant customization, reflecting the technology’s nascent stage.

Shayne Jackson, VP Analyst in Gartner’s Sales Practice, emphasized the urgency of integrating AI and digital capabilities into sales operations. “Sales transformation seems to be a never-ending state for today’s CSOs. The integration of AI, digital integrations, and intelligence, and new modes of operational adaptability can no longer be delayed,” Jackson said.

As these technologies evolve, CSOs are expected to continue leveraging AI agents, emotion analytics, and digital twins to increase efficiency, deepen customer understanding, and drive revenue growth. While adoption stages vary, Gartner predicts these innovations will play a central role in shaping the future of sales transformation.


OpenAI Prepares for Landmark IPO Targeting up to $1 Trillion Valuation

OpenAI is laying the groundwork for an initial public offering that could value the company at up to $1 trillion, positioning it as one of the most anticipated tech listings in history.

By Maria Konash
ChatGPT and OpenAI have become global symbols of the AI boom — their rapid growth and soaring valuation now paving the way for one of the biggest IPOs in tech history. Photo: Levart_Photographer / Unsplash

OpenAI is preparing for what could become one of the largest initial public offerings in history, exploring a listing that could value the company at up to $1 trillion, according to people familiar with the discussions. The move would position the ChatGPT maker alongside Apple, Microsoft, and Nvidia as one of the most valuable publicly traded firms in the world.

Sources told Reuters and The Wall Street Journal that OpenAI has begun internal planning for a potential IPO filing as soon as late 2026, though Chief Financial Officer Sarah Friar has reportedly discussed 2027 as a more realistic timeline. The company is weighing an initial raise of around $60 billion, but advisors say that number could climb significantly depending on market conditions and revenue performance.

“An IPO is not our focus, so we could not possibly have set a date,” an OpenAI spokesperson said. “We are building a durable business and advancing our mission so everyone benefits from AGI.” Still, the company’s evolving structure and rising financial momentum suggest that going public is no longer a distant scenario.

OpenAI’s annualized revenue run rate is projected to reach $20 billion by the end of 2025, a tenfold increase from just two years ago. That growth has been driven by widespread adoption of ChatGPT, API licensing, and enterprise partnerships for custom AI models. Yet insiders say that losses have also grown substantially, as the company pours resources into data centers, model training, and chip procurement.

During a livestream earlier this week, CEO Sam Altman addressed investor speculation directly. “I think it’s fair to say it is the most likely path for us, given the capital needs that we’ll have,” he said, referring to the company’s expanding infrastructure investments.

The prospect of a trillion-dollar IPO comes after a major corporate restructuring that streamlined OpenAI’s hybrid governance model. The nonprofit entity – now named the OpenAI Foundation – retains a 26% stake in OpenAI Group and holds warrants for additional shares tied to performance milestones. This shift reduces the company’s dependence on Microsoft while preserving its nonprofit oversight principles.

A successful offering would mark a windfall for major investors including Microsoft, which holds about 27% of OpenAI, along with SoftBank, Thrive Capital, and Abu Dhabi’s MGX. Microsoft has invested over $13 billion in the company since 2019, and the partnership continues to underpin OpenAI’s infrastructure and Azure-based computing backbone.

The IPO would also enable OpenAI to raise capital more efficiently, pursue strategic acquisitions, and finance Altman’s long-term goal of building AI supercomputing capacity on a multi-trillion-dollar scale. Insiders say the company is already exploring new chip development initiatives and expanding collaborations with Broadcom and AMD to support its exponential computing needs.

The timing coincides with record investor enthusiasm for AI. CoreWeave, another AI infrastructure company, went public earlier this year at a $23 billion valuation and has since tripled in market value. Meanwhile, Nvidia became the world’s first $5 trillion company this week, cementing its dominance in AI hardware.

If OpenAI proceeds with a 2026–2027 listing, it could set a new benchmark for the public markets – both in scale and in the strategic significance of artificial intelligence. For investors, it would represent the ultimate bet on the technology driving the next decade of global innovation.

Vention Unveils AI Operator and Developer Toolkit to Accelerate Intelligent Manufacturing

Vention expands its AI-powered full-stack automation platform with AI Operator, a new Developer Toolkit, and advanced simulation features, enabling manufacturers to achieve faster, smarter, and safer automation from design to deployment.

By Maria Konash

Vention, the creator of the world’s only AI-powered full-stack software and hardware automation platform, today unveiled a new suite of tools and features at its 6th annual Demo Day, advancing its Zero-Shot Automation™ vision. These updates allow manufacturers to implement automation faster, without complex programming or traditional hardware integration, while enabling developers and roboticists to innovate more efficiently.

Since its first Demo Day in 2020, Vention has evolved from a rapid machine-design platform into a fully software-defined automation ecosystem, now supporting over 25,000 machines in 4,000 factories worldwide. The platform is trusted for custom projects and turnkey applications like palletizing, welding, and machine tending, helping companies accelerate production and achieve measurable ROI.

“Our mission is to make industrial automation accessible and intelligent for everyone,” said Etienne Lacroix, Founder and CEO of Vention. “Zero-Shot Automation allows AI to operate seamlessly from cloud to edge, creating faster, smarter, and safer manufacturing processes.”

AI Operator Brings Advanced AI to the Factory Floor

The global rollout of AI Operator expands Vention’s automation capabilities, enabling AI-driven solutions for complex, unstructured tasks such as bin picking. Built on the MachineMotion AI controller and leveraging NVIDIA Isaac libraries and models, AI Operator delivers real-time perception, motion planning, and collision-free operations directly at the edge, reducing development time and increasing operational intelligence.

New Developer Toolkit and Simulation Features Empower Teams

  • Developer Toolkit: Offers a CLI, project templates, and prebuilt libraries for device communication, state machines, data storage, and operator HMIs, letting developers work locally or in the cloud.
  • Simulation Checker: Provides realistic factory-floor simulations with accurate physics and collision behavior, ensuring designs are validated before deployment.
  • RemoteView: Captures complete operational histories with alerts and status updates, improving safety, troubleshooting, and process optimization.
  • Vention Projects: Centralizes planning and collaboration with the industry’s largest library of machine specifications, streamlining workflows and reducing errors.

NVIDIA Highlights AI’s Role in Agile Manufacturing

Amit Goel, Head of Robotics & Edge Computing at NVIDIA, highlighted the increasing role of GPU-accelerated AI in flexible manufacturing systems. Customers such as Cripps & Sons Woodworking, McAlpine & Co., and Solestial, Inc., shared how Vention’s AI-powered automation shortened design-to-deployment timelines and improved factory efficiency.

With these innovations, Vention reinforces its position as the leading AI-powered automation platform, helping manufacturers worldwide achieve faster, smarter, and safer operations.


IBM Unveils AI Model Built for Secure Defense and National Security Operations

IBM introduces the IBM Defense Model, a specialized AI solution co-developed with Janes, designed to deliver accurate, mission-critical intelligence in classified, edge, and air-gapped environments.

By Maria Konash

IBM (NYSE: IBM) today announced the general availability of the IBM Defense Model, a purpose-built artificial intelligence solution engineered for defense and national security applications. Developed in collaboration with Janes, a leading open-source defense intelligence provider, the model merges IBM’s enterprise-grade AI with Janes’ specialized datasets to empower defense agencies with faster, more precise, and actionable intelligence.

Unlike standard large language models, the IBM Defense Model is optimized for military and government tasks, enabling deployment in air-gapped, classified, and edge environments. Built on IBM’s Granite foundation models and accessible via watsonx.ai, it supports mission-critical workflows including strategic planning, analyst reporting, wargaming, and simulation, delivering reliable insights without compromising security.

“Defense organizations need AI they can trust—tools that provide accurate intelligence while upholding ethical and security standards,” said Vanessa Hunt, General Manager, Technology, U.S. Federal Market at IBM. “The IBM Defense Model equips agencies to enhance operational readiness and make informed decisions in high-stakes environments.”

Highlights of the IBM Defense Model:

  • Domain-Specific Intelligence: Trained on military doctrine and enriched with Janes’ authoritative datasets, the model interprets real-time intelligence like an expert analyst, minimizing hallucinations and ensuring relevance.
  • Enterprise-Grade Foundation: Powered by ISO 42001-certified Granite models, offering governed, transparent, and trustworthy AI performance.
  • Secure Deployment Options: Fully compatible with classified networks, air-gapped environments, and edge computing scenarios.
  • Continuous Updates: Integrates Janes’ live intelligence feeds to provide up-to-date operational insights.
  • Mission-Focused Use Cases: Supports defense planning, reporting, document enrichment, simulations, and wargaming exercises.

“By combining Janes’ trusted intelligence with IBM’s AI technology, this model provides timely, actionable insights in secure environments,” said Blake Bartlett, CEO of Janes. “It enables defense agencies to confidently act on accurate information, even in highly regulated or sensitive contexts.”

The IBM Defense Model reflects IBM’s growing focus on fit-for-purpose AI solutions, offering specialized, secure, and practical intelligence tools for the defense sector.


Cisco Becomes First to Offer NVIDIA Cloud Partner-Compliant Reference Architecture with N9100 Switches

Cisco and NVIDIA unveil industry-first solutions for AI-ready data centers, including the N9100 series switch and Secure AI Factory enhancements, empowering enterprises, neocloud, and telecom customers with unmatched flexibility, performance, and security.

By Maria Konash
Photo: Cisco

Cisco (NASDAQ: CSCO) today announced a major leap forward in AI infrastructure, unveiling the Cisco N9100 series switch and the first NVIDIA Cloud Partner-compliant reference architecture for neocloud and sovereign cloud deployments. The announcements, made at GTC, highlight Cisco’s partnership with NVIDIA to provide enterprises, service providers, and telecom operators with flexible, scalable, and secure AI-ready solutions.

The Cisco N9100 series switch, powered by NVIDIA Spectrum-X Ethernet silicon, offers customers the choice of NX-OS or SONiC operating systems, supporting high-performance AI networking with unprecedented flexibility. Alongside Cisco's Silicon One-based switches and embedded NVIDIA capabilities, the N9100 enables a unified operating model and full compatibility with NVIDIA Cloud Partner design principles.

“We’re at the beginning of the largest data center build-out in history,” said Jeetu Patel, President and Chief Product Officer at Cisco. “The infrastructure powering next-generation AI applications requires new architectures that overcome today’s limitations in power, compute, and network performance. Together with NVIDIA, we’re defining the technologies for AI-ready data centers across enterprises, neoclouds, and global service providers.”

Gilad Shainer, SVP of Networking at NVIDIA, added, “Spectrum-X Ethernet delivers the performance required for accelerated AI networking. Cisco’s reference architectures enable customers to deploy open, high-performance networks tailored to AI workloads using Cisco N9100 and Silicon One-based switches.”

Cisco Cloud Reference Architecture: Flexible AI Infrastructure

Cisco’s new Cloud Reference Architecture is designed for neocloud and sovereign cloud customers, integrating Cisco Silicon One and Cloud-scale ASICs, NVIDIA BlueField-4 DPUs, and NVIDIA ConnectX-9 SuperNICs. Enterprises can deploy Cisco AI PODs and Nexus switching to optimize GPU-based AI workloads, including generative AI fine-tuning and inference.

The Secure AI Factory with NVIDIA strengthens security and observability for enterprise AI deployments. Cisco AI Defense integrates with NVIDIA NeMo Guardrails to protect sensitive AI data, while Splunk Observability Cloud provides real-time monitoring of AI workloads. Kubernetes-based solutions, including Cisco Isovalent and Nutanix platforms, enable seamless orchestration and containerized inference services.

Pioneering AI-Native Wireless Networks

Cisco, NVIDIA, and telecom partners also introduced the first AI-native wireless stack for 6G networks, enabling AI-infused connectivity from 5G advanced services to pre-6G applications. The stack integrates Cisco’s 5G core and user plane functions with NVIDIA AI Aerial platforms, creating a foundation for efficient, secure, and AI-driven mobile networks.

Driving AI Adoption Across Industries

Through these innovations, Cisco and NVIDIA provide a comprehensive AI ecosystem spanning neoclouds, enterprises, and telecom providers. With flexible switching solutions, secure AI infrastructure, and next-generation wireless capabilities, organizations can efficiently scale AI while maintaining control, performance, and security.

PayPal Becomes First Digital Wallet Integrated Into ChatGPT for AI-Driven Shopping

PayPal has signed an exclusive deal with OpenAI to integrate its wallet into ChatGPT, allowing users to make secure purchases directly within the AI platform starting next year.

By Maria Konash
PayPal partners with OpenAI to embed payments in ChatGPT, enabling users and merchants to buy and sell through the AI platform's 700 million weekly users. Photo: Brett Jordan / Pexels

PayPal has struck a landmark deal with OpenAI to become the first digital wallet embedded directly within ChatGPT, enabling users to make purchases and merchants to sell products through the world’s leading consumer AI platform. The agreement, finalized over the weekend, marks a major step toward AI-driven e-commerce and what executives are calling the beginning of “agentic shopping.”

“Hundreds of millions of people turn to ChatGPT each week for help with everyday tasks, including finding products they love, and over 400 million use PayPal to shop,” said Alex Chriss, President and CEO of PayPal. “By partnering with OpenAI and adopting the Agentic Commerce Protocol, PayPal will power payments and commerce experiences that help people go from chat to checkout in just a few taps for our joint customer bases.”

Starting next year, both sides of PayPal’s ecosystem – consumers and merchants – will be able to transact through ChatGPT. Users can search for items using conversational prompts and complete purchases via a “Buy with PayPal” button without leaving the chat.

Merchants in PayPal’s network will have their inventories integrated into ChatGPT’s marketplace, giving them access to the platform’s more than 700 million weekly users.

“We’ve got hundreds of millions of loyal PayPal wallet holders who now will be able to click the ‘Buy with PayPal’ button on ChatGPT and have a safe and secure checkout experience,” said Alex Chriss in an interview with CNBC.

The partnership represents one of the first large-scale integrations between a major fintech platform and a conversational AI system. It builds on OpenAI’s recent push to transform ChatGPT into a digital assistant capable of performing complex actions, from booking hotels to managing personal finances.

“It’s a whole new paradigm for shopping,” Chriss said. “It’s hard to imagine that agentic commerce isn’t going to be a big part of the future.”

OpenAI’s recent e-commerce integrations include partnerships with Shopify, Etsy, and Walmart, allowing ChatGPT users to browse, compare, and now purchase goods directly through the app. PayPal’s addition completes a critical layer — secure payments and merchant verification — that brings the AI shopping experience closer to a functioning marketplace.

Under the deal, PayPal will handle merchant routing, payment validation, and other back-end functions, so individual sellers don’t need to contract directly with OpenAI. The system will ensure that both merchants and buyers are verified, mitigating fraud and improving transaction transparency. Customers can pay with linked bank accounts, credit cards, or PayPal balances, while receiving full purchase protections, tracking, and dispute resolution.
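The division of labor described above — the AI platform surfacing products while the wallet provider verifies both parties and routes the payment — can be sketched in a few lines of Python. Every name and field below is a hypothetical illustration of that flow, not the actual Agentic Commerce Protocol schema or PayPal's API:

```python
from dataclasses import dataclass

# Hypothetical model of the described checkout flow: the AI assistant
# submits a request, and the wallet side verifies both parties before
# routing the payment. All names here are illustrative, not real APIs.

@dataclass
class CheckoutRequest:
    buyer_id: str
    merchant_id: str
    amount_cents: int
    payment_method: str  # e.g. "balance", "card", "bank"

def process_checkout(req: CheckoutRequest,
                     verified_buyers: set[str],
                     verified_merchants: set[str]) -> str:
    """Wallet-side handling: verify both parties, then route the payment."""
    if req.buyer_id not in verified_buyers:
        return "rejected: unverified buyer"
    if req.merchant_id not in verified_merchants:
        return "rejected: unverified merchant"
    # Real routing, validation, and dispute protections would happen here.
    return f"approved: {req.amount_cents} cents via {req.payment_method}"

# Example: a verified buyer purchasing from a verified merchant.
result = process_checkout(
    CheckoutRequest("buyer-1", "merchant-1", 2599, "card"),
    verified_buyers={"buyer-1"},
    verified_merchants={"merchant-1"},
)
print(result)
```

The point of the sketch is the gatekeeping order: verification of both sides precedes any payment routing, which is what lets individual sellers transact without contracting directly with OpenAI.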

Chriss emphasized that this integration goes beyond payments. “It’s not just that a transaction can happen,” he said. “It’s that this is a trusted set of merchants — the largest merchant network in the world from PayPal – that are verified, with the largest set of verified consumers in a consumer wallet.”

The company also revealed it is expanding the use of OpenAI’s enterprise AI tools internally to accelerate product development and enhance customer service operations. This includes using AI models for transaction analysis, fraud detection, and personalized merchant support.

Analysts view the partnership as a breakthrough moment for AI-driven commerce, one that could reshape how users discover and buy products online. With ChatGPT evolving into a conversational hub for search, recommendations, and transactions, PayPal’s integration gives OpenAI a built-in, secure payments layer – while positioning PayPal as the financial backbone of agentic AI.

The move continues PayPal’s broader strategy to align with AI’s future, following recent collaborations with Google and Perplexity AI. Together, these partnerships aim to anchor the company firmly in the next generation of digital commerce – one where consumers will simply tell their AI what they want, and the transaction will follow seamlessly.

Cathie Wood Predicts Humanoid Robots Will Be the Largest AI Opportunity Yet

Ark Invest CEO Cathie Wood says humanoid robots could become the most transformative opportunity in artificial intelligence, eclipsing sectors like autonomous transport and healthcare.

By Maria Konash
Cathie Wood, CEO of Ark Invest, emphasizes her vision that AI-driven humanoid robots will redefine productivity, industry, and the global economy. Photo: ARK Invest / X

Cathie Wood, founder and CEO of Ark Invest, believes the next great wave of artificial intelligence will not live in code or the cloud — but in physical machines that look and move like humans.

Speaking to CNBC at the Future Investment Initiative conference in Riyadh, Saudi Arabia, Wood said humanoid robots – machines built to mirror human size, shape, and movement – could represent the single largest opportunity in the entire AI landscape.

“I know a lot of people are worried about all the ‘AI hype,’” Wood said. “But as we’re looking to the future, especially with embodied AI, which is all about robotaxis and transforming the world of transportation completely – and then healthcare, which is probably one of the most profound applications of AI, we think that this investment will pay off.”

She added, “I think the chaser is going to be humanoid robots. And I think that is going to be the biggest of all the embodied AI opportunities.”

Wood’s comments come amid a surge of investor and corporate interest in embodied AI — systems that integrate artificial intelligence into physical forms capable of movement and interaction. Major companies including Tesla, Figure AI, and Agility Robotics are racing to build humanoid platforms for manufacturing, logistics, and personal assistance.

Tesla CEO Elon Musk recently said that his company’s Optimus robot could eventually account for 80 percent of Tesla’s total value, underscoring how central embodied AI could become to the broader economy.

Wood’s remarks also reflect her firm’s investment philosophy. The ARK Artificial Intelligence & Robotics UCITS ETF, one of her flagship funds, holds significant stakes in Tesla (9.16%), Palantir (7.02%), and AMD (6.14%) – companies she views as foundational to the AI and robotics ecosystem.

Beyond robotics, Wood emphasized the massive productivity gains that AI could deliver in both enterprise and consumer markets. “It is going to take a while for large corporations to prepare themselves to transform,” she said, noting that companies like Palantir will be critical in restructuring enterprises to fully capture AI’s potential.

In the consumer realm, she said, the adoption curve is moving much faster. “The consumer loves all of this,” Wood added. “We’re all looking forward to our personal assistants doing our shopping for us. I’m really excited about how much my productivity as an individual is going to increase with AI. It already has in terms of research.”

Still, Wood cautioned that the market may experience a short-term “reality check” as AI valuations adjust, even as she maintained that elevated Big Tech prices make sense over a five-year horizon.

For Wood, the long-term picture remains clear: the physical embodiment of AI – from humanoid robots to autonomous systems – will be the next trillion-dollar frontier in technology.

Elon Musk Launches Grokipedia, an AI-Powered Rival to Wikipedia

Elon Musk has unveiled Grokipedia – an AI-driven encyclopedia built on Grok that aims to offer factual, bias-free knowledge as an alternative to Wikipedia.

By Maria Konash
Elon Musk introduces Grokipedia - an AI-powered encyclopedia created to provide factual, unbiased information without human editorial control. Photo: Maria Konash / AIstify

Elon Musk has launched Grokipedia, a new AI-powered encyclopedia built on his company’s Grok language model, positioning it as a direct alternative to Wikipedia. The platform, now live with more than 885,000 articles, promises “objective, bias-free information” curated entirely by artificial intelligence.

According to Musk, the purpose of Grokipedia is to provide “an objective presentation of facts without ideological distortion.” The entrepreneur has long criticized Wikipedia for what he calls political and cultural bias, and Grokipedia appears to be his answer — a system where algorithms, not editors, decide what qualifies as factual.

Unlike Wikipedia, which relies on a global network of human contributors and consensus-based moderation, Grokipedia’s structure is fully automated. Users can submit suggestions and corrections, but the final decisions are made by the Grok AI system, not by a community of editors. The platform is open and free to use, designed to evolve through data-driven updates rather than editorial debates.

Early descriptions suggest that Grokipedia integrates Grok’s conversational capabilities, allowing users to query topics in natural language and receive dynamically generated summaries based on the AI’s understanding of verified sources. Over time, the system is expected to refine its knowledge base through continuous learning and contextual updates.

The launch underscores Musk’s ongoing campaign to challenge established digital platforms with AI-driven alternatives — from X (formerly Twitter) to xAI, his artificial intelligence company. Grokipedia, like Grok itself, aligns with his broader goal of building AI tools that reflect what he calls “truth-seeking intelligence,” in contrast to systems shaped by human bias or collective editing.

By positioning Grokipedia as a counterpoint to Wikipedia, Musk is inviting a philosophical debate about the future of knowledge: should the authority on facts belong to human consensus, or to machine reasoning?

As AI systems increasingly take part in content creation and curation, Grokipedia could become a test case for how artificial intelligence redefines trust, objectivity, and the nature of information itself.

Polish Emerges as the Most Efficient Language for AI, Surpassing English

A study by the University of Maryland and Microsoft found that Polish outperforms English and other major languages in long-context AI prompts, surprising researchers.

By Samantha Reed
Researchers found that Polish delivers the highest AI efficiency in long-context prompts, challenging assumptions about English-language dominance in AI.

Polish, often considered one of the world’s most complex languages, has unexpectedly emerged as the most efficient for artificial intelligence models, outperforming English and several other global languages.

According to a joint study by the University of Maryland and Microsoft, Polish ranked first in long-context prompt performance across 26 languages, achieving an 88 percent efficiency score. By comparison, English – the dominant language for AI research and training – placed only sixth.

The research, titled “One Ruler to Measure Them All: Benchmarking Multilingual Long-Context Language Models,” evaluated how major AI systems like OpenAI’s o3-mini-high, Google’s Gemini 1.5 Flash, and Meta’s Llama 3.3 (70B) handle prompts extending up to 128,000 tokens.

Despite English being the primary training language for most AI models, it lagged behind several European languages. The top rankings were as follows:

  • Polish — 88%
  • French — 87%
  • Italian — 86%
  • Spanish — 85%
  • Russian — 84%
  • English — 83.9%
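The reported scores can be treated as a small dataset; a minimal sketch that reproduces the ranking above (scores transcribed from this article's list, not from the full 26-language benchmark):

```python
# Long-context efficiency scores by language, as reported in the article
# (an illustrative six-language excerpt, not the full benchmark data).
scores = {
    "Polish": 88.0,
    "French": 87.0,
    "Italian": 86.0,
    "Spanish": 85.0,
    "Russian": 84.0,
    "English": 83.9,
}

# Rank languages from highest to lowest efficiency score.
ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for place, (language, score) in enumerate(ranking, start=1):
    print(f"{place}. {language}: {score}%")
```

Sorting by score rather than by training-data volume is exactly the inversion the study highlights: English, the most resource-rich language, lands last among these six.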

Researchers attribute Polish’s success to its grammatical richness and precise syntactic structure, which may help AI systems interpret meaning more efficiently in extended prompts. French and Italian followed closely, suggesting that linguistic complexity and inflection play key roles in enhancing AI comprehension.

The findings challenge the long-standing assumption that English, as the most resource-rich training language, automatically yields the best AI performance. Instead, languages with detailed morphology and logical structure — like Polish and other Slavic or Romance languages — may enable models to track relationships between words more effectively over long sequences.

According to the authors, these results could have far-reaching implications for multilingual AI development and model training optimization. They suggest that linguistic diversity is not a barrier but a potential advantage — revealing how smaller language communities can drive innovation in large language models (LLMs).

The results have also energized Poland’s growing AI ecosystem. Analysts note that the study could attract new research investment and data initiatives in the region, positioning Poland as a leading center for advanced natural language processing in Europe.

As AI becomes increasingly global, the Polish case highlights a broader truth: the quality of language structure — not just the quantity of training data — may define the next wave of breakthroughs in artificial intelligence.


OpenAI Unveils ChatGPT Atlas, an AI-Powered Browser Redefining How We Use the Web

OpenAI has unveiled ChatGPT Atlas, a new web browser with ChatGPT integrated directly into its interface, blending AI assistance, search, and memory into everyday browsing.

By Samantha Reed

OpenAI has launched ChatGPT Atlas, a new web browser designed around its flagship AI assistant, bringing ChatGPT directly into users’ everyday browsing experience. Available first on macOS, Atlas reimagines how people search, work, and navigate online by integrating AI-driven context, automation, and personalization directly into the browser.

Described as “the browser with ChatGPT built in,” Atlas turns the web into an interactive workspace where ChatGPT can analyze content, understand user intent, and complete actions without requiring copy-paste or switching between tabs. The assistant can help summarize documents, research topics, or even plan trips — all from within the same window.

According to OpenAI, Atlas was designed as a “super-assistant” that understands what users are trying to do and can act on that context in real time. It includes Agent Mode, which allows ChatGPT to perform tasks such as booking appointments, generating reports, or analyzing data while browsing. Agent Mode launches in preview for Plus, Pro, and Business users, with Enterprise and Education access expected soon.

The browser also supports browser memories, letting ChatGPT remember context from visited sites to provide continuity across sessions. For instance, users can ask ChatGPT to “find all the job listings I viewed last week” or “summarize the research papers I opened yesterday.” These memories are private, optional, and fully controllable — users can view, archive, or delete them anytime.
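Conceptually, a feature like browser memories behaves as a small, user-controlled store of page summaries that can be searched, archived, or wiped. The sketch below is purely illustrative: the class and method names are invented for this article, not OpenAI's actual API, but it shows the view/search/archive/delete lifecycle the announcement describes.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class Memory:
    url: str
    summary: str
    created: date
    archived: bool = False


class BrowserMemoryStore:
    """Toy model of an opt-in, user-controllable memory store."""

    def __init__(self):
        self._memories: list[Memory] = []

    def remember(self, url: str, summary: str) -> None:
        self._memories.append(Memory(url, summary, date.today()))

    def search(self, keyword: str) -> list[Memory]:
        # Only active (non-archived) memories are surfaced to the user.
        kw = keyword.lower()
        return [m for m in self._memories
                if not m.archived and kw in m.summary.lower()]

    def archive(self, url: str) -> None:
        for m in self._memories:
            if m.url == url:
                m.archived = True

    def delete_all(self) -> None:
        # "Fully controllable": the user can wipe everything at any time.
        self._memories.clear()


store = BrowserMemoryStore()
store.remember("https://example.com/jobs/123", "Backend engineer job listing")
store.remember("https://example.com/paper", "Research paper on LLM training")
print(len(store.search("job listing")))  # 1
```

A query like "find all the job listings I viewed last week" would map onto a search over such a store, filtered by the stored dates.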

In addition to contextual awareness, Atlas introduces advanced privacy controls, including incognito browsing and granular permission settings. Users can toggle ChatGPT’s page visibility directly from the address bar, ensuring AI access only when explicitly allowed. OpenAI clarified that browsing data is not used for model training by default, though users can opt in through data control settings.

The integration of search, chat, and automation blurs the line between browser and assistant. Instead of juggling multiple tools, users can ask ChatGPT to conduct research, manage documents, or automate workflows directly in the browsing window. As OpenAI puts it, Atlas “brings ChatGPT to the place where all your work, tools, and context already live.”

Early testers describe the experience as transformative. One user, a college student named Yogya Kalra, said Atlas helps her study more efficiently: “I used to switch between my slides and ChatGPT, taking screenshots to ask questions. Now ChatGPT instantly understands what I’m looking at and helps me learn as I go.”

ChatGPT Atlas launches globally for Free, Plus, Pro, and Go users, with Windows, iOS, and Android versions coming soon. For OpenAI, it’s a significant step toward embedding AI deeper into everyday computing — transforming the browser from a passive interface into an intelligent, collaborative tool.

AI & Machine Learning, News

Revolut Acquires AI Travel Agent Start-up Swifty to Deepen Lifestyle Ecosystem

Neobank Revolut has acquired Berlin-based AI travel agent start-up Swifty, integrating its conversational booking technology into Revolut’s loyalty and lifestyle platform for over 65 million users.

By Laura Bennett
Revolut’s acquisition of Berlin-based AI travel startup Swifty marks its latest move beyond banking, bringing intelligent travel planning and loyalty integration to millions of users worldwide. Photo: Sophie Dupau / Unsplash

Revolut has taken a bold step into the lifestyle domain by acquiring Berlin-based AI travel start-up Swifty, originally incubated at the Lufthansa Innovation Hub, according to TravelCapybara. The deal brings Swifty’s proprietary conversational booking engine and its founding team of Stanislav Bondarenko and Tomasz Przedmojski into Revolut’s loyalty and AI team.

Swifty’s platform can autonomously handle the full travel booking lifecycle – flights, hotels, payments and receipts – via a simple chat interface. As Revolut repositions itself around financial services plus travel, the company says the integration will “drive even more personalised and seamless experiences” for its global customer base.

The acquisition aligns with Revolut’s shift from a digital bank to a broader lifestyle app: it currently serves more than 65 million users worldwide and has been expanding its “Ultra” and “Metal” tiers with perks beyond finance. As Revolut’s Head of Loyalty Christopher Guttridge put it: “This acquisition strengthens our position at the intersection of finance, AI and lifestyle.”

Swifty co-founders echoed that sentiment: “Joining forces with one of the world’s leading fintechs is a once-in-a-lifetime opportunity to scale our vision globally and enhance the lifestyle of over 65 million customers,” they said.

Integrating Swifty gives Revolut access to AI-agent technology engineered for travel – and by extension, lifestyle – automation. The technology offers a potential one-stop interface combining financial planning, savings suggestions and now travel arrangements, deeply tied to user context and spending. This convergence is increasingly important as fintechs race to embed AI assistants that act beyond transactions.

For Revolut, Swifty’s arrival complements an existing AI-financial assistant roadmap. It opens opportunities to cross-sell travel services, loyalty rewards and premium tiers. Moreover, by owning the underlying bot and algorithm, Revolut gains control of customer experience across both finance and lifestyle domains.

Industry observers see this move as an acknowledgment that fintech growth is not just about payments—it’s about context, habits and services around consumers’ lives. With Swifty’s tech onboard, Revolut can engage users not simply when they’re spending money, but anytime they’re planning, travelling or living.

However, execution remains key. Scaling a travel-booking engine globally amid regulatory and logistical complexity is challenging. Revolut must integrate Swifty’s product into its existing app, monetize travel services and maintain user trust while extending AI automation.

Overall, the acquisition of Swifty reflects Revolut’s evolution from a challenger bank into a “super-app” that converges finance, AI and travel. In doing so, the company is betting that the next wave of user engagement will come not from better card features, but from AI-powered experiences that anticipate and act on life’s routines.

AI & Machine Learning, News, Startups & Investment

Andrew Tulloch Leaves $12B AI Startup to Join Meta After Turning Down $1.5B Offer

Andrew Tulloch, co-founder of the $12 billion AI startup Thinking Machines Lab, has joined Meta after previously rejecting what reports described as a $1.5 billion offer — a figure Meta has since called ‘inaccurate and ridiculous.’

By Samantha Reed
Meta CEO Mark Zuckerberg, driven by his vision to build a world-class AI team at Meta, is determined to secure the industry’s brightest minds, no matter the cost. Photo: Mark Zuckerberg / Facebook

Andrew Tulloch, one of Silicon Valley’s most sought-after AI engineers, has left Thinking Machines Lab, the $12 billion startup he co-founded with former OpenAI CTO Mira Murati, to join Meta. The move follows months of speculation surrounding Tulloch’s next step – and comes after he reportedly declined a Meta offer earlier this year worth up to $1.5 billion over six years.

Tulloch’s departure marks a significant moment in the escalating talent wars among major AI players. According to sources familiar with the matter, his decision to leave Thinking Machines Lab was made for “personal reasons,” though neither company disclosed the terms of his new role. Meta confirmed the hire but declined to comment on compensation or responsibilities.

Tulloch began his career at Goldman Sachs before transitioning into the tech sector. In 2012, he joined Facebook (now Meta), where he spent more than a decade building machine learning and large-scale infrastructure systems. His work contributed to early versions of Meta’s AI architecture, forming the foundation for the company’s later advances in generative and recommendation AI.

In 2023, Tulloch joined OpenAI, where he helped design internal systems that scaled the training of large models, including GPT-4. These systems optimized OpenAI’s hardware utilization, cutting training times and operational costs, and became a cornerstone of the company’s expansion into large-scale inference workloads.

After leaving OpenAI, Tulloch co-founded Thinking Machines Lab in early 2025 with Murati. The startup quickly became one of the most closely watched newcomers in the AI space, raising $2 billion in funding and achieving a $12 billion valuation within months. Its flagship product, Tinker, launched as an API platform for fine-tuning and deploying large language models tailored to enterprise needs.

The company’s rapid rise drew interest from major tech firms. Mark Zuckerberg reportedly attempted to acquire Thinking Machines Lab earlier this year but was rebuffed. When the acquisition failed, Meta pursued Tulloch directly, offering him a compensation package estimated at $1.5 billion, tied to performance and stock options. Tulloch turned it down, and at the time, no senior staff left the startup.

His eventual move to Meta, announced this week, surprised many industry insiders. People close to the company described the decision as a “quiet personal shift” rather than a strategic exit. A spokesperson for Thinking Machines Lab said Tulloch “chose to step away to pursue a different path,” while reaffirming that Murati remains CEO and that the company’s roadmap remains unchanged.

Meta has not yet revealed Tulloch’s official title or responsibilities, but insiders suggest he will join Meta’s AI Research and Infrastructure group, focusing on large-model training efficiency and scaling.

Tulloch’s move underscores the intensity of competition among Big Tech firms vying for top AI talent – where billion-dollar offers, equity-heavy packages, and strategic hires are becoming standard in the race to dominate artificial intelligence’s next frontier.

Oracle Launches AI Agent Marketplace to Accelerate Enterprise Adoption Across Fusion Applications

Oracle has introduced an AI Agent Marketplace within its Fusion Applications suite, expanding AI Agent Studio and integrating top LLMs from OpenAI, Anthropic, and others to accelerate enterprise AI transformation.

By Samantha Reed
Juan Loaiza, Oracle’s executive vice president of Database Technologies, delivers a keynote at Oracle AI World 2025 in Las Vegas, emphasizing how AI is fundamentally transforming the way data and applications are built, managed, and scaled. Photo: Oracle AI World / Facebook

Oracle has unveiled a major expansion of its AI platform, introducing the AI Agent Marketplace and enhanced AI Agent Studio as part of its Fusion Applications suite. The move, announced at Oracle AI World 2025, marks a significant milestone in the company’s effort to make AI agents a core component of enterprise operations.

The new AI Agent Marketplace allows customers to access, deploy, and customize partner-built, Oracle-certified AI agents designed for industry-specific use cases. These agents extend the functionality of Oracle’s pre-built AI tools within Fusion Applications, which span financial management, human resources, supply chain, and customer experience software. By integrating directly into existing workflows, the marketplace enables organizations to automate business processes without building AI infrastructure from scratch.

“Organizations are grappling with rising business complexity and the urgent need to accelerate AI adoption,” said Chris Leone, Oracle’s executive vice president of Applications Development. “With our AI Agent Marketplace, we are providing customers and partners a unified environment to securely scale automation, extend functionality, and generate tangible results across their business.”

Oracle also expanded its AI Agent Studio, the company’s low-code development environment for creating and managing AI agents. The platform now supports a broad set of large language models, including OpenAI, Anthropic, Cohere, Google, Meta, and xAI, allowing businesses to choose the model that best fits their requirements. New tools include multi-agent collaboration, retrieval-augmented generation (RAG) for context-aware responses, and observability dashboards for monitoring and optimizing performance.
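Retrieval-augmented generation, one of the new Agent Studio capabilities, grounds a model's answer in documents fetched at query time rather than relying on the model's parameters alone. The minimal sketch below uses a toy keyword retriever and a placeholder `generate()` function; it illustrates the general RAG pattern, not Oracle's actual AI Agent Studio API.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]


def generate(prompt: str) -> str:
    """Placeholder for a call to an LLM (OpenAI, Anthropic, Cohere, etc.)."""
    return f"[model answer grounded in a prompt of {len(prompt)} chars]"


def rag_answer(query: str, documents: list[str]) -> str:
    # Retrieved passages are injected into the prompt as context,
    # so the model answers from enterprise data, not from memory alone.
    context = "\n".join(retrieve(query, documents))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)


docs = [
    "Invoice INV-001 was approved by finance on 2025-03-01.",
    "The supply chain dashboard refreshes nightly.",
    "Purchase order PO-77 covers office hardware.",
]
print(rag_answer("When was invoice INV-001 approved?", docs))
```

Production systems replace the keyword overlap with vector-embedding search, but the shape of the pipeline (retrieve, assemble context, generate) is the same.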

The update builds on Oracle’s commitment to interoperability through the Model Context Protocol (MCP), which allows seamless communication between AI agents, enterprise data sources, and external applications. Companies can now deploy multi-modal AI solutions capable of interpreting text, documents, and visuals, while maintaining the governance and security controls Oracle is known for.
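The Model Context Protocol is built on JSON-RPC 2.0: a client (here, an AI agent) sends structured requests to a server that fronts a data source or tool. The snippet below assembles a request envelope in the JSON-RPC shape MCP's tool calls use; the tool name and arguments are invented for illustration and do not correspond to any real Oracle endpoint.

```python
import json
from itertools import count

# JSON-RPC requires each request to carry a unique id.
_request_id = count(1)


def mcp_tool_call(tool_name: str, arguments: dict) -> str:
    """Serialize a JSON-RPC 2.0 request in the shape of an MCP tool call."""
    request = {
        "jsonrpc": "2.0",
        "id": next(_request_id),
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }
    return json.dumps(request)


# Hypothetical tool exposed by an enterprise data source.
payload = mcp_tool_call("lookup_purchase_order", {"po_number": "PO-77"})
print(payload)
```

Because every MCP server speaks this same envelope, an agent can talk to a database, a document store, or an external application without bespoke integration code for each one, which is the interoperability point the announcement emphasizes.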

To accelerate adoption, Oracle is leveraging its ecosystem of more than 32,000 certified AI professionals. These specialists will help enterprises deploy agent-based automation while contributing pre-tested solutions to the marketplace. Oracle said this global network of experts will be instrumental in developing new templates for finance, operations, procurement, and customer support workflows.

Analysts say the new marketplace positions Oracle as a strong contender in the race to operationalize AI for large enterprises. “Oracle is setting a new standard for enterprise AI,” said Mickey North Rizza, Group Vice President at IDC. “The integration of AI agents into Fusion Applications demonstrates that Oracle is not just delivering tools but building an ecosystem for intelligent automation at scale.”

The expansion follows a series of recent Oracle announcements aimed at strengthening its AI capabilities. At the same conference, the company confirmed its plan to deploy 50,000 AMD Instinct MI450 GPUs by 2026, reinforcing its infrastructure backbone for AI training and inferencing workloads. Together, these initiatives underscore Oracle’s ambition to provide a vertically integrated AI platform that combines infrastructure, data, and software under one cloud.

“The Oracle Fusion Applications AI Agent Marketplace is an ideal platform to deploy KPMG’s deep industry and domain-specific AI agents directly into business workflows,” said Swami Chandrasekaran, KPMG Global AI & Data Labs Leader. “Our Purchase Order Item Price History agent is a great example of this—it autonomously assembles and evaluates historical data to deliver immediate, actionable procurement insights and next-step recommendations at the moment of decision. This marketplace is a key enabler for us at KPMG, as it helps us deploy and distribute this next-generation AI securely, responsibly, and at scale, allowing our clients to make critical business decisions with greater speed and confidence.”

As competitors like Microsoft and Salesforce race to embed AI assistants into business applications, Oracle’s approach stands out for its focus on controlled customization and enterprise security. The company’s strategy reflects a broader shift from AI experimentation to production-grade deployment — where success is measured by speed, compliance, and measurable return on investment.

Prezent Raises $30M to Acquire AI Services Firms, Starting with Prezentium

AI presentation startup Prezent has raised $30 million to acquire services companies and integrate domain expertise into its enterprise communication platform, beginning with Prezentium.

By Samantha Reed
Rajat Mishra, Founder and CEO of Prezent AI, outlines his vision to combine artificial intelligence with human creativity, building a new era of enterprise communication powered by both speed and storytelling. Photo: True Global Ventures

Prezent AI, a California-based startup using artificial intelligence to automate enterprise presentations, has raised $30 million in new funding and acquired Prezentium, a leading business presentation services firm specializing in the life sciences sector. The combined company aims to redefine corporate communication by fusing AI automation with expert human design and storytelling.

The round was led by Multiplier Capital and Nomura Strategic Ventures, with participation from True Global Ventures, Greycroft, and existing backers Emergent Ventures and Alumni Ventures, according to True Global Ventures’ press release. The deal values Prezent AI at approximately $400 million, underscoring investor confidence in its “AI + human” model for large enterprises.

Founded by Rajat Mishra, Prezent AI has created what it calls a new category at the intersection of artificial intelligence, communication, and enterprise productivity. Unlike traditional presentation tools or agencies, its platform blends algorithmic automation with creative services, offering Fortune 2000 clients a single environment to accelerate and elevate their business communication.

“The acquisition of Prezentium strengthens this vision,” Mishra said. “Our north star is delivering great business communication outcomes — powered by AI acceleration and human expertise. With Prezentium joining our platform, we’re one step closer to building the complete AI-enabled communication lifecycle for enterprises.”

Prezentium, founded by Deepti Juturu, has built a strong reputation for rapid, high-quality presentation design for corporate clients in healthcare and technology. The integration brings together Prezent AI’s automation engine with Prezentium’s human designers, offering customers a comprehensive “AI + services” solution that delivers faster results and measurable productivity gains.

According to Mishra, the company’s goal is to disrupt the $20 billion global agency and consultancy market by providing an intelligent alternative that can transform data into business-ready presentations in minutes. Its platform enables users to build brand-aligned sales decks, executive reviews, and strategic updates at scale — all while maintaining corporate design standards.

Prezent AI’s technology integrates with enterprise systems through APIs and presentation agents, allowing organizations to create what it calls a “Company Presentation Brain” — a central knowledge model that compounds insights over time. This capability helps businesses reuse, adapt, and continuously improve communication assets rather than rebuilding from scratch for every new project.

Investors see the merger as a natural next step for the company. “The combined entity offers corporate clients far greater flexibility in how they approach business communication,” said Frank Desvignes, founding partner at True Global Ventures. “This unique blend of AI acceleration and human expertise enables enterprises to communicate faster and at scale.”

Prezent plans to expand its platform across industries including technology, finance, and manufacturing. Future updates will add multimodal presentation creation, allowing users to generate decks via text, voice, or video, and AI avatars for automated delivery — a move that places it among a new generation of companies merging automation with creative intelligence.

Based in Los Altos, California, Prezent AI now serves global clients seeking to modernize corporate storytelling and presentation workflows. With fresh capital and its first major acquisition complete, the company is positioning itself as a key player in transforming how enterprises think, design, and communicate ideas.

Oracle to Deploy 50,000 AMD AI Chips, Expanding Compute Power for OpenAI and Cloud Clients

Oracle Cloud Infrastructure will deploy 50,000 AMD Instinct MI450 GPUs beginning in 2026, underscoring the growing competition to Nvidia in the global race for AI computing capacity.

By Laura Bennett
Oracle’s headquarters in Austin, Texas. The company’s new AI expansion plan will deploy 50,000 AMD Instinct MI450 chips across its cloud network starting in 2026. Photo: Oracle

Oracle Cloud Infrastructure announced plans to deploy 50,000 AMD Instinct MI450 graphics processors starting in the second half of 2026, marking one of the largest single expansions of AI computing power by a major cloud provider.

The move reflects how cloud companies are increasingly turning to AMD as an alternative to Nvidia’s dominant GPU lineup amid surging global demand for artificial intelligence infrastructure.

AMD shares rose about 3% in early trading following the announcement, while Oracle stock slipped slightly. The deployment will use AMD’s MI450 accelerators, introduced earlier this year, which represent the company’s first rack-scale AI system.

The design allows up to 72 chips to work in tandem as a single unit, providing the kind of high-bandwidth performance required for large-scale training and inferencing workloads.

Karan Batta, senior vice president of Oracle Cloud Infrastructure, said AMD’s chips will play a growing role in the company’s cloud portfolio. “We feel like customers are going to take up AMD very, very well – especially in the inferencing space,” he said during an interview. Batta added that AMD’s software stack is “critical” to enabling developers to train and deploy models efficiently on Oracle’s platform.

The partnership further strengthens the ties between Oracle, AMD, and OpenAI, which have deepened over the past several months. Earlier this year, OpenAI and AMD signed a multi-gigawatt compute deal to power future generations of large language models, potentially involving up to 160 million AMD shares if performance milestones are met.

The agreement complements OpenAI’s five-year, $300 billion cloud partnership with Oracle announced in September, positioning Oracle as a primary infrastructure provider for OpenAI’s rapidly growing workloads.

Oracle’s founder and chairman Larry Ellison is expected to outline further details of the company’s AI strategy at Oracle AI World 2025, emphasizing how Oracle plans to challenge Microsoft, Amazon, and Google in the race for AI cloud leadership. Analysts see the AMD partnership as part of a broader strategy to reduce reliance on Nvidia while leveraging Oracle’s deep enterprise data assets and software capabilities.

“The company must now prove that beyond capacity, it can capitalize on its massive underlying data and enterprise capabilities to add meaningful value to the AI wave,” said Daniel Newman, CEO of The Futurum Group.

OpenAI CEO Sam Altman has praised AMD’s progress, appearing alongside AMD CEO Lisa Su earlier this year to unveil the MI450 series. The collaboration highlights OpenAI’s multi-supplier approach, which also includes partnerships with Broadcom for custom AI chips and Nvidia for high-performance GPU clusters.

By 2026, Oracle’s deployment of AMD systems could become one of the largest dedicated AI infrastructures in the world – a cornerstone of the company’s effort to establish itself as a critical backbone for enterprise-scale AI computing.