Samsung Unveils Galaxy S26 Series With Proactive Galaxy AI

Samsung introduces the Galaxy S26, S26+ and S26 Ultra, featuring advanced Galaxy AI, a new Privacy Display, and upgraded performance powered by Snapdragon 8 Elite Gen 5 for Galaxy.

By Daniel Mercer. Edited by Maria Konash.
Samsung unveils Galaxy S26 with proactive AI, Snapdragon 8 Elite Gen 5, and built-in Privacy Display. Photo: Samsung

Samsung Electronics has officially unveiled the Galaxy S26 series, introducing what it calls its most proactive and adaptive Galaxy AI experiences yet. The new lineup – Galaxy S26, S26+ and S26 Ultra – is designed to simplify everyday tasks by handling complex processes in the background, allowing users to focus on results rather than the technology itself.

As Samsung’s third-generation AI smartphones, the Galaxy S26 devices aim to reduce friction across common activities, from planning schedules and searching for information to capturing and refining content.

Performance Built for AI

The Galaxy S26 series is powered by Samsung’s most advanced hardware platform to date, led by the customised Snapdragon® 8 Elite Gen 5 Mobile Platform for Galaxy in the S26 Ultra.

Samsung says the Ultra delivers:

  • Up to 19% faster CPU performance
  • A 39% improvement in NPU performance for always-on Galaxy AI features
  • A 24% GPU boost for smoother visuals and gameplay

To sustain performance, the S26 Ultra introduces a redesigned vapor chamber and enhanced thermal interface materials for better heat dissipation during gaming, multitasking, and video capture. Super-Fast Charging 3.0 enables up to 75% battery in around 30 minutes.

Samsung’s proprietary ProScaler and upgraded mobile Digital Natural Image engine (mDNIe) further enhance display sharpness, colour accuracy, and image clarity.

Industry-First Built-In Privacy Display

A standout feature of the Galaxy S26 Ultra is the mobile industry’s first built-in Privacy Display. Unlike traditional stick-on privacy films, Samsung’s integrated solution dynamically limits side-angle visibility while maintaining full brightness and clarity for the user.

Privacy Display can automatically activate when entering PINs, opening selected apps, or viewing sensitive content. Users can also enable Partial Screen Privacy for notifications or Maximum Privacy Protection for enhanced discretion.

This hardware-level privacy feature reinforces Samsung’s broader security strategy, which includes Samsung Knox, Knox Vault, post-quantum cryptography protections, and expanded AI-driven Privacy Alerts.

Galaxy’s Most Advanced Camera System

The Galaxy S26 series also introduces Samsung’s most advanced camera system to date.

On the S26 Ultra:

  • Wider apertures improve low-light photography
  • Enhanced Nightography Video keeps footage vibrant in dim environments
  • Upgraded Super Steady adds horizontal lock for stable framing
  • Support for APV, a new professional-grade video codec for high-quality compression

AI enhancements now extend to the selfie camera through an improved AI ISP, delivering more natural skin tones in mixed lighting.

Editing tools have also been expanded. The upgraded Photo Assist suite allows users to describe edits in natural language – such as changing a scene from day to night, restoring missing objects, or even modifying outfits in photos. Creative Studio centralises design tools for generating stickers, wallpapers, and invitations from sketches or prompts.

More Proactive Galaxy AI

Galaxy AI on the S26 series becomes more context-aware and anticipatory.

Features include:

  • Now Nudge, which suggests relevant content – such as surfacing trip photos when a friend asks for them
  • A more personalised Now Brief widget
  • Enhanced Circle to Search with Google, now supporting multi-object recognition
  • Integration with Bixby, Gemini, and Perplexity agents for natural, multi-step task completion

Users can request actions like booking a ride or coordinating across apps with a single voice prompt, while AI agents handle the process in the background.

Security for the AI Era

As AI becomes more deeply embedded into mobile workflows, Samsung is emphasising layered protection. The Galaxy S26 series includes:

  • AI-powered Call Screening
  • Real-time Privacy Alerts for sensitive data access
  • Private Album within Gallery
  • PQC-enabled encryption for eSIM transfers via Knox Matrix
  • Seven years of security updates

Samsung says these features combine hardware-level and software-based safeguards to provide transparency and control over how personal data is used.

Availability

The Galaxy S26, S26+ and S26 Ultra will be available for pre-order from 26 February to 19 March 2026. Recommended retail pricing starts at:

  • Galaxy S26 Ultra 256GB: R30,999
  • Galaxy S26+ 256GB: R25,999
  • Galaxy S26 256GB: R20,999

The series will be offered in Cobalt Violet, White, Black, and Sky Blue.

With the Galaxy S26 lineup, Samsung is positioning AI not as a feature users activate, but as an invisible system that works proactively in the background – marking its most ambitious step yet toward agentic, privacy-aware mobile computing.


Google Launches Nano Banana 2: Lightning-Fast, High-Fidelity Image Generation

Nano Banana 2 delivers ultra-fast, photorealistic image generation with improved world knowledge, instruction following, and precision text rendering. Available across Gemini app, Search, AI Studio, Flow, and Google Cloud.

By Daniel Mercer. Edited by Maria Konash.
Google launches Nano Banana 2, a next-gen AI image model offering advanced knowledge and fast visual generation for creators. Photo: Google

Google today announced the launch of Nano Banana 2 (Gemini 3.1 Flash Image), the latest version of its Nano Banana AI image model. Following the viral success of Nano Banana and the advanced Nano Banana Pro, the new release combines rapid image generation with enhanced reasoning and real-world knowledge. It aims to make high-end creative tools accessible to a wider audience.

Intelligence and Visual Quality at Flash Speed

Nano Banana 2 integrates the speed of Gemini Flash with the advanced capabilities of Nano Banana Pro. The model draws from Gemini’s real-world knowledge base and supplements this with real-time web information and images. This enables users to generate accurate visuals for subjects ranging from infographics to diagrams and data visualizations.

The model also improves text handling within images. Creators can produce legible marketing copy, greeting cards, and multilingual text, making localization and global content creation easier.

Enhanced Creative Control

Nano Banana 2 balances speed and visual fidelity with several upgrades:

  • Subject Consistency: The model maintains character resemblance for up to five figures and fidelity for 14 objects in a single workflow, supporting narrative continuity.
  • Precise Instruction Following: Complex prompts are interpreted with higher accuracy, capturing nuanced creative intentions.
  • Production-Ready Specs: Users can select resolutions from 512px up to 4K and various aspect ratios, ensuring clarity across social media, presentations, and large-scale visuals.
  • Visual Fidelity Upgrade: Enhanced lighting, textures, and detail create photorealistic imagery without compromising generation speed.

Availability Across Google Platforms

Nano Banana 2 is now available across multiple Google products:

  • Gemini App: Replaces Nano Banana Pro on Fast, Thinking, and Pro models; Pro remains for specialized tasks.
  • Search and Lens: Available in AI Mode through the Google app, mobile, and desktop browsers, expanding to 141 countries and eight additional languages.
  • AI Studio, API, and Google Cloud: Preview access via Gemini API and Vertex AI.
  • Flow and Ads: Default image generation model in Flow and integrated into Ads for campaign suggestions.

Provenance and Verification

Google continues to enhance generative content tracking. Nano Banana 2 incorporates SynthID technology alongside interoperable C2PA Content Credentials, allowing users to verify AI-generated images. Since November, SynthID has been used over 20 million times, with C2PA verification planned for future Gemini app updates.

By combining rapid generation, high fidelity, and advanced reasoning, Nano Banana 2 provides creators, businesses, and developers with a versatile tool for precise, professional, and scalable AI-generated imagery.

Hyperscale AI Spending Drives Nvidia to Historic $68.1B Quarter

Nvidia reports record Q4 revenue of $68.1B and net income of $43B, driven by strong AI data center demand and continued hyperscale investment.

By Samantha Reed. Edited by Maria Konash.
Nvidia reports $68.1B in quarterly revenue, up 73% YoY. Photo: Mariia Shalabaieva / Unsplash

Nvidia reported record fourth-quarter revenue of $68.1 billion, marking a 73% year-over-year increase, as global demand for artificial intelligence infrastructure continued to surge.

Net income climbed 94% from a year earlier to $43 billion, significantly exceeding analyst expectations and reinforcing Nvidia’s dominant position in AI hardware.

The results extend Nvidia’s streak of outsized quarterly beats during the ongoing AI investment cycle and underscore the company’s central role in powering large-scale AI deployments.

AI Data Center Demand Drives Growth

Nvidia’s data center segment once again accounted for the majority of revenue growth. Demand for advanced GPUs used to train and deploy large language models remained strong, fueled by continued capital expenditure from hyperscale cloud providers and enterprise customers.

Major cloud platforms and AI startups are still expanding high-performance computing capacity at scale, and management emphasized that AI infrastructure spending is still in its early phases.

Gross margins remained elevated, reflecting Nvidia’s strong pricing power and favorable product mix. Analysts note that few companies at Nvidia’s scale are simultaneously sustaining rapid revenue growth and expanding profitability.

Beyond chips, Nvidia also highlighted continued momentum in networking and AI software, reinforcing its strategy of delivering integrated hardware, systems, and software as a unified platform.

Investor Reaction and Market Impact

Investors responded positively to the earnings report, viewing it as further confirmation that AI spending has not meaningfully slowed despite broader market volatility.

The earnings beat supported not only Nvidia shares but also semiconductor peers and the broader AI supply chain ecosystem.

With quarterly profits reaching $43 billion, Nvidia’s scale is increasingly unmatched within the technology sector. The numbers illustrate how deeply embedded the company has become in the global AI buildout.

Still, expectations remain high. Analysts caution that valuation levels across AI-related stocks are elevated, and Nvidia faces growing scrutiny around the sustainability of its growth rates and potential competitive pressure.

For now, however, the company’s results offer tangible evidence that AI demand remains structurally strong. As long as hyperscalers and enterprises continue investing heavily in compute infrastructure, Nvidia appears positioned to remain one of the primary beneficiaries of the AI capital expenditure cycle.

The quarter reinforces a broader market narrative: while debate over an AI bubble persists, Nvidia’s financial performance continues to demonstrate real revenue, real profits, and sustained demand at unprecedented scale.


ChatGPT Helps Physicists Derive Long-Sought Gluon Interaction Formula

Researchers report that ChatGPT-5.2 Pro helped simplify and generalize a complex gluon scattering formula, with results presented at the AAAS annual meeting.

By Laura Bennett. Edited by Maria Konash.
ChatGPT contributed to uncovering a decades-old formula for gluon interactions. Photo: FlyD / Unsplash

An unlikely contributor has entered the world of high-energy theoretical physics: ChatGPT.

For decades, physicists believed that a particular interaction involving gluons – the massless particles that carry the strong nuclear force – could never occur. Now, researchers say OpenAI’s latest public model, ChatGPT-5.2 Pro, helped demonstrate that the process is in fact possible, deep within the complex internal structure of protons and neutrons.

The findings were presented last week at the annual meeting of the American Association for the Advancement of Science (AAAS), which publishes Science.

“The ideas are not revolutionary,” said Zvi Bern of the Mani L. Bhaumik Institute for Theoretical Physics at UCLA. “But what is revolutionary is that a machine can do this.”

A 40-Year Puzzle

Gluons bind quarks together to form protons and neutrons, and also bind protons and neutrons into atomic nuclei. The mathematics describing their interactions, known as scattering amplitudes, is notoriously complex.

In simple gluon collisions, physicists long believed that at least two particles had to possess negative helicity (a type of spin orientation). If only one gluon had negative helicity, the scattering amplitude was assumed to be zero, meaning the interaction could not happen.
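
For context (standard textbook material, not taken from the new paper): the best-known example of a compact gluon amplitude is the Parke-Taylor formula for the tree-level configuration with exactly two negative-helicity gluons, which compresses pages of Feynman-diagram algebra into a single line (up to overall coupling and colour factors):

\[
A_n\left(1^-,2^-,3^+,\dots,n^+\right)=\frac{\langle 1\,2\rangle^{4}}{\langle 1\,2\rangle\langle 2\,3\rangle\cdots\langle n\,1\rangle}
\]

Here $\langle i\,j\rangle$ denotes a spinor-helicity inner product. The analogous flat-space tree amplitude with only one negative-helicity gluon vanishes, which is why a formula of comparable elegance for that configuration was long thought not to exist.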

About a year ago, three theorists identified a loophole: a single negative-helicity gluon might interact with positive-helicity gluons if all particles were traveling in roughly the same direction. Proving it, however, required navigating pages of unwieldy equations.

Andrew Strominger of Harvard University and his collaborators initially thought the calculation would take weeks. Instead, it stretched on for months. Alfredo Guevara of the Institute for Advanced Study eventually discovered a pattern in the equations, but generalizing the result for any number of gluons produced an expression dozens of terms long – too cumbersome to use.

The team suspected a clean, elegant formula was hidden inside the mess. They just couldn’t extract it.

Enter ChatGPT

At the same time, Alex Lupsasca of Vanderbilt University had joined OpenAI’s newly launched OpenAI for Science initiative. After reconnecting with Strominger, his former adviser, he suggested using ChatGPT as a test case.

The researchers fed the complex four-gluon expression into ChatGPT-5.2 Pro. Within about 20 minutes, the model simplified it. They repeated the process for five gluons, then six. In one case, the AI reduced a 32-term expression into a compact product spanning a single line.

Finally, they asked it to generalize the formula for any number of gluons. The system responded within minutes with what it described as an “obvious” generalized expression.

Concerned about possible hallucinations, the team rigorously checked the result. They found no errors.

“All of a sudden, I felt like my machine turned from a machine into a live being,” Strominger said.

To further validate the result, the team submitted the generalized formula to an internal OpenAI research model under development, nicknamed “SuperChat.” After roughly 12 hours of processing, the internal model produced a detailed proof that passed human scrutiny.

A Paradigm Shift?

The paper, posted to arXiv on 12 February, quickly gained attention online and sparked surprise at the AAAS meeting.

“What the OpenAI agent was able to do is impressive,” said Aida El-Khadra of the University of Illinois Urbana-Champaign.

The researchers believe the development could mark a turning point in how theoretical physics is conducted. Guevara suggested AI might soon become as integral to physics as it has to programming – handling routine derivations, checking errors, and accelerating research workflows.

The broader physics community has responded with cautious optimism. While many see AI as a powerful assistant for verification, drafting, and cross-disciplinary synthesis, concerns remain about transparency, training of graduate students, and overreliance on automated systems.

Still, few believe scientists are at risk of being replaced.

“None of this feels to me like scientists will be replaced,” El-Khadra said.

Lupsasca is already looking ahead. He hopes similar techniques could be applied to gravitons – hypothetical quantum particles that mediate gravity – and perhaps even help tackle one of physics’ greatest unsolved problems: reconciling quantum mechanics with gravity.

For now, the result stands as a striking milestone: after 40 years of near-intractable algebra, a large language model helped physicists uncover an elegant formula describing interactions among fundamental massless particles and proved that AI can meaningfully contribute to front-line theoretical research.


Perplexity Launches $200/Month AI Agent Bundle With ChatGPT, Gemini, and Grok

Perplexity unveils Perplexity Computer, a general-purpose AI system that orchestrates multiple frontier models to run complex workflows autonomously.

By Daniel Mercer. Edited by Maria Konash.
Perplexity unveils Perplexity Computer, an AI system that coordinates long-running, cross-platform workflows using multiple models. Photo: Perplexity

Perplexity AI has introduced Perplexity Computer, a new system designed to unify frontier AI models, including ChatGPT, Gemini, and Grok, into a single, general-purpose “digital worker” capable of creating and executing entire workflows.

The company argues that while AI models are becoming increasingly powerful, the interfaces built around them are now the bottleneck. Perplexity Computer aims to remove that limitation by moving beyond chat interfaces and task-based agents toward a system that can autonomously design, coordinate, and run multi-step processes for hours — or even months.

From Answers to Autonomous Workflows

Unlike traditional chat-based AI systems that generate responses, Perplexity Computer begins with an outcome. Users describe a goal, and the system decomposes it into tasks and subtasks, automatically spawning specialized sub-agents to execute them.

These sub-agents can perform web research, generate documents, process data, make API calls to connected services, and even write code. Tasks are coordinated asynchronously: one agent can gather data while another drafts a report. If problems arise, the system creates additional sub-agents to troubleshoot — whether that means researching documentation, locating API keys, building small apps, or escalating only when necessary.
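
Perplexity has not published implementation details, but the fan-out pattern described above (a goal decomposed into subtasks, each handed to a concurrently running sub-agent) can be sketched in a few lines. Everything here is a hypothetical toy, not Perplexity's API:

```python
import asyncio

# Toy illustration of the goal -> tasks -> sub-agents fan-out pattern.
# All names are hypothetical; Perplexity has not published its internals.

async def sub_agent(task: str) -> str:
    """Stand-in for a specialized sub-agent (research, drafting, coding...)."""
    await asyncio.sleep(0)  # a real agent would call models and tools here
    return f"result of {task!r}"

def decompose(goal: str) -> list[str]:
    """Stand-in planner: split a goal into independent subtasks."""
    return [f"{goal}: research", f"{goal}: draft report", f"{goal}: review"]

async def orchestrate(goal: str) -> dict[str, str]:
    """Fan subtasks out to sub-agents running concurrently, collect results."""
    tasks = decompose(goal)
    # One agent can gather data while another drafts, as in the description above.
    results = await asyncio.gather(*(sub_agent(t) for t in tasks))
    return dict(zip(tasks, results))

if __name__ == "__main__":
    outcome = asyncio.run(orchestrate("compile a market summary"))
    for task, result in outcome.items():
        print(task, "->", result)
```

In a real system, `sub_agent` would call a model with tool access and `decompose` would itself be a planning model; the asyncio structure is only meant to show how independent subtasks can run side by side.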

Each task runs inside an isolated compute environment with access to a real filesystem, browser, and tool integrations, creating what Perplexity describes as a secure, universal harness for advanced AI work.

Multi-Model by Design

Perplexity emphasizes that its system is model-agnostic and built around intelligent multi-model orchestration. Rather than relying on a single foundation model, Perplexity Computer dynamically assigns tasks to the most suitable AI system.

As of launch, the platform runs Opus 4.6 as its core reasoning engine and deploys other frontier models depending on the job — Gemini for deep research and sub-agent creation, Nano Banana for image generation, Veo 3.1 for video, Grok for lightweight speed-focused tasks, and ChatGPT 5.2 for long-context recall and broad search.

The company argues that, contrary to claims that AI models are commoditizing, they are in fact specializing. In that environment, the most powerful system is not a single model, but an orchestrator that intelligently combines them.

A Broader Evolution

Perplexity frames the launch as a continuation of its broader mission to “power the world’s curiosity.” Previous steps included Comet, described as an AI-native browser, and Comet Assistant, a personal AI agent. With deep research capabilities, persistent memory, and task management already in place, Perplexity Computer represents the next step: AI not just as an assistant, but as an operational system.

The company draws a historical parallel to 18th-century “computers” — human apprentices who performed complex mathematical calculations collaboratively. In that sense, Perplexity argues, the word has come full circle: AI is now the computer.

Perplexity Computer is available immediately to Perplexity Max subscribers at $200 per month, with Enterprise Max access expected soon.

Andrej Karpathy: “The Era of Manual Coding is Over”

Andrej Karpathy argues AI coding agents now fundamentally change programming workflows, enabling long-running autonomous tasks and redefining software engineering.

By Daniel Mercer. Edited by Maria Konash.
Software development is moving from coding by hand to AI-driven task orchestration. Photo: Joshua Reddekopp / Unsplash

Andrej Karpathy says programming has undergone a dramatic shift in just the past two months: not gradual progress, but a clear inflection point.

In a post on X, Karpathy argued that coding agents “basically didn’t work before December and basically work since.” According to him, recent model improvements in quality, long-term coherence, and persistence have made AI agents capable of powering through complex, multi-step tasks — far beyond what was possible only months ago.

A 30-Minute Weekend Project

Karpathy shared a personal example: over a weekend, he set up a local video analysis dashboard for his home cameras by giving an AI agent a single detailed instruction in plain English. The request included logging into his DGX Spark machine, configuring SSH keys, setting up vLLM, downloading and benchmarking Qwen3-VL, building a server endpoint for video inference, creating a web UI dashboard, testing everything, configuring systemd services, and generating a markdown report.

The agent reportedly worked autonomously for around 30 minutes, troubleshooting errors, researching solutions, writing and debugging code, and deploying services, then returned with a completed system and documentation. Karpathy said he “didn’t touch anything.”

Just three months ago, he noted, the same work could have taken an entire weekend.

From Typing Code to Orchestrating Agents

Karpathy argues that programming is becoming “unrecognizable.” Instead of writing code line by line in an editor (the default workflow since the invention of computers), developers are increasingly spinning up AI agents, assigning tasks in natural language, and reviewing outputs.

The leverage, he says, comes from building higher-level orchestration systems: long-running agents equipped with tools, memory, and structured instructions that manage multiple coding instances in parallel. He describes this as “agentic engineering,” where the biggest opportunity lies in mastering abstraction layers and task decomposition.
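
Karpathy has not published reference code for this workflow, but the core discipline he describes, delegating only well-specified tasks with clear verification criteria, can be sketched as a verify-and-retry wrapper around an agent call. All names here are hypothetical:

```python
from typing import Callable

# Hypothetical sketch: delegate a well-specified task to an agent and accept
# the output only if it passes an explicit verification check, retrying otherwise.

def delegate(task: str,
             agent: Callable[[str], str],
             verify: Callable[[str], bool],
             max_attempts: int = 3) -> str:
    """Run `agent` on `task`, retrying until `verify` accepts the output."""
    last = ""
    for attempt in range(1, max_attempts + 1):
        last = agent(f"{task} (attempt {attempt})")
        if verify(last):
            return last
    raise RuntimeError(f"no verified result after {max_attempts} attempts: {last!r}")

if __name__ == "__main__":
    # Toy agent that only "succeeds" on its second try.
    flaky = lambda prompt: "ok" if "attempt 2" in prompt else "garbage"
    print(delegate("format the report", flaky, verify=lambda out: out == "ok"))
```

The point of the sketch is the shape, not the implementation: the human supplies the task specification and the verification criterion, while the agent handles the execution loop.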

Not Magic, But Disruptive

Karpathy is clear that the systems are not perfect. They require high-level direction, oversight, judgment, and iteration. They perform best on well-specified tasks with clear verification criteria. The skill, he suggests, is learning how to break problems into components that can be reliably delegated to agents while managing the edge cases.

Still, his conclusion is unequivocal: this is not business as usual in software development. And his view is increasingly echoed across the industry – for example, Spotify recently said its best developers haven’t written code in months thanks to generative AI tools, instead focusing on directing and reviewing AI-generated output as the company accelerates product development with internal systems and large language models.

If his assessment proves accurate, the role of the software engineer may be shifting from code writer to task architect, from syntax to strategy, at a pace far faster than most anticipated.
