Thinking Machines Lab Loses Founders to Meta

Two more founding members of Thinking Machines Lab have joined Meta, highlighting ongoing talent departures from the $12 billion AI startup.

By Maria Konash
Thinking Machines Lab loses two executives to Meta. Photo: engin akyurt / Unsplash

Thinking Machines Lab, the San Francisco-based AI startup founded by former OpenAI CTO Mira Murati, has lost two more founding members to Meta. The departures continue a pattern of high-profile exits to Meta and OpenAI. The startup, which raised $2 billion at a $12 billion valuation last year, focuses on enabling developers to custom-build AI models and has become a recruitment target for larger tech firms, including Meta and OpenAI.

Key Founders Depart for Meta

Christian Gibson and Noah Shpak, previously listed as founding members on Thinking Machines Lab’s website, have joined Meta within the past few weeks, according to sources familiar with the matter. Gibson, a former OpenAI engineer, specializes in supercomputers for AI model training and contributed to the development of the first ChatGPT model. Shpak, an AI-focused engineer, previously worked at Character.AI and X.

The company has faced a string of high-profile departures over the past year. Co-founder Andrew Tulloch left for Meta last year, while CTO Barret Zoph and co-founder Luke Metz joined OpenAI last month. Other exits include Jolene Parish, a founding member specializing in security, along with two additional researchers.

Startup Continues to Attract Top Talent

Despite these departures, Thinking Machines Lab remains a hub for AI expertise. The company quietly hired Neal Wu, a programming Olympiad triple gold medalist, and Soumith Chintala, the creator of the open-source AI framework PyTorch, who now serves as the startup’s CTO. These hires reinforce the startup’s reputation for drawing elite AI engineers, even amid significant turnover.

The recent exodus underscores the competitive environment in AI talent acquisition, with major companies such as Meta and OpenAI actively recruiting experienced engineers and researchers. While Thinking Machines Lab continues its operations and maintains its development focus, the loss of multiple founding members raises questions about retention and the startup’s long-term stability in a rapidly evolving sector.

OpenAI Raises $110 Billion at $730 Billion Valuation

OpenAI secured $110 billion in new funding at a $730 billion pre-money valuation, backed by SoftBank, NVIDIA, and Amazon to expand AI infrastructure and global reach.

By Maria Konash
OpenAI raises $110B from SoftBank, NVIDIA, and Amazon. Photo: Zac Wolff / Unsplash

OpenAI announced $110 billion in new investment at a $730 billion pre-money valuation, marking one of the largest private funding rounds in technology history. The round includes $30 billion each from SoftBank Group Corp and NVIDIA, and $50 billion from Amazon. Additional financial investors are expected to join as the round progresses.

The company said the funding will support rising global demand for artificial intelligence products across consumers, developers, and enterprises. OpenAI identified compute, distribution, and capital as the core requirements to scale access to its AI systems worldwide.

As part of the announcement, OpenAI signed a multi-year strategic partnership with Amazon and expanded its collaboration with NVIDIA to secure next-generation inference and training infrastructure.

Infrastructure Expansion and Strategic Partnerships

Under the NVIDIA agreement, OpenAI will utilize 3 gigawatts of dedicated inference capacity and 2 gigawatts of training capacity on Vera Rubin systems. This builds on Hopper and Blackwell systems already deployed across partners including Microsoft, Oracle Cloud Infrastructure, and CoreWeave. The expanded compute footprint is designed to accelerate both model training and real-time deployment at global scale.

The Amazon partnership focuses on accelerating AI adoption among enterprises, startups, and consumers. OpenAI said the collaboration strengthens its distribution channels and infrastructure capabilities while deepening integration across cloud environments.

Product Growth Across Consumer and Enterprise

The funding follows significant growth across OpenAI’s product portfolio. Codex, the company’s AI coding system, has seen weekly users more than triple since the start of the year to 1.6 million. The tool enables individuals to build and deploy software workflows that previously required larger engineering teams.

ChatGPT remains the company’s largest consumer-facing product, with more than 900 million weekly active users and over 50 million subscribers. OpenAI reported that January and February are on track to be the strongest months for new subscriber additions in its history. The company said product performance continues to improve with faster responses, greater reliability, and stronger safety systems as usage scales.

In the enterprise segment, more than nine million paying business users rely on ChatGPT for workplace applications. Organizations across sectors are deploying AI systems across engineering, support, finance, sales, and operations. OpenAI’s Frontier platform supports enterprise customers in building and managing AI-powered workflows.

Foundation Impact

The new valuation increases the value of the OpenAI Foundation’s stake in OpenAI Group to more than $180 billion. The company said the strengthened balance sheet will expand philanthropic capacity in areas including health research and AI resilience.

Chief Executive Sam Altman said the partnerships reflect a shared ambition to scale reliable and broadly useful AI systems globally. The funding positions OpenAI to expand infrastructure capacity and accelerate deployment as frontier AI moves into daily use.

Dog Vibe-Codes Video Games with a Little Help From Claude AI

A YouTuber built a system that lets his cavapoo trigger AI-generated video games by mashing a keyboard, using Anthropic’s Claude with custom guardrails and automated feedback tools.

By Samantha Reed Edited by AIstify Team

A YouTuber has created an unusual AI experiment that turns random dog keystrokes into playable video games. Caleb Leak documented the project, which connects his nine-pound cavapoo, Momo, to an AI coding workflow powered by Claude Code from Anthropic.

The system combines consumer hardware, custom software, and structured AI prompting. Momo types on a Bluetooth keyboard connected to a Raspberry Pi 5. The keystrokes travel over a network to DogKeyboard, a lightweight Rust application that filters special keys and forwards usable input to Claude Code. After a preset amount of typing, a smart pet feeder dispenses a treat. A chime signals when the AI is ready for more input.
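
As a rough illustration of the filtering step described above, the logic might look something like the following Python sketch. The real DogKeyboard app is written in Rust, and the function names and treat threshold here are assumptions, not the project's actual code.

```python
# Illustrative sketch of DogKeyboard-style input handling: drop special
# keys, forward usable characters to the AI, and signal a treat after a
# preset number of accepted keystrokes. All names and the threshold are
# hypothetical.

TREAT_THRESHOLD = 30  # assumed number of accepted keystrokes per treat


def filter_keystrokes(raw_keys):
    """Keep only single printable characters; discard keys like <ESC> or F1."""
    return [k for k in raw_keys if len(k) == 1 and k.isprintable()]


def process_input(raw_keys, accepted_so_far=0):
    """Return (text to forward to the AI, updated count, whether to dispense a treat)."""
    usable = filter_keystrokes(raw_keys)
    count = accepted_so_far + len(usable)
    dispense = count >= TREAT_THRESHOLD
    # Reset the counter once a treat is dispensed so the cycle can repeat.
    return "".join(usable), count % TREAT_THRESHOLD if dispense else count, dispense
```

In the described setup, the forwarded text would then be handed to Claude Code over the network, with the treat dispenser and chime driven by the returned flag.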

Leak says the technical challenge was not the keyboard interface but ensuring the AI interpreted nonsensical input as intentional creative direction. Claude, like many large language models, is designed to disregard accidental or meaningless strings. To address this, Leak engineered prompt instructions that framed the keyboard mashing as cryptic but meaningful design language.

In the system prompt, Claude is told that it is collaborating with an eccentric game designer who communicates through riddles and random-looking characters. The AI is instructed to interpret every input as a valid creative signal and update the game accordingly.
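
The exact prompt has not been published, but a hypothetical reconstruction of the framing described above might read along these lines:

```python
# Hypothetical reconstruction of the system-prompt framing described in
# the article; the project's actual wording is not public.
SYSTEM_PROMPT = (
    "You are collaborating with an eccentric game designer who communicates "
    "only through riddles and seemingly random characters. Every input, no "
    "matter how nonsensical it appears, is a deliberate creative signal. "
    "Interpret each message as design direction and update the game "
    "accordingly. Never dismiss input as accidental or meaningless."
)
```

The key design choice is inverting the model's default behavior: rather than filtering out noise, the prompt reframes noise as signal, so every keystroke batch produces a concrete change to the game.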

Strong guardrails and automated feedback tools, including screenshot capture, play-testing routines, scene linting, and shader validation, ensure that the output remains functional.

From Random Input to Playable Build

Leak reports that a typical game takes one to two hours from first keystroke to playable build. All projects are developed in Godot 4.6, with game logic written entirely in C#. The automation layer translates keyboard noise into structured development steps, allowing Claude to generate assets, mechanics, and scene updates iteratively.

One resulting title, described as an arcade-style action game, features retro visuals reminiscent of 1980s console design. The experiment demonstrates how AI coding agents can transform loosely defined input into structured software output when supported by well-designed system prompts and validation pipelines.

The project highlights a broader shift in how developers interact with generative AI tools. Rather than precise commands, structured context and feedback loops increasingly shape outcomes. By reframing meaningless input as creative intent, the workflow tests the boundaries of AI-assisted development.

While the experiment is largely playful, it underscores how modern AI coding systems can generate functional applications rapidly when provided with constraints, interpretation rules, and automated testing. Whether used for novelty or serious prototyping, the setup illustrates the expanding flexibility of AI development tools in unconventional environments.

Burger King Tests AI Headset System for Employees

Burger King is testing an AI-powered headset system called Patty, designed to monitor operations, assist employees, and track service patterns in real time.

By Samantha Reed Edited by Maria Konash
Burger King trials ‘Patty’ in 500 U.S. outlets, using AI to optimize inventory, resolve issues, and boost customer service. Photo: Musmuliady Jahi / Unsplash

Burger King is testing an AI-powered headset system, Patty, across 500 U.S. restaurants. Developed by parent company Restaurant Brands International using OpenAI technology, the system provides real-time guidance to staff while monitoring operational and service metrics. The rollout represents one of the most ambitious AI experiments in fast food this year.

Operational Assistance Through AI

Patty connects to restaurant systems and communicates directly with employees through headsets. The AI flags low inventory, such as drink dispensers running out, and alerts managers when operational issues arise, including customer-reported incidents like messy restrooms.

Employees can interact with Patty to ask operational questions, including food preparation instructions, cleaning procedures, and digital menu management when ingredients are unavailable. The system integrates with Burger King’s broader BK Assistant platform and aims to reduce friction during busy shifts, providing managers with real-time insights rather than reactive reporting.

Hospitality Monitoring and Coaching

Beyond operational support, Patty tracks service behaviors. The AI recognizes key phrases like “welcome,” “please,” and “thank you,” allowing managers to monitor service patterns. Burger King emphasized the system is not intended to score employees or enforce scripts but to reinforce hospitality and provide actionable insights.

The company also stressed that the technology will not replace human interaction. “Hospitality is fundamentally human,” a Burger King spokesperson said. “The role of this technology is to support our teams so they can stay present with guests.” Patty’s monitoring features, including early-stage tone detection, are still being refined.

AI in Fast Food

Burger King joins other chains experimenting with AI to reduce labor pressures and improve operational efficiency. Yum Brands has partnered with Nvidia for AI tools across KFC, Taco Bell, and Pizza Hut, while McDonald’s has explored AI in drive-thru operations with IBM and now works with Google on new systems.

Patty combines digital oversight with hands-on operational support, assisting staff while tracking service quality. Whether employees come to see the headset as a helpful assistant or as a supervisor looking over their shoulder could shape the broader adoption of AI in fast-food operations.

By integrating AI directly into staff workflows, Burger King is testing the limits of real-time assistance and monitoring, balancing operational efficiency with the human touch in hospitality.

Google Launches Nano Banana 2: Lightning-Fast, High-Fidelity Image Generation

Nano Banana 2 delivers ultra-fast, photorealistic image generation with improved world knowledge, instruction following, and precision text rendering. Available across Gemini app, Search, AI Studio, Flow, and Google Cloud.

By Daniel Mercer Edited by Maria Konash
Google launches Nano Banana 2, a next-gen AI image model offering advanced knowledge and fast visual generation for creators. Photo: Google

Google today announced the launch of Nano Banana 2 (Gemini 3.1 Flash Image), the latest version of its Nano Banana AI image model. Following the viral success of Nano Banana and the advanced Nano Banana Pro, the new release combines rapid image generation with enhanced reasoning and real-world knowledge. It aims to make high-end creative tools accessible to a wider audience.

Intelligence and Visual Quality at Flash Speed

Nano Banana 2 integrates the speed of Gemini Flash with the advanced capabilities of Nano Banana Pro. The model draws from Gemini’s real-world knowledge base and supplements this with real-time web information and images. This enables users to generate accurate visuals for subjects ranging from infographics to diagrams and data visualizations.

The model also improves text handling within images. Creators can produce legible marketing copy, greeting cards, and multilingual text, making localization and global content creation easier.

Enhanced Creative Control

Nano Banana 2 balances speed and visual fidelity with several upgrades:

  • Subject Consistency: The model maintains character resemblance for up to five figures and fidelity for 14 objects in a single workflow, supporting narrative continuity.
  • Precise Instruction Following: Complex prompts are interpreted with higher accuracy, capturing nuanced creative intentions.
  • Production-Ready Specs: Users can select resolutions from 512px up to 4K and various aspect ratios, ensuring clarity across social media, presentations, and large-scale visuals.
  • Visual Fidelity Upgrade: Enhanced lighting, textures, and detail create photorealistic imagery without compromising generation speed.

Availability Across Google Platforms

Nano Banana 2 is now available across multiple Google products:

  • Gemini App: Replaces Nano Banana Pro as the default in Fast, Thinking, and Pro modes; Nano Banana Pro remains available for specialized tasks.
  • Search and Lens: Available in AI Mode through the Google app, mobile, and desktop browsers, expanding to 141 countries and eight additional languages.
  • AI Studio, API, and Google Cloud: Preview access via Gemini API and Vertex AI.
  • Flow and Ads: Default image generation model in Flow and integrated into Ads for campaign suggestions.

Provenance and Verification

Google continues to enhance generative content tracking. Nano Banana 2 incorporates SynthID technology alongside interoperable C2PA Content Credentials, allowing users to verify AI-generated images. Since November, SynthID has been used over 20 million times, with C2PA verification planned for future Gemini app updates.

By combining rapid generation, high fidelity, and advanced reasoning, Nano Banana 2 provides creators, businesses, and developers with a versatile tool for precise, professional, and scalable AI-generated imagery.

Hyperscale AI Spending Drives Nvidia to Historic $68.1B Quarter

Nvidia reports record Q4 revenue of $68.1B and net income of $43B, driven by strong AI data center demand and continued hyperscale investment.

By Samantha Reed Edited by Maria Konash
Nvidia reports $68.1B in quarterly revenue, up 73% YoY. Photo: Mariia Shalabaieva / Unsplash

Nvidia reported record fourth-quarter revenue of $68.1 billion, marking a 73% year-over-year increase, as global demand for artificial intelligence infrastructure continued to surge.

Net income climbed 94% from a year earlier to $43 billion, significantly exceeding analyst expectations and reinforcing Nvidia’s dominant position in AI hardware.

The results extend Nvidia’s streak of outsized quarterly beats during the ongoing AI investment cycle and underscore the company’s central role in powering large-scale AI deployments.

AI Data Center Demand Drives Growth

Nvidia’s data center segment once again accounted for the majority of revenue growth. Demand for advanced GPUs used to train and deploy large language models remained strong, fueled by continued capital expenditure from hyperscale cloud providers and enterprise customers.

Major cloud platforms and AI startups are still expanding high-performance computing capacity at scale, and management emphasized that AI infrastructure spending is still in its early phases.

Gross margins remained elevated, reflecting Nvidia’s strong pricing power and favorable product mix. Analysts note that few companies at Nvidia’s scale are simultaneously sustaining rapid revenue growth and expanding profitability.

Beyond chips, Nvidia also highlighted continued momentum in networking and AI software, reinforcing its strategy of delivering integrated hardware, systems, and software as a unified platform.

Investor Reaction and Market Impact

Investors responded positively to the earnings report, viewing it as further confirmation that AI spending has not meaningfully slowed despite broader market volatility.

The earnings beat supported not only Nvidia shares but also semiconductor peers and the broader AI supply chain ecosystem.

With quarterly profits reaching $43 billion, Nvidia’s scale is increasingly unmatched within the technology sector. The numbers illustrate how deeply embedded the company has become in the global AI buildout.

Still, expectations remain high. Analysts caution that valuation levels across AI-related stocks are elevated, and Nvidia faces growing scrutiny around the sustainability of its growth rates and potential competitive pressure.

For now, however, the company’s results offer tangible evidence that AI demand remains structurally strong. As long as hyperscalers and enterprises continue investing heavily in compute infrastructure, Nvidia appears positioned to remain one of the primary beneficiaries of the AI capital expenditure cycle.

The quarter reinforces a broader market narrative: while debate over an AI bubble persists, Nvidia’s financial performance continues to demonstrate real revenue, real profits, and sustained demand at unprecedented scale.