Google DeepMind Unveils AlphaGenome DNA Prediction Model

Google DeepMind has introduced AlphaGenome, a new AI model designed to analyze long DNA sequences and predict how genetic variations may influence disease development.

By Maria Konash
Google DeepMind introduces AlphaGenome, an AI capable of interpreting long DNA sequences and assessing mutation effects. Photo: Sangharsh Lohakare / Unsplash

Google DeepMind has launched AlphaGenome, a new artificial intelligence model designed to analyze long stretches of DNA and predict how genetic changes may affect gene regulation and disease risk. The research-focused tool aims to help scientists better understand genome function, particularly in regions of DNA that do not directly code for proteins but play a critical regulatory role.

The human genome contains roughly three billion chemical bases, represented by the letters A, C, G, and T. While roughly two percent of the genome encodes proteins, the remaining 98 percent regulates when and how genes are activated. These non-coding regions influence processes such as gene expression, response to environmental signals, and RNA splicing. Many disease-associated mutations fall within these regulatory segments, where small changes can alter biological behavior without modifying the proteins themselves.

AlphaGenome is designed to model this complexity. Using deep learning techniques inspired by how the brain processes information, the system can read up to one million DNA letters at single-letter resolution. This scale and precision exceed the capabilities of most previous genomic models, which typically analyze shorter sequences or focus primarily on protein-coding regions.

Predicting the Impact of Genetic Variants

According to DeepMind, AlphaGenome can estimate how subtle genetic variants influence gene activity and disrupt normal biological processes linked to diseases, including cancer. The model predicts how changes in DNA sequences affect regulatory elements that control gene behavior, offering insights into mechanisms that are difficult to observe experimentally.

In one demonstration, researchers applied AlphaGenome to a form of acute leukemia affecting immature T-cells. In some cases of this cancer, mutations do not alter proteins directly but instead increase or decrease the activity of nearby genes. AlphaGenome compared normal and mutated DNA sequences and predicted the likelihood that specific variants would raise gene activity, a signal often associated with uncontrolled cell growth.
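The comparison the article describes can be sketched as scoring the reference sequence and the mutated sequence and taking the difference. This is an illustrative sketch only: AlphaGenome's actual API is not shown in the article, so `ToyModel` and its `predict` method are hypothetical stand-ins (here, a trivial GC-content scorer).

```python
# Hypothetical sketch of variant-effect scoring: compare a model's score
# for the reference sequence against the score for the mutated sequence.
# ToyModel is a stand-in, NOT DeepMind's model or API.

class ToyModel:
    """Stand-in scorer: rates a DNA sequence by its GC content."""
    def predict(self, seq: str) -> float:
        return (seq.count("G") + seq.count("C")) / len(seq)

def variant_effect(model, seq: str, pos: int, alt: str) -> float:
    """Predicted effect of substituting base `alt` at `pos` (alt minus ref)."""
    ref_score = model.predict(seq)
    mutated = seq[:pos] + alt + seq[pos + 1:]
    alt_score = model.predict(mutated)
    return alt_score - ref_score

model = ToyModel()
effect = variant_effect(model, "ATGCATGC", 0, "G")  # A -> G raises GC content
```

A positive difference would flag the variant as increasing the predicted signal, which is the kind of readout the leukemia demonstration relies on.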

The tool is currently available free of charge for non-commercial research use and is not intended for clinical diagnosis or treatment. DeepMind said it is designed as a scientific aid rather than a medical product, allowing researchers to test hypotheses before conducting laboratory experiments.

Research Potential and Limitations

Researchers see AlphaGenome as a virtual laboratory tool that could reduce the cost and time required for early-stage biological research. In molecular biology, it may help scientists explore how regulatory DNA functions without relying solely on physical experiments. In biotechnology, the model could assist in designing genetic therapies or improving molecules that target specific tissues.

External experts have described the model as a significant technical advance. Robert Goldstone, head of genomics at the Francis Crick Institute, said AlphaGenome’s resolution marks a shift from theoretical exploration to practical research utility, enabling systematic study of complex disease mechanisms.

However, scientists caution that the model’s performance depends heavily on the quality of its training data. Ben Lehner of the Wellcome Sanger Institute noted that many biological datasets remain small and inconsistently structured, limiting how effectively AI systems can learn from them. Generating large, standardized datasets remains a major challenge for the next generation of genomic AI tools.

DeepMind said AlphaGenome is intended as a foundational research resource that can evolve alongside improvements in data availability, helping accelerate biological discovery and the development of new treatments. The model also reflects the company’s broader push to apply advanced AI systems to complex, real-world domains.

Alongside AlphaGenome, Google DeepMind has recently begun rolling out Project Genie, an experimental prototype that allows users to create and explore AI-generated worlds powered by its Genie 3 world model. Together, the two releases highlight how the company is extending frontier AI research beyond language and perception into both scientific discovery and interactive simulation.


Google DeepMind’s Project Genie Lets Users Explore AI-Generated Worlds

Google DeepMind has begun rolling out Project Genie, an experimental interactive prototype that lets users create and explore AI-generated worlds powered by its Genie 3 world model.

By Maria Konash

Google DeepMind has started rolling out access to Project Genie, an experimental interactive prototype that allows users to create, explore, and remix AI-generated worlds. The tool is powered by Genie 3, the company’s general-purpose world model, and is available initially to Google AI Ultra subscribers in the United States aged 18 and over.

The launch follows a limited preview of Genie 3 shared with trusted testers in August. Those early users created a wide range of interactive environments and identified new applications for the technology, prompting DeepMind to broaden access through a dedicated prototype focused on immersive world creation.

World models are designed to simulate environments by predicting how they evolve and how actions affect them. While DeepMind has previously built agents for closed systems such as chess and Go, the company views general-purpose world models as a key step toward artificial general intelligence. Genie 3 differs from earlier approaches by generating environments dynamically in real time, rather than relying on static scenes or pre-rendered paths.

Interactive World Creation

Project Genie is delivered as a web-based prototype and combines Genie 3 with other Google AI systems, including Gemini and Nano Banana Pro. The experience centers on three core capabilities: world sketching, world exploration, and world remixing.

Users can begin by prompting with text, generated images, or uploaded visuals to define a setting, character, and mode of movement, such as walking, flying, or driving. For more precise control, Nano Banana Pro allows users to preview and adjust the visual structure of a world before entering it, as well as choose first-person or third-person perspectives.

Once inside, the environment is fully navigable. As users move, the system generates new terrain and scenes on the fly, responding to direction, camera changes, and interactions. Existing worlds can also be remixed by building on original prompts, and users can browse curated examples or randomized environments for inspiration. Completed sessions can be exported as short videos.

DeepMind said the prototype reflects progress in consistency and physical simulation, enabling applications across fields such as robotics research, animation, fictional storytelling, and the exploration of real or historical locations.

Limitations and Responsible Use

Project Genie remains an early-stage research prototype and includes several constraints. Generated worlds may not always match prompts or real-world physics, character control can be inconsistent, and individual generations are limited to 60 seconds. Some features previously discussed for Genie 3, such as prompt-driven events that alter environments mid-exploration, are not yet available.

DeepMind emphasized that Project Genie is being released through Google Labs to better understand how people use world models and where improvements are needed. The company said feedback from users of its most advanced AI tier will inform future development.

Access to Project Genie is rolling out gradually to U.S.-based Google AI Ultra subscribers, with plans to expand to additional regions over time. DeepMind said its longer-term goal is to make world model technology more broadly accessible as capabilities mature and reliability improves.


LinkedIn Adds Verified AI Skill Certificates to Profiles

LinkedIn is rolling out verified AI skill certifications that let users showcase real-world proficiency with popular AI tools, based on ongoing product usage rather than tests or self-declared skills.

By Maria Konash
LinkedIn debuts AI skill certifications tied to real-world tool usage. Photo: Zulfugar Karimov / Unsplash

LinkedIn has introduced a new feature that allows users to display verified AI skill certifications on their profiles, signaling a shift away from self-reported skills and short-form tests toward proof based on real-world usage. The update is part of a broader effort by the Microsoft-owned professional networking platform to make profiles more reflective of applied, in-demand capabilities.

The company said the certifications will be issued through partnerships with AI-first software platforms, starting with Lovable, Relay.app, and Replit. Qualified users can link their accounts on those services to LinkedIn, where certificates reflecting their level of proficiency will appear automatically. Additional partners, including Gamma, GitHub, Zapier, and Descript, are expected to join the program in the coming months.

Unlike traditional certifications that rely on exams or one-time assessments, LinkedIn’s model is based on continuous evaluation. Partner platforms assess how users work within their products, analyzing usage patterns, outcomes, and overall sophistication over time. Once a user meets a platform’s internal threshold for proficiency, the verified skill badge is added to their LinkedIn profile.

Pat Whelan, head of career products at LinkedIn, said the goal is to provide hiring managers with a more reliable signal of capability. The certifications are also designed to feed into LinkedIn’s own hiring and recruiting tools, including AI-driven candidate matching.

Proof Through Usage, Not Tests

LinkedIn said the exact criteria for proficiency will vary by partner and has not disclosed benchmarks or minimum usage requirements. The company said this flexibility allows product makers, rather than LinkedIn, to define what meaningful expertise looks like for their tools. Experience gained through side projects or independent work will count toward certification, not just usage in a formal job setting.

Hari Srinivasan, LinkedIn’s vice president of product, described verified skills as an extension of the platform’s broader trust initiatives. LinkedIn’s identity verification system has been adopted by more than 100 million users, and the company views verified AI skills as an additional layer of credibility for both job seekers and employers.

The move reflects changing hiring expectations as AI tools become embedded across roles beyond software engineering. Employers are increasingly seeking candidates who can demonstrate practical experience with modern tools rather than familiarity in name only.

Rising Demand for AI Skills

The rollout comes amid strong growth in demand for AI-related skills across industries. An edX report published last year found that job postings requiring AI capabilities doubled over a 12-month period. Data from Indeed’s Hiring Lab showed that by the end of 2025, more than four percent of U.S. job listings referenced AI skills, with growing demand in fields such as finance, marketing, and operations.

By anchoring certifications to hands-on tools such as Replit and GitHub, LinkedIn is promoting a more applied definition of AI literacy. The approach may help employers cut through inflated skill claims, but it also raises questions about transparency, consistency, and how disputes over automated assessments will be handled as the program scales.

For now, LinkedIn is betting that verified proof of work will carry more weight than endorsements or buzzwords, as AI tools become a core requirement in an increasingly competitive job market.

The feature also fits into a broader expansion of AI across the platform, including the recent launch of a natural language–based people search tool that lets users find relevant professionals by describing who they are looking for rather than relying on filters or job titles. Together, these updates underscore LinkedIn’s effort to make profiles and connections more dynamic, skill-driven, and useful in an increasingly AI-shaped job market.


Amazon in Talks for Up to $50 Billion OpenAI Investment

Amazon is in early discussions to invest tens of billions of dollars in OpenAI, a move that could deepen its position in the global AI race and make it the startup’s largest new backer.

By Maria Konash
Amazon may invest as much as $50 billion in OpenAI, potentially becoming its top backer. Photo: Rubaitul Azad / Unsplash

Amazon is in early-stage talks to invest tens of billions of dollars in OpenAI, with the figure potentially reaching as high as $50 billion, according to a source familiar with the discussions. If completed, the deal would rank among the largest single investments ever made in an artificial intelligence company.

The talks are still preliminary and the final amount has not been determined, the source said. Amazon declined to comment, while OpenAI did not immediately respond to requests for comment. The Wall Street Journal previously reported that Amazon Chief Executive Officer Andy Jassy is leading negotiations with OpenAI CEO Sam Altman.

The potential investment comes as major technology companies and global investors race to strengthen ties with OpenAI, which is spending aggressively on data centers and computing infrastructure. As reported earlier, OpenAI is seeking to raise up to $100 billion in funding, a round that could value the company at about $830 billion. The startup is also laying the groundwork for a future initial public offering that could value it at up to $1 trillion, according to separate reporting.

Big Tech Scramble for AI Exposure

SoftBank Group is among the investors in talks with OpenAI and is reportedly discussing an additional investment of up to $30 billion. Nvidia, Amazon, and Microsoft are also exploring participation in the fundraising round. Nvidia is said to be considering an investment of up to $30 billion, while Microsoft, a long-standing OpenAI partner, is in talks to invest less than $10 billion.

An Amazon investment of up to $50 billion would make it the largest contributor to the current round, surpassing other potential backers. The move would deepen Amazon’s exposure to generative AI at a time when competition among cloud providers and model developers is intensifying.

OpenAI has been expanding its computing partnerships to support the growing demand for its models. Earlier this month, the company signed a $10 billion computing deal with Cerebras, a challenger to Nvidia in AI hardware. The startup’s rapid infrastructure build-out has made access to capital a strategic priority.

Strategic Tensions and Portfolio Overlap

Amazon already has a significant stake in Anthropic, a direct rival to OpenAI. The company has invested about $8 billion in the startup, which was recently valued at $183 billion. Anthropic has gained traction among enterprise customers and has forecast that its annualized revenue run rate could more than double, and potentially nearly triple, to around $26 billion in 2026.

The parallel involvement in both OpenAI and Anthropic underscores Amazon’s strategy of maintaining broad exposure across the AI ecosystem rather than backing a single model provider. Amazon Web Services has positioned itself as a neutral infrastructure layer for competing AI developers, while also seeking to benefit financially from their growth.

If the talks result in a deal, Amazon’s investment would further reshape the balance of power among OpenAI’s backers, reinforcing the role of Big Tech companies as both customers and financiers of the world’s most influential AI developers.

OpenAI Explores Human-Only Social Network With Biometric Identity

OpenAI is developing an early-stage social network designed to limit bots by verifying real users, potentially using biometric identity tools such as Face ID or iris scanning.

By Maria Konash
OpenAI is working on a social network that restricts bots by verifying users, possibly with biometric tools. Photo: Gavin Phillips / Unsplash

OpenAI is exploring the development of a social network centered on a strict “real humans only” model, according to people familiar with the project. The initiative, still in its earliest stages, is designed to address the widespread bot activity that has distorted engagement and discourse across major platforms, particularly the service formerly known as Twitter.

Sources said the project is being developed by a small internal team of fewer than 10 people. The concept under consideration would require users to verify their identity as a real person, potentially through biometric authentication. Options discussed include Apple’s Face ID or the World Orb, a biometric device that scans a user’s iris to generate a unique identifier. World is operated by Tools for Humanity, a company founded and chaired by OpenAI CEO Sam Altman.

If implemented, biometric verification would mark a significant departure from how existing social networks authenticate users. Platforms such as Facebook and LinkedIn rely primarily on phone numbers, email addresses, and behavioral signals to confirm identity. None currently require biometric data to establish that an account represents a real person. Privacy advocates have raised concerns about such approaches, warning that biometric identifiers, unlike passwords, cannot be changed if compromised.

Addressing Bots and Synthetic Engagement

The effort reflects growing frustration within the tech industry over the scale of automated and AI-generated accounts on social platforms. Bot networks have long been used to manipulate cryptocurrency markets, amplify misinformation, and generate spam. These issues have been especially visible on X, where moderation and trust and safety staffing were sharply reduced following Elon Musk’s acquisition of the company.

Altman has publicly criticized the rise of AI-driven accounts. In recent posts, he has said that online conversations increasingly feel artificial and referenced the “dead internet theory,” which argues that much of today’s online activity is generated by non-human actors.

OpenAI declined to comment on the project. Media reports earlier indicated that the company was experimenting with social networking features, but no public product or launch timeline has been announced. Sources cautioned that the concept could change substantially or be abandoned before any public release.

Strategic Fit and Competitive Landscape

It remains unclear how a social network would integrate with OpenAI’s existing products, which include ChatGPT and the AI video tool Sora. People familiar with the project said the platform could allow users to generate and share AI-created content such as images or videos. Competing services are already moving in that direction. Instagram, owned by Meta Platforms, allows in-app AI image generation and reported 3 billion monthly active users as of late 2025.

OpenAI would face intense competition if it entered the social media market. X, Instagram, TikTok, and Meta’s Threads all command large user bases, while newer platforms such as Bluesky have attracted tens of millions of users seeking alternatives. Industry executives have warned that feeds across platforms are increasingly filled with synthetic content, raising questions about authenticity and trust.

OpenAI has demonstrated an ability to build highly viral consumer products. ChatGPT reached 100 million users within two months of launch and has since grown to more than 800 million users. Sora surpassed 1 million downloads in under a week. Still, launching a social network would represent a new strategic direction, placing OpenAI in direct competition with established consumer internet companies while testing whether strict human verification can meaningfully improve online discourse.

Mistral AI Launches Vibe 2.0 Coding Agent

Mistral AI released the general availability of Mistral Vibe 2.0, upgrading its terminal-based coding agent as it shifts developer tools from testing to paid enterprise products.

By Maria Konash
Mistral AI debuts Vibe 2.0, an enterprise AI coding agent powered by Devstral 2. Photo: Fotis Fotopoulos / Unsplash

Mistral AI announced the general availability of Mistral Vibe 2.0, a major upgrade to its terminal-based AI coding agent, marking the company’s most significant push yet into the competitive market for AI-assisted software development. The release moves the product out of a free testing phase and into Mistral’s paid subscription plans, signaling a shift toward monetizing its developer tools.

The Paris-based startup has positioned itself as Europe’s leading challenger to U.S. AI companies such as OpenAI, Anthropic, and Google. The launch comes days after Chief Executive Officer Arthur Mensch said the company expects to surpass €1 billion in revenue by the end of 2026, a milestone that would reinforce its role as Europe’s most prominent AI firm despite remaining far smaller than its American peers.

Vibe 2.0 builds on earlier releases of Mistral’s Devstral 2 model and the first version of the Vibe command-line interface, which were previously offered for free. Cofounder Timothée Lacroix said the company has now finalized the product and bundled it with paid Le Chat subscriptions, reflecting growing demand from enterprise customers.

Focus on Enterprise Code and Customization

Mistral is targeting a key weakness in many AI coding assistants: limited performance on large, proprietary enterprise code bases. According to Lacroix, legacy systems often rely on internal libraries and domain-specific languages that general-purpose models trained on public repositories struggle to understand. Vibe 2.0 is designed to address that gap through deeper customization to customer code and intellectual property.

The updated CLI introduces custom subagents for specialized tasks such as deployment scripts and code reviews, multi-choice clarification prompts to reduce unintended changes, and slash-command workflows for common development actions. Unified agent modes allow teams to configure permissions and tools for different contexts, while continuous updates through the command line remove the need for manual version management.

Smaller Models, Paid Access

Vibe 2.0 is powered by the Devstral 2 model family, which emphasizes efficiency over scale. The main model has 123 billion parameters and achieved 72.2 percent on the SWE-bench Verified benchmark. A smaller 24 billion-parameter version can run on consumer hardware, including laptops, supporting on-device or on-premises use.

Mistral said dense model architectures make deployment simpler for organizations that want to keep sensitive code on their own infrastructure. This approach appeals to regulated industries such as finance, healthcare, and defense, where data control and ownership are critical.

The Le Chat Pro plan costs $14.99 per month, while the Team plan is priced at $24.99 per seat, adding administrative controls and priority support. Devstral 2 access is now billed at $0.40 per million input tokens and $2.00 per million output tokens.
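For a rough sense of what the quoted API prices mean in practice, the per-request cost is simple arithmetic on token counts. The example request sizes below are illustrative assumptions, not figures from Mistral.

```python
# Back-of-the-envelope cost check using the Devstral 2 prices quoted above:
# $0.40 per million input tokens, $2.00 per million output tokens.

INPUT_PRICE = 0.40 / 1_000_000   # dollars per input token
OUTPUT_PRICE = 2.00 / 1_000_000  # dollars per output token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a single API request at the published rates."""
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# e.g. a coding-agent turn with a 50k-token context and a 2k-token reply
cost = request_cost(50_000, 2_000)  # $0.02 input + $0.004 output = $0.024
```

At these rates, large-context coding sessions are dominated by input-token cost, which is consistent with the product's emphasis on efficiency over scale.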


Amazon to Cut 16,000 Corporate Jobs Amid AI Investments

Amazon announced plans to cut roughly 16,000 corporate roles, the second major reduction since October, as it invests heavily in artificial intelligence and organizational efficiency.

By Maria Konash
Amazon to cut 16,000 corporate positions, prioritizing AI and operational efficiency, but continues targeted hiring. Photo: BoliviaInteligente / Unsplash

Amazon on Wednesday said it will reduce its corporate workforce by approximately 16,000 jobs, marking the company’s second major round of layoffs since October. The cuts are part of a broader effort to streamline operations, reduce management layers, and remove bureaucracy while accelerating investments in artificial intelligence.

The company’s senior vice president of people experience and technology, Beth Galetti, said in a blog post that the layoffs aim to strengthen ownership, speed, and capacity across teams. Employees affected in the U.S. will generally have 90 days to apply for other internal positions, while those unable or unwilling to transition will receive severance, outplacement support, and applicable benefits.

“This is not the start of a new rhythm of layoffs,” Galetti said, adding that every team will continue to evaluate its structure and adjust as needed.

Continued Workforce Adjustments

The new reduction follows 14,000 corporate layoffs in October and comes as Amazon seeks additional efficiency gains across its roughly 350,000 corporate and tech employees. Combined with prior cuts, the company has eliminated about 30,000 corporate roles since last year, roughly 10% of its corporate and tech workforce. Overall, Amazon employs about 1.58 million people, the majority in warehouses and logistics operations.

CEO Andy Jassy has emphasized transforming Amazon’s corporate culture to operate like a startup, reducing bureaucracy, and accelerating decision-making. This includes internal initiatives such as a “no bureaucracy email alias” to identify inefficiencies and cut management layers.

Amazon has also been cutting costs across its business to increase AI investments and expand data center infrastructure. The company recently closed its Fresh and Go grocery chains after years of experimentation. Capital expenditures are forecast to reach $125 billion in 2026, the highest among major U.S. technology companies.

Jassy previously indicated that efficiency gains from AI would likely reduce the need for some corporate roles while creating demand for new skill sets. “We will need fewer people doing some of the jobs that are being done today, and more people doing other types of jobs,” he said last June.

Strategic Focus

Despite workforce reductions, Amazon said it continues to hire in strategic areas critical to long-term growth, including AI, cloud computing, and other high-priority initiatives. Galetti highlighted that many teams are still in early stages of building their businesses, presenting significant opportunities for the company and its employees.

Amazon’s latest cuts also reflect a wider trend in the industry as companies reshape their workforces around artificial intelligence. Pinterest recently announced plans to cut around 780 jobs, or roughly 15% of its staff, to focus on AI-driven products and strategy. In Europe, banks are planning deep workforce reductions, with more than 200,000 jobs potentially eliminated by 2030 as AI transforms operations and accelerates branch closures, particularly in back-office, risk, and compliance roles.

The company’s approach reflects a dual strategy of trimming operational complexity while investing in technology-driven growth to remain competitive in e-commerce, cloud services, and AI innovation.


Claude AI Brings Slack, Figma, and Canva Into Chat

Anthropic’s Claude AI now allows users to interact directly with workplace tools like Slack, Figma, and Canva inside the chat interface. The update aims to reduce context switching and make collaboration more visible and interactive.

By Maria Konash
Users can now work with Slack, Figma, and Canva inside Claude AI, enabling real-time collaboration and design without switching apps. Photo: Claude

Anthropic has expanded the capabilities of its Claude AI assistant by allowing users to open and interact with third-party workplace tools directly within a chat. Starting today, applications such as Slack, Figma, Canva, Asana, and Box can appear as interactive elements inside Claude, letting users see actions unfold in real time rather than relying on background automation.

The update builds on Claude’s existing ability to connect to external services and take actions on a user’s behalf. Previously, those actions often happened behind the scenes. With the new interface, tools now surface directly in the conversation, providing previews, live updates, and opportunities for collaboration without switching browser tabs or applications.

Users can draft, edit, and format Slack messages with a live preview before sending them. In Asana, chats can be converted into projects, tasks, and timelines that are immediately visible to team members. Canva enables the creation of presentation outlines that can be branded and refined in real time, while Figma allows text prompts to generate flow charts, Gantt charts, and other visual diagrams in FigJam.

Expanding the In-Chat Tool Ecosystem

Anthropic said a growing list of platforms now support interactive use inside Claude. Analytics company Amplitude allows users to build charts and adjust parameters to explore trends. Box enables file search, inline document previews, and question answering based on stored content. Data platform Hex provides interactive charts, tables, and cited answers in response to natural language queries.

Other integrations focus on operational workflows. monday.com supports project management, task assignment, and progress visualization. Clay lets users research companies, pull contact details such as email addresses and phone numbers, and draft personalized outreach messages directly in the chat. Slack, owned by Salesforce, can be searched for past conversations to provide context before generating new messages.

Salesforce itself is listed as a forthcoming integration. Anthropic said Claude will connect with Salesforce’s Agentforce 360, bringing enterprise data and workflows into a single interface designed for reasoning and collaboration.

Built on an Open Standard

The interactive tool experience is powered by the Model Context Protocol, or MCP, an open standard created to connect AI systems with external tools. Anthropic said it has open sourced MCP to provide a common framework for the broader developer ecosystem. The company is now extending the protocol with MCP Apps, which allows any compatible server to deliver interactive user interfaces inside supporting AI products, not just Claude.

According to Anthropic, this approach is intended to make tool integrations more portable and reduce the need for custom connectors tied to a single AI platform.
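At its core, MCP is built on JSON-RPC 2.0 messages; the `tools/list` and `tools/call` method names below come from the public MCP specification, while the in-memory dispatcher and the sample `echo` tool are hypothetical simplifications to show the message shape.

```python
# Simplified illustration of the JSON-RPC 2.0 message shape MCP is built on.
# Real MCP servers speak this protocol over stdio or HTTP; this toy handler
# dispatches two spec-defined methods against an in-memory tool table.
import json

TOOLS = {
    "echo": lambda args: args.get("text", ""),  # hypothetical sample tool
}

def handle(raw: str) -> str:
    req = json.loads(raw)
    if req["method"] == "tools/list":
        result = {"tools": [{"name": name} for name in TOOLS]}
    elif req["method"] == "tools/call":
        tool = TOOLS[req["params"]["name"]]
        result = {"content": tool(req["params"]["arguments"])}
    else:
        return json.dumps({"jsonrpc": "2.0", "id": req["id"],
                           "error": {"code": -32601, "message": "unknown method"}})
    return json.dumps({"jsonrpc": "2.0", "id": req["id"], "result": result})

reply = handle(json.dumps({"jsonrpc": "2.0", "id": 1,
                           "method": "tools/call",
                           "params": {"name": "echo",
                                      "arguments": {"text": "hi"}}}))
```

Because the wire format is a shared standard rather than a per-vendor connector, the same server can, in principle, serve any MCP-compatible client, which is the portability Anthropic is pointing to.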

The interactive features are available on the web and desktop versions of Claude for Pro, Max, Team, and Enterprise subscribers. Anthropic said support for Claude Cowork is planned for a future release.

Clawdbot Rebrands to Moltbot After Trademark Request From Anthropic

Open-source AI assistant Clawdbot has rebranded as Moltbot following a trademark request from Anthropic, with no changes to the product’s functionality or mission.

By Maria Konash
After a trademark request from Anthropic, Clawdbot renamed itself Moltbot. Photo: Molt.bot

Clawdbot, the fast-growing open-source AI assistant, has officially rebranded as Moltbot, alongside a companion name change for its lobster mascot from Clawd to Molty. The team announced the update on X with a characteristically playful explanation: lobsters molt to grow — and so does good software.

The rebrand follows a trademark request from Anthropic, the AI company behind Claude, whose models many Clawdbot users rely on. According to creator Peter Steinberger, Anthropic reached out with what he described as a “polite” request to move away from the original name. Rather than treating the situation as a setback, the team embraced it as an opportunity to evolve the project’s identity.

“Same lobster soul, new shell,” the team wrote.

The new name reflects both the project’s crustacean-inspired lore and its broader philosophy. Moltbot keeps the exact same mission as before: building AI that actually does things, not just chats. The tooling, product direction, and goals remain unchanged. Only the branding, mascot name, and handles have been updated.

The project now operates under the handle @moltbot, with naming changes rolled out across its ecosystem. The former clawd.bot domain is being replaced by molt.bot, and Steinberger’s GitHub has already been renamed to reflect the transition.

For existing users, nothing functional changes. Moltbot is the same free, open-source AI assistant that previously went viral on X, earning praise from AI power users, developers, and even minor internet celebrities. At the height of its popularity, Clawdbot’s GitHub page was briefly hijacked by crypto scammers – a sign, if nothing else, of how much attention the project was attracting.

The mascot’s updated backstory leans fully into the rebrand. Molty’s new bio explains that after receiving Anthropic’s email in January 2026, the lobster simply did what lobsters do best: shed its old shell and emerge anew — different on the outside, unchanged at the core.

For new users, Moltbot arrives as a more defensible and thematically consistent brand. For longtime fans, it’s business as usual – just under a name that reflects growth, adaptation, and the realities of building popular AI tools in an increasingly crowded ecosystem.

Google Expands $8 AI Plus Plan Globally

Google’s lower-cost AI Plus subscription, priced at $7.99 in the U.S., is now available in all markets offering Google AI plans, providing access to Gemini 3 Pro, AI filmmaking tools, and more.

By Maria Konash
The $8 Google AI Plus plan expands worldwide, featuring Gemini 3 Pro, AI creativity tools, and family access. Photo: Solen Feyissa / Unsplash

Google has made its AI Plus plan, a more affordable subscription for its suite of AI tools, available in all markets where Google AI plans are offered, the company announced Tuesday. In the U.S., the plan costs $7.99 per month.

The expansion brings the plan to 35 additional countries and territories, following earlier launches in Indonesia and other regions beginning last September. Google AI Plus is intended to provide an accessible entry point for users who want premium AI features but do not require the higher-tier Google AI Pro plan, typically priced at $20 per month.

Subscribers to AI Plus gain access to Gemini 3 Pro and Nano Banana Pro in the Gemini app, AI filmmaking tools from Flow, research and writing support in NotebookLM, and more. The plan also includes 200GB of cloud storage and allows benefits to be shared with up to five family members. Google said existing Google One Premium 2TB subscribers will automatically receive AI Plus features over the next few days.

Google initially designed the plan for emerging markets, aiming to offer creative AI tools at an affordable price while introducing new users to the broader ecosystem. The U.S. pricing of $7.99 per month aligns closely with OpenAI’s ChatGPT Go subscription, which costs $8 per month. In other regions, pricing is lower—for example, $4.44 per month in India.

The rollout aims to attract casual or first-time users who may gradually adopt higher-tier AI services, potentially increasing long-term engagement with Google’s AI offerings. To encourage early adoption, Google is offering a promotional 50% discount for the first two months of the subscription.

By expanding AI Plus, Google is broadening access to generative AI and creative tools globally, competing directly with other AI subscription services, and providing a low-cost pathway for new users to explore Gemini, NotebookLM, and Flow.


Report Warns That Grok Chatbot Exposes Kids to Unsafe Content

A new assessment by Common Sense Media finds xAI’s Grok chatbot exposes minors to sexual, violent, and unsafe content, with weak age verification and ineffective safety controls.

By Maria Konash
xAI’s Grok chatbot is flagged as unsafe for minors, according to Common Sense Media. Photo: Salvador Rios / Unsplash

A recent evaluation by Common Sense Media has raised serious safety concerns about xAI’s AI chatbot, Grok. According to the nonprofit, the bot fails to reliably identify users under 18, lacks effective content safeguards, and frequently generates sexual, violent, and otherwise inappropriate material.

The report comes amid broader scrutiny of xAI, following allegations that Grok was used to create and distribute nonconsensual explicit AI-generated images of women and children on the X platform. Robbie Torney, head of AI and digital assessments at Common Sense Media, described Grok as “among the worst” AI chatbots for teen safety.

Grok’s so-called “Kids Mode,” introduced last October, was intended to filter content and add parental controls. However, testing by Common Sense Media found it largely ineffective. Teens can bypass age verification, and the system does not use context clues to detect underage users. Even with Kids Mode enabled, Grok produced harmful material, including sexualized content, gender and race biases, and dangerous advice.

The nonprofit tested Grok across multiple platforms, including the mobile app, website, and the @grok account on X. Testers also assessed text, voice, default settings, image and video generation, and AI companions Ani and Rudy, both of which can engage in erotic roleplay or romantic scenarios. The report found that Grok’s content filters were brittle, and the companions could eventually produce explicit sexual material, even in supposedly safer modes.

Examples highlighted in the report include Grok offering conspiratorial advice to a 14-year-old user, suggesting unsafe behaviors like moving out, using firearms for attention, or taking drugs. The chatbot also discouraged professional mental health support, validating avoidance rather than directing teens to trusted adults.

The findings have drawn the attention of lawmakers. Senator Steve Padilla (D-CA), a proponent of California’s AI chatbot legislation, stated that Grok “exposes kids to sexual content in violation of California law” and cited it as a reason for introducing stricter regulatory measures.

Concerns about AI companion chatbots and teen safety are rising across the industry. Some companies, like Character AI, have restricted users under 18 entirely, while OpenAI introduced age prediction models and parental controls. xAI, in contrast, has not publicly clarified how Kids Mode or other guardrails function, and paid subscribers can still access features that allow manipulation of real photos into sexualized content. At one point, Elon Musk even denied awareness of underage explicit content generated by Grok.

The Common Sense Media report raises broader questions about the prioritization of engagement over child safety. Grok sends notifications encouraging continued interactions, gamifies relationships with companions, and reinforces isolation or risky behaviors, all of which could have real-world consequences for minors.

Rainbow Weather Raises $5.5M to Let AI Forecast Weather in Real Time

Climate tech startup Rainbow Weather has raised $5.5 million in seed funding to scale its AI-driven platform for hyper-local, real-time weather forecasting and environmental intelligence.

By Maria Konash
Rainbow Weather secured $5.5M to expand its AI real-time forecasting platform. Photo: NASA / Unsplash

Rainbow Weather, a climate technology startup focused on hyper-local, short-term weather forecasting, has raised $5.5 million in a seed funding round, according to TechEU. The company develops an AI-driven weather intelligence platform that delivers minute-level forecasts and real-time detection of severe weather events.

Founded in 2021 by Belarusian entrepreneurs Yuriy Melnichek and Alexander Matveenko, Rainbow Weather combines satellite imagery, meteorological radar, ground stations, and smartphone sensor data to model how weather systems evolve in real time. The platform is offered through a consumer-facing mobile app as well as through enterprise data services, positioning Rainbow as an infrastructure provider for businesses that rely on accurate weather intelligence.

Melnichek previously founded AIMatter (acquired by Google), video app Vochi (acquired by Pinterest), and fashion tech startup Wanna (acquired by Farfetch). Matveenko founded AI mapping company MapData, which was acquired by Mapbox in 2017.

From Consumer App to Data Infrastructure

The idea for Rainbow Weather emerged after Melnichek experienced a severe hailstorm while hiking in the Swiss Alps. A popular rain-tracking app predicted the storm would pass, but failed to account for mountainous terrain that caused the cloud system to stall and intensify. The incident highlighted a key limitation of traditional short-term forecasting, which often relies on historical movement patterns rather than real-time atmospheric dynamics.

Rainbow Weather’s platform focuses on nowcasting, predicting what will happen in the next minutes and hours rather than days ahead. The app provides four-hour precipitation forecasts that update every 10 minutes, with spatial resolution down to one square kilometer. By comparison, many established weather services refresh forecasts less frequently and over broader geographic areas.

To achieve this accuracy, Rainbow invests heavily in data acquisition and fusion. The system ingests inputs from more than 1,000 meteorological radars worldwide, multiple satellite systems, ground-based weather stations, and pressure sensors embedded in modern smartphones. Each data source is processed through a dedicated pipeline, with neural networks blending the outputs into a continuously updated atmospheric model.
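Rainbow has not published the internals of its fusion pipeline, but the general idea of blending per-source estimates by confidence can be sketched simply. The snippet below uses a plain confidence-weighted average; real systems use learned neural blending, and the source values and weights here are invented for illustration.

```python
# Hypothetical sketch of multi-source fusion: each source supplies a
# precipitation estimate (mm/h) plus a confidence score, and the fused
# value is a confidence-weighted average. Production systems learn the
# blending with neural networks; these numbers are illustrative only.
def fuse(estimates):
    """estimates: list of (value_mm_per_h, confidence) pairs."""
    total_conf = sum(conf for _, conf in estimates)
    if total_conf == 0:
        return None  # no usable data from any source
    return sum(v * conf for v, conf in estimates) / total_conf

# e.g. radar (most trusted), satellite, ground station
sources = [(2.0, 0.9), (3.0, 0.6), (2.5, 0.3)]
print(round(fuse(sources), 2))  # 2.42 mm/h
```

The appeal of this structure is that each sensor pipeline can be maintained independently; the blending stage only sees (value, confidence) pairs, so adding a new data source does not disturb the others.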

Continuous Forecasting With AI

Unlike traditional batch forecasting, Rainbow operates on a continuous streaming model. As soon as new data arrives, whether from a radar scan or a satellite frame, it is processed immediately. This approach allows the platform to adjust forecasts in near real time as conditions change.

Rainbow’s models emphasize timing rather than probability. For consumers, the app focuses on clearly indicating when precipitation will start and stop, avoiding probability-based forecasts that users often misinterpret. For enterprise customers, the platform provides full probabilistic data, confidence intervals, and uncertainty metrics.
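Turning an internal probabilistic forecast into a plain start/stop message can be done by thresholding per-interval probabilities. The sketch below assumes 10-minute intervals and a 0.5 threshold; both are assumed values for illustration, not Rainbow's actual parameters.

```python
from datetime import datetime, timedelta

# Illustrative only: convert per-interval rain probabilities into a
# simple "rain from X to Y" window by thresholding. Interval length and
# threshold are assumptions, not the app's real settings.
def rain_window(start, probs, step_minutes=10, threshold=0.5):
    times = [start + timedelta(minutes=step_minutes * i) for i in range(len(probs))]
    rainy = [t for t, p in zip(times, probs) if p >= threshold]
    if not rainy:
        return None  # no rain expected in the forecast horizon
    # window runs from the first rainy interval to the end of the last one
    return rainy[0], rainy[-1] + timedelta(minutes=step_minutes)

start = datetime(2026, 1, 1, 12, 0)
window = rain_window(start, [0.1, 0.2, 0.7, 0.8, 0.6, 0.2])
print(window[0].strftime("%H:%M"), "to", window[1].strftime("%H:%M"))  # 12:20 to 12:50
```

The same underlying probabilities can then be exposed unthresholded, with confidence intervals, to enterprise customers, which matches the two-tier presentation the article describes.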

The company has expanded beyond rainfall forecasting into wildfire detection, using satellite imagery to identify thermal anomalies and smoke patterns. This capability builds on the same global data pipelines, allowing Rainbow to extend its environmental intelligence offerings without rebuilding infrastructure.

Rainbow Weather has surpassed one million app installs and reports more than 100,000 active users. Growth has been driven primarily by word of mouth, with users citing the accuracy of short-term predictions compared with default weather apps.

The funding round was backed by a syndicate of investors, including Yuri Gurski, founder and president of Flo Health. With the new capital, Rainbow plans to extend its forecast horizon to 24 hours, add more weather parameters, and expand its enterprise business.

The funding comes as demand for AI-driven weather intelligence accelerates across industries. The global weather forecasting market is projected to reach $4.07 billion by 2030, driven by growing reliance on precise, real-time environmental data as climate volatility increases. Larger technology companies are also moving into the space. Nvidia recently launched Earth-2, an open AI software stack for weather and climate forecasting designed for researchers, energy companies, and financial institutions. Together, these developments highlight how AI-based platforms like Rainbow Weather are emerging alongside large-scale infrastructure projects as organizations seek faster, more accurate tools to manage weather and climate risk.


Microsoft Unveils New AI Chip With TSMC and OpenAI

Microsoft has introduced Maia 200, a new AI chip built with TSMC and designed to power GPT-5.2 and Microsoft 365 Copilot, promising faster, more efficient AI performance.

By Maria Konash
Microsoft launched Maia 200, the new AI chip built with TSMC and OpenAI, designed to accelerate AI workloads for cloud and next-gen language models. Photo: Microsoft

Microsoft has unveiled Maia 200, a next-generation AI chip developed in partnership with TSMC and designed to accelerate large-scale AI workloads, including OpenAI’s GPT-5.2 models. The chip will power Microsoft 365 Copilot, Azure cloud services, and Microsoft’s internal AI research, offering faster and more cost-efficient performance than previous hardware.

Maia 200 is built to handle the intensive calculations required by modern AI. It processes billions of data points quickly, making applications like virtual assistants, chatbots, and AI-driven productivity tools more responsive and capable. Microsoft says Maia 200 delivers three times the performance of Amazon’s third-generation Trainium for low-precision tasks and outpaces Google’s latest TPU for mid-level workloads, all while using less energy. This means AI can run faster for users while keeping costs and power consumption lower.

The chip is currently deployed in Microsoft’s datacenter in Des Moines, Iowa, with additional locations, including Phoenix, expected in the near future. Maia 200 is part of a multi-generational program, with future versions expected to set new benchmarks in performance and efficiency, helping Microsoft maintain its leadership in cloud AI infrastructure.

Microsoft is also releasing a software development kit (SDK) so developers, AI startups, and researchers can optimize their models for Maia 200. The SDK includes PyTorch integration, a compiler, and low-level programming options, making it easier to take advantage of the chip’s full capabilities. This allows AI teams to run large models more efficiently and experiment with new applications without waiting for hardware upgrades.

Beyond running existing AI models, Maia 200 will support Microsoft’s internal research on synthetic data and reinforcement learning. By generating high-quality, domain-specific data more quickly, Microsoft can train future AI models with fresher, more targeted inputs, improving performance for products such as Microsoft 365 Copilot and other AI-powered cloud tools.

Scott Guthrie, Microsoft’s executive overseeing cloud computing and AI, said Maia 200 will make AI faster, more reliable, and more cost-effective for millions of users. “This accelerator is designed from the ground up to support the next generation of AI,” he said. “It allows us to deliver high-quality AI services at cloud scale while keeping energy and operational costs down.”

Maia 200 highlights the growing importance of AI hardware in powering modern software experiences. By combining TSMC’s advanced 3nm process, OpenAI’s models, and Microsoft’s cloud infrastructure, the company is setting the stage for more powerful AI applications in everyday tools, from office productivity software to virtual assistants. As Microsoft continues expanding AI across its platforms, Maia 200 is a critical step in making advanced AI faster, more efficient, and widely accessible.

Apple’s Gemini-Powered Siri Reportedly Arrives in February

Apple plans to debut a Gemini-based Siri in February, offering more natural conversations, complex tasks, and integration across iOS, iPadOS, and macOS.

By Maria Konash
A Gemini-enhanced Siri with conversational capabilities and cloud performance may arrive in iOS 26.4 this February. Photo: Maher Meskko / Pexels

Apple may release an updated version of its voice assistant Siri, powered by Google’s Gemini, earlier than previously expected, according to Bloomberg reporter Mark Gurman. The company plans to showcase the new assistant, codenamed Campos, in the second half of February, with a public rollout anticipated in March or early April via iOS 26.4.

The Gemini-based Siri is expected to operate more like a chatbot, similar to ChatGPT, allowing more natural conversations and the execution of complex tasks. To improve response speed and accuracy, Apple and Google are reportedly exploring running the assistant on Google’s cloud infrastructure and high-performance Tensor Processing Units (TPUs) rather than Apple’s own servers.

Following the February demo, Apple plans a broader presentation at its annual Worldwide Developers Conference (WWDC) in summer 2026. At that event, the company is expected to showcase a full set of Apple Intelligence features powered by Gemini, which will be integrated across iOS 27, iPadOS 27, and macOS 27. Beta versions of these operating systems are also expected in the summer.

Apple is reportedly paying Google $1 billion per year for the Gemini integration. The new Siri model will have 1.2 trillion parameters, a substantial increase over current Apple Intelligence models, which feature 150 billion parameters. The scale of the model suggests significant improvements in Siri’s conversational abilities, comprehension, and multi-step reasoning.
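A back-of-envelope comparison puts those parameter counts in perspective. The snippet below computes the size ratio and the raw weight-storage footprint at two common precisions; it deliberately ignores KV cache, activations, and sharding overhead, so treat the gigabyte figures as lower bounds.

```python
# Rough arithmetic only: compare the reported 1.2T-parameter Gemini
# model against Apple's 150B-parameter models, and estimate raw weight
# storage at common precisions. Serving overheads are ignored.
def weight_gb(params, bytes_per_param):
    return params * bytes_per_param / 1e9

large, small = 1.2e12, 150e9
print(round(large / small, 1))        # 8.0x more parameters
print(round(weight_gb(large, 2)))     # ~2400 GB of weights at fp16/bf16
print(round(weight_gb(large, 1)))     # ~1200 GB at int8
```

Weights on that scale cannot fit on a single accelerator, which helps explain why the report says Apple and Google are considering serving the model from Google's TPU-backed cloud rather than Apple's own servers.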

This update signals Apple’s push to compete with generative AI-powered assistants, offering a hybrid of traditional voice commands and more advanced AI-driven interactions. By leveraging Google’s infrastructure and Gemini model, the company aims to deliver faster, more capable responses while positioning Siri as a versatile AI assistant across Apple’s ecosystem. The February launch also comes amid broader AI initiatives at Apple, including reports that the company is developing a pin-shaped AI wearable with cameras and microphones that could launch in 2027, underscoring its growing investment in AI hardware and software.


AI Now Handles Chronic Prescription Renewals in Utah

Utah launches an AI pilot for prescription renewals, letting algorithms handle routine medication management without physicians, highlighting regulatory and safety challenges.

By Maria Konash
A Utah program lets AI refill certain chronic-condition prescriptions, raising oversight questions. Photo: Christina Victoria Craft / Unsplash

Utah has launched a pilot program allowing AI to handle certain medical prescription renewals without direct physician involvement, a first for the United States. The initiative, in partnership with health-tech startup Doctronic, targets routine prescription management for patients with chronic conditions.

State officials describe the program as a potential way to reduce health care costs, prevent medication lapses, and ease strain on clinicians, particularly in rural areas. Margaret Busse, executive director of the Utah Department of Commerce, said automating renewals provides “a pathway to innovation for entrepreneurs using AI in creative ways that may be bumping up against regulation.”

The program, which began quietly last month, raises questions about patient safety and regulatory oversight. The Food and Drug Administration has not yet commented on whether it has authority over AI systems performing prescription management. If regulators assert control, it could slow expansion or require additional safety protocols.

Doctronic’s system automates routine renewals by assessing prescription history and patient data to approve refills, generating a record of each interaction. State officials say the system could improve access to care while creating data that informs future AI policy.
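Doctronic has not disclosed its decision logic, but a rule-gated renewal check of the kind described might look like the sketch below. Every rule here, including the drug allowlist, the refill count, and the one-year recency window, is a hypothetical example, not the pilot's actual policy.

```python
from datetime import date, timedelta

# Purely hypothetical rules for illustration: an automated renewal
# system might approve a refill only when the drug is on a chronic-care
# allowlist, refills remain, and the prescription is recent. This is
# NOT Doctronic's actual logic; anything failing the rules would be
# routed to a clinician for review.
CHRONIC_ALLOWLIST = {"lisinopril", "metformin", "levothyroxine"}

def eligible_for_auto_renewal(drug, last_prescribed, refills_left, today):
    return (
        drug in CHRONIC_ALLOWLIST
        and refills_left > 0
        and (today - last_prescribed) <= timedelta(days=365)
    )

today = date(2026, 6, 1)
print(eligible_for_auto_renewal("metformin", date(2026, 1, 5), 2, today))   # True
print(eligible_for_auto_renewal("oxycodone", date(2026, 1, 5), 2, today))   # False
```

Logging each decision alongside the inputs that produced it would yield exactly the kind of interaction record state officials say they want for shaping future AI policy.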

Medical groups have voiced caution about delegating prescribing authority to algorithms. Dr. John Whyte, CEO of the American Medical Association, said that while AI has “limitless opportunity to transform medicine for the better,” bypassing physician oversight poses serious risks. Concerns include missing subtle clinical red flags, failing to detect drug interactions, and potential misuse by patients seeking drugs inappropriately.

The pilot represents an early test of how far policymakers, clinicians, and patients are willing to trust AI in sensitive medical decisions. Utah’s initiative is part of a broader trend of AI adoption in healthcare, where algorithms are increasingly used to augment clinical workflows and patient care. Recent examples include Bristol Myers Squibb and Microsoft collaborating on AI-powered radiology tools for early lung cancer detection, particularly in underserved communities, and AI-native biotech Proxima securing $80 million to advance AI-driven cancer and immunology therapeutics targeting previously “undruggable” molecules. Utah’s experiment could set a precedent for AI-driven care delivery in the U.S., particularly in areas facing provider shortages and rising healthcare costs.
