Anthropic Launches Claude Design for AI-Powered Visual Creation

Anthropic has introduced Claude Design, a new tool for creating designs, prototypes, and presentations using AI. The product targets both designers and non-designers.

By Daniel Mercer Edited by Maria Konash

Anthropic has launched Claude Design, a new product that enables users to create visual assets such as prototypes, presentations, and marketing materials through natural language collaboration. The tool is powered by Claude Opus 4.7 and is rolling out in research preview to Claude Pro, Max, Team, and Enterprise subscribers.

The release marks Anthropic’s expansion beyond text and coding into design workflows, positioning Claude as a broader creative assistant. Users can generate initial designs from prompts and refine them iteratively through conversation, direct edits, or interface controls generated by the model itself.

Claude Design is aimed at both professional designers and non-specialists such as product managers, founders, and marketers. The company says the tool enables faster exploration of ideas, allowing users to create multiple design directions, interactive prototypes, and production-ready assets without traditional design tooling or coding expertise.

Built for Collaborative and Iterative Design

The system is structured around a conversational workflow. Users can start from text prompts, uploaded files such as documents or presentations, or even existing codebases. Claude then generates a visual draft that can be refined through inline comments, layout adjustments, and real-time edits.

A key feature is the ability to automatically generate and apply a company’s design system. During onboarding, Claude can analyze existing design files and code to establish consistent styles, including typography, color schemes, and components. This ensures that outputs remain aligned with brand guidelines across projects.
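
Anthropic has not detailed how this analysis works internally. As a rough illustration of the general idea only, the minimal Python sketch below (a hypothetical example, not Anthropic's implementation) derives candidate design tokens from an existing stylesheet by counting recurring colors and font families:

```python
import re
from collections import Counter

# Hypothetical sketch: derive candidate design tokens from a stylesheet
# by counting recurring hex colors and font families. Not Anthropic's code.
def extract_design_tokens(css_text: str) -> dict:
    colors = Counter(re.findall(r"#[0-9a-fA-F]{6}\b", css_text))
    fonts = Counter(
        name.strip().strip('"\'')
        for decl in re.findall(r"font-family\s*:\s*([^;}]+)", css_text)
        for name in decl.split(",")
    )
    return {
        "palette": [c for c, _ in colors.most_common(5)],
        "typography": [f for f, _ in fonts.most_common(3)],
    }

css = """
body { font-family: "Inter", sans-serif; color: #1a1a2e; }
h1   { color: #0f3460; }
a    { color: #e94560; }
"""
print(extract_design_tokens(css))
# {'palette': ['#1a1a2e', '#0f3460', '#e94560'],
#  'typography': ['Inter', 'sans-serif']}
```

A production system would go well beyond frequency counting, but the output of a step like this is the kind of reusable style definition the feature promises to keep consistent across projects.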

Claude Design also supports team collaboration, with shared documents and group editing. Teams can control access levels, allowing designs to remain private, viewable, or editable within an organization. Final outputs can be exported in formats such as PDF, PPTX, HTML, or integrated into platforms like Canva.

From Concept to Production Workflow

Anthropic is positioning Claude Design as part of a broader product ecosystem. Designs can be handed off directly to development workflows through integration with coding tools, enabling a transition from prototype to implementation with minimal friction.

The platform supports a wide range of use cases, including interactive product prototypes, wireframes, pitch decks, and marketing assets. It also introduces more advanced capabilities such as generating code-driven prototypes with multimedia elements like video, voice, and 3D components.

The release reflects a broader trend among AI companies to expand into end-to-end creative and productivity tools. By combining generation, editing, and collaboration in a single interface, Claude Design aims to reduce the gap between ideation and execution.

Anthropic said the feature is included within existing subscription plans, with usage tied to model limits. Enterprise access is disabled by default and must be enabled by administrators. The company plans to expand integrations with third-party tools in the coming weeks, suggesting a continued push to embed AI more deeply into everyday creative workflows.


Nvidia Unveils Lyra 2.0 for Real-Time 3D World Generation

Nvidia has introduced Lyra 2.0, an AI model that generates explorable 3D worlds in real time. It addresses key issues like motion drift and inconsistent object rendering.

By Daniel Mercer Edited by Maria Konash
Nvidia launches Lyra 2.0, enabling real-time 3D world generation with consistent, explorable environments. Image: Nvidia

Nvidia has unveiled Lyra 2.0, a new AI model designed to generate explorable 3D environments in real time. The system enables users to move freely through AI-generated worlds while maintaining spatial consistency, addressing a major limitation in existing generative video and 3D models.

Lyra 2.0 builds on recent advances in video-based scene generation, where AI models create camera-controlled walkthroughs and convert them into 3D environments. The model allows users to navigate dynamically through these spaces, effectively rendering new parts of the world as they explore. This approach supports real-time interaction and opens the door to applications in simulation, gaming, and robotics.

A key differentiator is Lyra 2.0’s ability to maintain consistent geometry over long sequences. Competing systems often struggle with tracking motion over time, leading to visual artifacts such as shifting objects, blurring, or inconsistent scene reconstruction. Nvidia’s model is designed to overcome these issues, enabling stable navigation across complex environments without degradation.

Fixing Drift and Inconsistency in 3D Generation

Lyra 2.0 tackles two core technical challenges: motion drift and spatial inconsistency. In many generative systems, small errors accumulate as the model produces frames over time, eventually distorting the scene. At the same time, previously generated areas may be forgotten, causing the model to recreate them inaccurately when revisited.

To address this, Lyra 2.0 maintains a form of spatial memory by storing per-frame geometry. This allows the model to reference previously seen areas and preserve structural consistency. It also uses a training approach that exposes the system to its own imperfect outputs, helping it learn to correct errors instead of amplifying them.

The result is a system capable of generating longer, more coherent 3D sequences, even when users move in different directions or revisit earlier parts of the environment.
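
Nvidia has not published training details, but the described technique of exposing a model to its own imperfect outputs resembles scheduled sampling from the sequence-modeling literature. A minimal PyTorch sketch of that pattern, offered as an assumption about the general approach rather than Nvidia's actual recipe:

```python
import random
import torch

# Hypothetical scheduled-sampling-style training step: with some probability,
# condition the next-frame model on its own previous output instead of the
# ground-truth frame, so it learns to correct its errors rather than drift.
def train_step(model, frames, optimizer, own_output_prob=0.3):
    """frames: tensor of shape (T, C, H, W); model predicts the next frame."""
    loss_fn = torch.nn.MSELoss()
    prev = frames[0]
    total_loss = 0.0
    for t in range(1, frames.shape[0]):
        pred = model(prev.unsqueeze(0)).squeeze(0)   # predict frame t
        total_loss = total_loss + loss_fn(pred, frames[t])
        # Feed back the model's own (detached) frame with some probability,
        # exposing it to the imperfect states it will see at inference time.
        if random.random() < own_output_prob:
            prev = pred.detach()
        else:
            prev = frames[t]
    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
    return total_loss.item()
```

Feeding back the model's own predictions during training forces it to learn from states it will actually encounter when generating long sequences, which is precisely what limits accumulated drift.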

Real-Time Exploration and Simulation Potential

Lyra 2.0 includes an interactive interface that lets users explore generated environments freely, rather than following a fixed path. As users move, the system continuously expands the world, generating new regions while maintaining alignment with previously created structures.

The generated environments can also be exported into simulation tools such as NVIDIA Isaac Sim, making them suitable for robotics training and testing. This could reduce the time and cost required to build large-scale simulation environments.

The release highlights Nvidia’s broader push into generative AI for spatial computing. By combining video generation with 3D reconstruction and real-time interaction, Lyra 2.0 moves closer to enabling fully AI-generated virtual worlds that can be explored, manipulated, and deployed across industries.


Anthropic CEO Set for White House Meeting Over Mythos AI

Anthropic CEO Dario Amodei is expected to meet White House officials as tensions ease over its Mythos AI model. The talks signal potential renewed government collaboration.

By Maria Konash
Dario Amodei meets White House as U.S. weighs Mythos AI for cybersecurity despite Pentagon tensions. Image: David Everett Strickler / Unsplash

Anthropic CEO Dario Amodei is scheduled to meet White House Chief of Staff Susie Wiles on Friday, according to a report by Axios, signaling a possible breakthrough in the company’s dispute with the U.S. Department of Defense. The meeting comes as the Trump administration reassesses the strategic value of Anthropic’s latest AI model, Claude Mythos Preview.

The reported discussions follow a period of tension between Anthropic and the Pentagon, which had previously cut business ties with the company after a contract disagreement. Despite that setback, U.S. officials are now said to be recognizing the model’s advanced capabilities, particularly in cybersecurity contexts where it can simulate or test defense systems against sophisticated threats.

Mythos was introduced earlier this month as part of Anthropic’s “Project Glasswing,” a controlled deployment initiative that allows select organizations to access the model for defensive cybersecurity applications. The system has drawn attention for its ability to model high-level cyberattack scenarios, raising both interest and concern within government circles.

Government Interest in Advanced AI Capabilities

According to the report, the Trump administration is considering broader use of the technology across federal agencies. A separate report by Bloomberg indicated that a version of the Mythos model could be made available to major government departments, suggesting a shift toward closer collaboration despite earlier disputes.

Sources cited by Axios argue that limiting access to such advanced AI systems could undermine U.S. competitiveness, particularly against geopolitical rivals like China. The argument reflects a growing view within policy circles that frontier AI capabilities are becoming strategically important assets, especially in cybersecurity and defense.

Anthropic has not publicly commented on the reported meeting, and Reuters noted it could not independently verify the details. However, the company has previously confirmed ongoing discussions with the administration. Co-founder Jack Clark said earlier this week that conversations with government officials were continuing even after the Pentagon ended its formal relationship with the company.

From Dispute to Potential Partnership

The planned meeting suggests a potential reset in relations between Anthropic and U.S. defense stakeholders. While earlier disagreements led to a breakdown in cooperation, the renewed interest in Mythos highlights how rapidly evolving AI capabilities are reshaping government priorities.

The situation also underscores a broader tension in AI governance: balancing national security interests with concerns about misuse. Models like Mythos, designed to simulate advanced cyber capabilities, can serve both defensive and potentially offensive purposes, making controlled access and oversight critical.

If discussions lead to formal agreements, Anthropic could re-emerge as a key partner in U.S. government AI initiatives, particularly in cybersecurity. The outcome may also influence how other AI developers engage with federal agencies, as governments increasingly seek access to cutting-edge systems while navigating safety and policy constraints.

OpenAI Launches GPT-Rosalind for Biology and Drug Discovery

OpenAI has introduced GPT-Rosalind, a specialized AI model for life sciences research. The system aims to accelerate drug discovery and biological analysis workflows.

By Laura Bennett Edited by Maria Konash
OpenAI unveils GPT-Rosalind, a life sciences model accelerating drug discovery and genomics. Image: OpenAI

OpenAI has introduced GPT-Rosalind, a new domain-specific AI model designed to support research in biology, drug discovery, and translational medicine. The model is being released as a research preview through a controlled access program, reflecting both its advanced capabilities and the sensitivity of its potential applications.

GPT-Rosalind is built to address one of the most complex challenges in life sciences: the fragmented and time-intensive workflows that underpin early-stage discovery. Developing a new drug can take 10 to 15 years, with early research decisions having compounding effects on downstream outcomes. The model is designed to help scientists navigate large volumes of literature, datasets, and experimental variables more efficiently, while also generating and testing new hypotheses.

The system is available through ChatGPT, Codex, and the API for qualified enterprise users. OpenAI is also launching a Life Sciences research plugin for Codex, enabling integration with more than 50 scientific databases and tools. Early collaborators include major pharmaceutical and research organizations such as Amgen, Moderna, the Allen Institute, and Thermo Fisher Scientific.

Built for Complex Scientific Workflows

Unlike general-purpose AI models, GPT-Rosalind is optimized for reasoning across specialized domains including chemistry, genomics, protein engineering, and disease biology. It is designed to assist with multi-step research tasks such as literature review, experimental planning, sequence analysis, and data interpretation.

OpenAI reports that the model shows improved performance on benchmarks related to biochemical reasoning, including protein structure analysis, phylogenetics, and experimental design. It also demonstrates stronger ability to use external tools and databases within complex workflows, a critical requirement for real-world scientific research.

In industry evaluations, GPT-Rosalind achieved leading results on bioinformatics benchmarks such as BixBench and outperformed earlier models on several tasks in LABBench2, including molecular cloning design. In collaboration with Dyno Therapeutics, the model also ranked above most human experts on certain RNA prediction tasks.

Controlled Access and Research Integration

Given the potential risks associated with advanced biological research tools, OpenAI is deploying GPT-Rosalind through a “trusted access” model. Organizations must meet criteria related to legitimate scientific use, governance, and security controls before gaining access. The rollout initially focuses on enterprise users in the United States.

The accompanying Life Sciences plugin provides an orchestration layer for scientific workflows, connecting researchers to public datasets, literature sources, and domain-specific tools. This allows the model to move beyond static responses and actively support research processes such as protein structure lookup, sequence search, and dataset discovery.
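
OpenAI has not enumerated the plugin's call patterns, but a basic building block of any such orchestration layer is a programmatic query against a public resource. As a hedged illustration, the sketch below shows ordinary use of NCBI's public E-utilities API for a literature search, not the plugin's actual interface:

```python
import requests

# Illustration of a single orchestration step: search a public database
# (NCBI E-utilities) for records matching a research query. This is generic
# API usage, not the Life Sciences plugin's real interface.
def search_pubmed(term: str, max_results: int = 5) -> list[str]:
    resp = requests.get(
        "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
        params={"db": "pubmed", "term": term,
                "retmax": max_results, "retmode": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]  # PubMed record IDs

print(search_pubmed("AAV capsid engineering"))
```

An orchestration layer would chain many such lookups, with the model deciding which source to query next and how to interpret the results.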

OpenAI said the system was developed with enhanced security measures and is intended for use in controlled research environments. During the preview phase, usage will not consume standard API credits, though safeguards are in place to prevent misuse.

The release marks the first step in a broader effort to build AI systems tailored to scientific discovery. OpenAI says future iterations will expand the model’s capabilities for long-horizon, tool-intensive workflows, with ongoing collaborations across academia, biotech, and national laboratories aimed at advancing areas such as protein and catalyst design.


NTU Singapore Develops AI-Powered Biochip for Rapid Disease Detection

Researchers at NTU Singapore have developed an AI-powered biochip that detects disease-linked microRNAs in minutes. The system could enable faster, more precise diagnostics.

By Laura Bennett Edited by Maria Konash
NTU Singapore unveils AI biochip detecting microRNA in 20 minutes for faster, high-accuracy diagnostics. Image: Nanyang Technological University

A research team from Nanyang Technological University has developed a new AI-powered biochip capable of rapidly detecting microRNAs, tiny genetic markers linked to diseases including cancer and cardiovascular conditions. The system, described in the journal Advanced Materials, combines nanophotonic sensing with automated image analysis to significantly reduce diagnostic time.

The platform can analyze a small blood sample and detect multiple microRNA biomarkers in about 20 minutes, compared to several hours required by traditional methods such as PCR (polymerase chain reaction). Researchers say the system achieves high sensitivity and accuracy, detecting extremely low concentrations of microRNAs, even down to a few molecules.

The work was led by Associate Professor Chen Yu-Cheng, who said the goal is to create a scalable diagnostic platform capable of screening multiple disease markers quickly and accurately. Initial testing focused on microRNAs linked to non-small cell lung cancer, demonstrating the system’s ability to identify multiple targets simultaneously without complex sample preparation.

How the Technology Works

At the core of the system is a nanophotonic chip embedded with nanocavities, microscopic structures that enhance fluorescent signals when microRNAs bind to specific probes. These cavities amplify weak signals, making it possible to detect even single molecules.

The chip is paired with an AI imaging system that captures and analyzes thousands of signals in a single snapshot. Using a deep learning model based on Mask R-CNN, the system automatically identifies and classifies microRNA signals, removing the need for manual counting and reducing the risk of human error.
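
The team's exact pipeline is described in the paper rather than here, but Mask R-CNN instance segmentation is available off the shelf. A minimal sketch of the detection step using torchvision's pretrained model, with the caveat that the NTU system would be trained on labeled fluorescence data rather than the generic COCO weights used below:

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Sketch of the detection step only: run Mask R-CNN over an image and count
# confident instances. The NTU system would use a model trained on labeled
# nanocavity signals, not the generic pretrained weights loaded here.
model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()

image = torch.rand(3, 512, 512)           # stand-in for a microscope frame
with torch.no_grad():
    out = model([image])[0]               # dict of boxes, labels, scores, masks

keep = out["scores"] > 0.8                # keep high-confidence detections
print(f"{int(keep.sum())} signals detected")
```

Automating this counting step is what removes the manual tallying the researchers cite as a source of human error.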

Unlike conventional approaches, which often require amplification or labeled probes, the NTU platform directly measures microRNAs in liquid samples. The researchers report accuracy levels exceeding 99 percent across test scenarios, including experiments using both cancer cell extracts and synthetic samples.

Toward Faster, Scalable Diagnostics

The team has also built a compact prototype that includes a camera and a mobile application for real-time analysis. This setup could support point-of-care testing, where results are generated quickly without the need for specialized laboratory infrastructure.

Researchers believe the platform could eventually be adapted for large-scale screening, potentially analyzing hundreds or thousands of biomarkers from blood, saliva, or urine samples. This could open the door to earlier disease detection, better monitoring of treatment response, and more personalized healthcare.

Independent experts note that microRNAs have long been considered promising biomarkers but have been difficult to measure reliably due to their small size and high sequence similarity. A system that can accurately detect multiple microRNAs could improve clinical decision-making, particularly in oncology and chronic disease management.

The project is supported by Singapore’s research funding programs, and the team has filed a technology disclosure through NTU’s commercialization arm. Future work will focus on clinical validation and scaling the platform for broader use in healthcare and pharmaceutical research.


OpenAI Expands Codex Into Full Software Development Assistant

OpenAI has upgraded Codex with computer control, memory, and workflow integrations. The update pushes Codex beyond coding into a full development lifecycle assistant.

By Daniel Mercer Edited by AIstify Team

OpenAI has released a major update to Codex, significantly expanding its capabilities beyond code generation into a broader software development assistant. The update targets the more than 3 million developers using Codex weekly and reflects a growing push toward AI systems that can manage end-to-end workflows rather than isolated tasks.

The new version introduces “computer use,” allowing Codex to interact directly with a user’s device by seeing screens, clicking, and typing via its own cursor. Multiple AI agents can run in parallel on macOS without interfering with user activity. This enables tasks such as testing applications, iterating on front-end designs, and working with tools that lack APIs.
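
OpenAI has not published the underlying control interface, but the behavior it describes follows the familiar observe-decide-act loop of computer-use agents. A hypothetical Python sketch using pyautogui for the act step, where propose_action is a stand-in for the model call (not a real Codex API):

```python
import pyautogui  # cross-platform screenshot and input control

def propose_action(screenshot):
    """Hypothetical stand-in for the model call that maps a screenshot to
    an action; the real Codex control interface is not public."""
    return {"type": "click", "x": 100, "y": 200}

# Generic observe-decide-act loop for a computer-use agent.
for _ in range(10):
    screen = pyautogui.screenshot()        # observe: capture the screen
    action = propose_action(screen)        # decide: ask the model
    if action["type"] == "click":          # act: execute via the cursor
        pyautogui.click(action["x"], action["y"])
    elif action["type"] == "type":
        pyautogui.typewrite(action["text"])
    elif action["type"] == "done":
        break
```

Running several such agents in parallel without colliding with the user's own cursor, as OpenAI describes on macOS, is the harder systems problem layered on top of this basic loop.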

Codex now also includes an in-app browser, enabling developers to annotate web pages and guide the AI in real time. This feature is aimed at improving workflows in frontend and game development, with plans to expand toward broader browser automation. In parallel, Codex gains image generation capabilities through integration with OpenAI’s image model, allowing developers to create and refine visual assets such as mockups and UI concepts directly within the development process.

Deeper Integration Across Developer Tools

The update introduces more than 90 new plugins, expanding Codex’s ability to connect with commonly used tools and services. These include integrations with platforms such as Jira, GitLab, CircleCI, and Microsoft Office, among others. The plugins combine app integrations and external servers to give Codex more context and execution capabilities across workflows.

Within the Codex app, new features support key development tasks such as reviewing pull requests, addressing code review comments, and managing multiple terminal sessions. Developers can also connect to remote development environments via SSH and access files with rich previews for documents, spreadsheets, and presentations. A new summary pane helps track agent actions, sources, and outputs.

These additions are designed to reduce context switching, allowing developers to move between writing code, reviewing outputs, and collaborating with AI in a single environment.

Automation, Memory, and Long-Running Tasks

Codex is also gaining stronger automation capabilities. Users can now reuse conversation threads to preserve context across sessions and schedule tasks to run later. The system can “wake up” and continue work on long-running processes spanning days or weeks.

A preview of memory features allows Codex to retain user preferences, corrections, and previously gathered information. This enables more personalized and efficient task execution over time, reducing the need for repeated instructions.

Additionally, Codex can proactively suggest next steps based on project context. For example, it can identify unresolved comments in documents, pull updates from tools like Slack or Notion, and generate a prioritized task list to help users resume work quickly.

Availability and Direction

The update is rolling out to Codex desktop users signed in with ChatGPT, with some features such as memory and personalization expanding later to enterprise and regional users. Computer control capabilities are initially limited to macOS.

The release highlights a broader shift in AI development tools. Codex is evolving from a coding assistant into a system that can coordinate tasks, manage workflows, and assist with decision-making across the software lifecycle. This positions it closer to an autonomous development partner rather than a reactive tool, reflecting how developers are increasingly using AI not just to write code, but to manage complex projects end to end.
