Tether Data Launches Medical AI Models Designed for Smartphones and Laptops

Tether Data has introduced QVAC MedPsy, a family of compact medical AI models designed for smartphones, laptops, and edge devices. The company claims the models outperform significantly larger healthcare-focused systems while reducing inference costs and token usage.

By Laura Bennett. Edited by Maria Konash.
Tether Data launches QVAC MedPsy AI models for edge devices with strong clinical performance and lower costs. Image: Tether

Tether’s AI research division, Tether Data, has released QVAC MedPsy, a new family of text-only medical language models optimized for edge deployment. The models come in 1.7 billion and 4 billion parameter versions and are designed to run on consumer hardware including smartphones, laptops, and wearable devices while maintaining strong medical reasoning performance.

According to Tether Data, the smaller QVAC MedPsy-1.7B model achieved an average score of 62.62 across seven closed-ended medical benchmarks. The company said this outperformed Google’s MedGemma-1.5-4B-it model by more than 11 points despite having fewer than half as many parameters. It also approached the performance of larger reasoning-focused models such as Qwen3-4B-Thinking-2507.

The larger QVAC MedPsy-4B model reportedly surpassed MedGemma-27B-text-it on several benchmarks tied to practical healthcare reasoning. On HealthBench Hard, which measures performance in more realistic clinical scenarios, Tether Data reported scores of 58.00 for MedPsy-4B compared with 42.00 for Google’s 27-billion-parameter system.

Tether Data also emphasized inference efficiency as a major differentiator. The company said MedPsy-4B generated benchmark answers using an average of roughly 909 tokens, compared with approximately 2,953 tokens for Qwen3-4B-Thinking-2507. Lower token usage reduces latency and compute costs, which is particularly important for real-time deployment on lower-power devices.
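Taken at face value, the reported averages imply roughly a 69% reduction in generated tokens per answer. A back-of-the-envelope sketch illustrates the scale of the difference; the per-token price below is a hypothetical placeholder for illustration, not a figure from Tether Data:

```python
# Reported average tokens per benchmark answer (figures from Tether Data).
medpsy_4b_tokens = 909
qwen3_4b_tokens = 2953

# Relative reduction in generated tokens per answer.
reduction = 1 - medpsy_4b_tokens / qwen3_4b_tokens
print(f"Token reduction: {reduction:.1%}")  # roughly 69%

# Hypothetical output-token price, assumed for illustration only.
price_per_million_tokens = 0.50  # USD per million tokens (assumption)
answers = 1_000_000
saved_tokens = qwen3_4b_tokens - medpsy_4b_tokens
savings = saved_tokens * answers * price_per_million_tokens / 1_000_000
print(f"Illustrative savings over {answers:,} answers: ${savings:,.2f}")
```

The same proportional saving applies to latency and energy on battery-powered devices, which is why token efficiency matters as much as raw benchmark scores for edge deployment.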

The models are being released under the Apache 2.0 license for research and educational use. Tether Data is also publishing GGUF versions compatible with llama.cpp and its own QVAC SDK, including quantized variants designed to reduce storage requirements while maintaining most benchmark performance. The company said some compressed versions cut file size by nearly 70% with minimal performance degradation.

QVAC MedPsy was evaluated across eight benchmark suites covering clinical reasoning, biomedical research, health literacy, and underserved healthcare contexts. These included MedQA-USMLE, MedMCQA, PubMedQA, AfriMedQA, and HealthBench.

Smaller Medical Models Target Real-World Deployment

The release reflects growing demand for medical AI systems that can run locally instead of relying entirely on cloud infrastructure. Most high-performing healthcare language models are too large to deploy directly on edge devices, limiting their use in low-connectivity or privacy-sensitive environments.

By reducing parameter count and token usage while maintaining benchmark performance, Tether Data is targeting practical deployment scenarios such as offline clinical assistance, medical education tools, and decision-support systems operating directly on consumer hardware.

The focus on local inference is also significant for healthcare providers dealing with strict data privacy requirements. Running models directly on devices can reduce the need to transmit sensitive patient information to remote servers, which may simplify compliance and improve response times.

Tether Expands Into AI-Driven Health Technologies

The MedPsy launch is part of a broader expansion of Tether’s AI and health technology efforts. Earlier, the company introduced BrainWhisperer, a brain-computer interface system designed to convert neural activity into text using on-device AI processing. Tether claimed the system achieved up to 98.3% accuracy while keeping neural data local to the device.

Tether has also been expanding into consumer wellness technologies through investment activity. Eight Sleep recently received a strategic investment from Tether Investments at a reported $1.5 billion valuation, with a focus on AI-driven sleep monitoring and personalized health intelligence.


Elon Musk Merges xAI Into SpaceX Under New SpaceXAI Structure

Elon Musk said xAI will cease operating as an independent company and become fully integrated into SpaceX under a new SpaceXAI structure. The move combines Musk’s AI models, compute infrastructure, and aerospace operations into a single organization.

By Maria Konash.
Elon Musk merges xAI into SpaceX under SpaceXAI, combining Grok, AI supercomputers, and orbital compute plans into one organization. Image: SpaceX

Elon Musk said xAI will no longer operate as an independent business and will instead be fully integrated into SpaceX under a new structure called SpaceXAI. The consolidation combines Musk’s AI models, social platform infrastructure, supercomputing operations, and aerospace systems into a single organization.

The announcement came alongside a new compute agreement between SpaceXAI and Anthropic. Under the deal, Anthropic will gain access to Colossus 1, a large AI supercomputer cluster originally developed by xAI. Musk confirmed on X that “xAI will be dissolved as a separate company” and that products including Grok will continue under the SpaceXAI name.

The restructuring follows an earlier all-stock transaction in which SpaceX acquired xAI at valuations reportedly placing SpaceX near $1 trillion and xAI around $250 billion. The combined structure is valued at roughly $1.25 trillion.

Under SpaceXAI, Grok development, AI infrastructure, and future compute projects will operate directly within SpaceX management. The move also places major infrastructure systems such as the Colossus supercomputer clusters under the same organization responsible for launch systems, satellite operations, and Starlink.

Musk acknowledged operational issues inside xAI before the restructuring, saying publicly that the company “was not built right first time around.” The reorganization follows months of executive departures at xAI, where nearly all original co-founders had reportedly left by March.

Anthropic’s agreement with SpaceXAI includes access to more than 300 megawatts of compute capacity through Colossus 1, comprising over 220,000 NVIDIA GPUs spanning H100, H200, and GB200 accelerators. Musk said discussions with Anthropic leadership convinced him the company was approaching AI development responsibly, though he added that SpaceXAI reserves the right to reclaim compute capacity if systems “engage in actions that harm humanity.”
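Taken together, those figures imply a rough power budget per accelerator. The quick sanity check below divides total facility power by GPU count, so cooling, networking, and other overhead are folded into the per-GPU number; treat it as an upper-bound sketch, not a hardware specification:

```python
# Figures from the announcement: more than 300 MW and over 220,000 GPUs.
total_power_mw = 300
gpu_count = 220_000

# Implied facility power per GPU, overhead included.
watts_per_gpu = total_power_mw * 1_000_000 / gpu_count
print(f"Implied budget: ~{watts_per_gpu:,.0f} W per GPU")  # ~1,364 W
```

A figure in the low kilowatts per GPU is consistent with dense modern accelerator deployments once cooling and networking are included, which suggests the two headline numbers are at least mutually plausible.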

SpaceXAI Centralizes Compute And Infrastructure

The consolidation gives Musk direct control over a vertically integrated AI stack that includes compute infrastructure, model development, satellite networking, and launch systems. Analysts said combining those operations under one structure could simplify capital deployment and accelerate expansion of AI infrastructure projects.

The integration is particularly significant because AI companies are increasingly constrained by access to power, GPUs, cooling systems, and data center capacity. SpaceX already controls launch infrastructure, satellite communications, and large-scale engineering operations, which could become strategically valuable if AI compute continues expanding at current rates.

The move also changes the role of Colossus infrastructure inside Musk’s AI strategy. Leasing large portions of Colossus 1 to Anthropic allows SpaceXAI to monetize existing GPU assets while focusing internal development on newer systems such as the planned Colossus 2 cluster.

Orbital Compute Becomes A Core Strategy

SpaceXAI is also pushing more aggressively into orbital AI infrastructure. Earlier plans outlined the possibility of space-based data centers powered by solar energy and supported through Starship launches, with the goal of overcoming terrestrial limitations tied to electricity availability, cooling requirements, and land use.

The restructuring places those orbital compute ambitions directly under the same organization operating Starlink and reusable launch systems. That integration could allow SpaceXAI to coordinate launch cadence, satellite networking, power systems, and AI infrastructure development more tightly than conventional cloud providers.

ChatGPT Comes to Both Excel and Google Sheets With New OpenAI Add-Ins

OpenAI has released ChatGPT integrations for Excel and Google Sheets, allowing users to build, edit, and analyze spreadsheets directly inside the applications.

By Daniel Mercer. Edited by Maria Konash.
OpenAI brings ChatGPT add-ins to Excel and Google Sheets for AI-powered spreadsheet editing and automation. Image: Rubaitul Azad / Unsplash

OpenAI has launched ChatGPT integrations for Microsoft Excel and Google Sheets, bringing AI-powered spreadsheet generation, editing, and analysis directly into both platforms. The add-ins are available globally across consumer, education, and enterprise ChatGPT plans, including Free, Plus, Pro, Business, Enterprise, Edu, and K-12 tiers.

The integrations run as sidebar assistants inside spreadsheets and are designed to handle large, multi-tab workbooks containing formulas, references, assumptions, and linked calculations. Users can ask ChatGPT to build spreadsheets from scratch, explain unfamiliar models, update assumptions, clean formatting, remove duplicates, troubleshoot formulas, or generate scenario analyses using natural-language prompts.

OpenAI said the tools are particularly aimed at spreadsheet-heavy workflows such as budgeting, forecasting, KPI reporting, financial modeling, and operational planning. Example use cases include updating a model while preserving formatting, tracing broken formulas across sheets, generating sensitivity analyses, and summarizing changes after edits.

The company emphasized that ChatGPT for Excel and Google Sheets operates as a separate experience from standard ChatGPT conversations. Spreadsheet chats do not sync with regular chat history and currently do not have access to ChatGPT memory features. Advanced spreadsheet functionality such as VBA and macros may also have limited support.

The integrations additionally support Skills and connected apps. Skills act as reusable workflow templates that guide ChatGPT through structured spreadsheet tasks, while apps connect the assistant to external enterprise data sources. Users can invoke Skills directly within prompts or connect business systems for more context-aware spreadsheet operations.

For enterprise customers, OpenAI said the tools support compliance controls including role-based access permissions, data residency options where available, Enterprise Key Management, and integration with the Compliance API. Administrators can deploy the Excel integration internally through Microsoft 365 management tools if direct marketplace access is restricted.

OpenAI Pushes Deeper Into Productivity Software

The release expands OpenAI’s strategy of embedding ChatGPT directly into existing workplace software instead of requiring users to switch between standalone AI interfaces and productivity tools. Spreadsheets are one of the most widely used business applications, particularly in finance, operations, and analytics, making them a natural target for AI-assisted workflows.

Unlike traditional spreadsheet automation tools, the integrations focus on contextual understanding of entire workbooks rather than isolated formulas or macros. The assistant can interpret relationships between tabs, assumptions, and calculations while modifying spreadsheets through conversational instructions.

Spreadsheet AI Moves Toward Agentic Workflows

OpenAI’s description of usage-based “agentic limits” signals a broader move toward AI systems that perform multi-step operational tasks rather than simple question answering. Complex spreadsheet edits, model reviews, and workbook restructuring require persistent reasoning across large files and multiple operations.

At the same time, OpenAI included repeated warnings about reviewing outputs before relying on them, particularly for financial, legal, and tax-related work. The company acknowledged that formulas, calculations, and edits can still contain errors, reinforcing that spreadsheet AI remains assistive rather than fully autonomous for high-stakes workflows.


Peter Thiel Backs Panthalassa’s $140 Million Ocean AI Infrastructure Bet

Panthalassa has raised $140 million to scale offshore AI computing systems powered by ocean waves. The company says its floating infrastructure could help solve the energy and cooling constraints facing AI data centers.

By Olivia Grant. Edited by Maria Konash.
Peter Thiel backs Panthalassa’s $140M round to scale ocean-powered AI computing infrastructure. Image: Jeremy Bishop / Unsplash

Panthalassa has raised $140 million in a Series B funding round led by Peter Thiel to expand its offshore AI computing and wave-energy platform. The company plans to use the funding to scale manufacturing, deploy larger ocean-based compute nodes, and move toward commercial operations beginning in 2027.

Panthalassa is developing autonomous offshore systems that generate electricity from ocean waves and use the energy directly to power onboard AI computing hardware. Instead of sending electricity back to terrestrial grids, the company runs AI workloads at sea and transmits data through low-Earth-orbit satellite networks.

The company says the model is designed to address several infrastructure constraints emerging from rapid AI growth, including grid congestion, water shortages for cooling, permitting delays, and rising opposition to large terrestrial data centers. Because the systems operate in the open ocean, surrounding seawater can also be used for passive cooling.

“The future demands more compute than we can imagine,” Thiel said. “Extra-terrestrial solutions are no longer science fiction. Panthalassa has opened the ocean frontier.”

Panthalassa has already deployed two prototypes, Ocean-1 in 2021 and Ocean-2 in 2024. The new funding will support a pilot manufacturing facility near Portland and accelerate deployment of the Ocean-3 series, scheduled for the northern Pacific Ocean in 2026.

“There are three sources of energy on the planet with tens of terawatts of new capacity potential: solar, nuclear, and the open ocean,” said Panthalassa CEO Garth Sheldon-Coulson. “We’ve built a technology platform that operates in the planet’s most energy-dense wave regions, far from shore, and turns that resource into reliable clean power.”

The financing round included investors such as John Doerr, Marc Benioff’s TIME Ventures, Max Levchin’s SciFi Ventures, Super Micro Computer, and returning backers including Founders Fund and Lowercarbon Capital.

Offshore Compute Targets AI Energy Constraints

Panthalassa’s approach differs from conventional renewable energy projects because the generated electricity is consumed directly where it is produced. By placing compute workloads offshore, the company avoids transmitting large amounts of power through already constrained electrical grids.

The strategy also targets one of the largest operational challenges in AI infrastructure: cooling. High-density GPU clusters consume enormous amounts of electricity and generate significant heat, forcing data center operators to secure large water supplies and specialized cooling systems. Panthalassa argues that open-ocean deployment removes much of that constraint.

John Doerr called the technology “a game changer in addressing global energy needs and clean power generation,” adding that it strengthens American technological leadership while creating new industrial infrastructure.

Ocean Infrastructure Moves From Prototype To Scale

The company’s next challenge will be proving that offshore compute systems can operate reliably at commercial scale in difficult ocean conditions. That includes maintaining stable power generation, satellite connectivity, and hardware performance over long periods at sea.

The Ocean-3 deployment planned for 2026 is expected to serve as Panthalassa’s first large-scale operational test before broader commercial rollout in 2027. If successful, the company could become part of a wider push to decentralize AI infrastructure away from land-constrained data center hubs.


Anthropic Expands Claude Capacity Through SpaceX Compute Partnership

Anthropic has signed a compute agreement with SpaceX that adds access to more than 220,000 NVIDIA GPUs at the Colossus 1 data center. The added capacity is already being used to raise Claude usage limits and API availability.

By Olivia Grant. Edited by Maria Konash.
Anthropic scales Claude capacity and API access, adding 220,000 NVIDIA GPUs through SpaceX deal. Image: Anthropic

Anthropic has announced a compute partnership with SpaceX that gives the company access to the full capacity of the Colossus 1 AI data center. The agreement adds more than 300 megawatts of compute power and over 220,000 NVIDIA GPUs, significantly expanding the infrastructure available for Claude models.

The additional capacity is already affecting Anthropic’s products. The company said it is doubling Claude Code’s five-hour rate limits for Pro, Max, Team, and enterprise seat-based plans. It is also removing peak-hour usage reductions for Pro and Max subscribers and substantially increasing API rate limits for Claude Opus models.

According to Anthropic, the SpaceX agreement is part of a broader infrastructure expansion strategy aimed at addressing rising demand for Claude services. The company said the added GPU capacity will directly improve availability for Claude Pro and Claude Max users, who have faced tighter usage restrictions as demand for coding and reasoning workloads increased.

The Colossus 1 facility includes dense deployments of NVIDIA H100, H200, and GB200 accelerators. Anthropic said the compute cluster will support both model training and inference workloads, including Claude Code and API services.

The SpaceX agreement follows several other large-scale infrastructure deals announced by Anthropic this year. These include an agreement with Amazon for up to 5 gigawatts of AI infrastructure, including nearly 1 gigawatt expected online by the end of 2026; a 5-gigawatt partnership with Google and Broadcom beginning in 2027; a strategic infrastructure partnership involving Microsoft and NVIDIA worth up to $30 billion in Azure capacity; and a $50 billion AI infrastructure investment initiative with Fluidstack.

Usage Limits Increase As Demand Surges

The immediate product changes show how tightly compute availability is tied to user experience in large AI systems. Claude Code, which allows developers to use Claude for software engineering workflows, has become one of Anthropic’s most compute-intensive products because coding tasks often require long reasoning chains and repeated iterations.

By raising rate limits and removing peak-hour reductions, Anthropic is effectively signaling that infrastructure constraints had become a bottleneck for paid users. The increase in API capacity also matters for enterprise customers building applications on Claude Opus, Anthropic’s most capable model.

The company’s reliance on multiple hardware platforms, including AWS Trainium chips, Google TPUs, and NVIDIA GPUs, reflects a broader strategy to diversify compute supply instead of depending on a single cloud or chip provider.

AI Infrastructure Expands Beyond The US

Anthropic also said future infrastructure expansion will increasingly happen internationally, particularly for enterprise customers in regulated industries such as healthcare, government, and financial services. Many of these customers require local hosting to meet data residency and compliance rules.

The company said some of its new inference capacity through Amazon will be deployed in Asia and Europe. Anthropic also emphasized that future expansion will prioritize countries with stable legal frameworks and secure supply chains for networking, hardware, and data center infrastructure.

The announcement additionally included continued discussions with SpaceX around orbital AI compute systems. While still experimental, the idea reflects growing concern inside the AI industry that future model development could outgrow the practical limits of terrestrial power, cooling, and land availability.


Anthropic Secures Colossus Supercomputer Capacity From SpaceXAI

Anthropic has signed a deal to access SpaceXAI’s Colossus 1 supercomputer, adding more than 220,000 NVIDIA GPUs to support Claude training and inference workloads.

By Olivia Grant. Edited by Maria Konash.
Anthropic taps Colossus 1 with 220,000 NVIDIA GPUs to scale Claude and explore orbital AI computing. Image: xAI

Anthropic has signed an agreement with SpaceX’s AI infrastructure division, SpaceXAI, to access Colossus 1, a large-scale AI supercomputer built for training and operating frontier AI models. The system includes more than 220,000 NVIDIA GPUs and will provide additional compute capacity for Anthropic’s Claude models, particularly for Pro and Max subscribers.

According to the announcement, Colossus 1 was deployed in record time and combines dense clusters of NVIDIA H100, H200, and next-generation GB200 accelerators. The infrastructure is designed to support AI training, inference, multimodal systems, scientific simulations, and other high-performance computing workloads at large scale.

Anthropic said the agreement will directly increase available compute resources for Claude services. Access to GPU infrastructure has become one of the main constraints facing AI companies as larger models require substantially more training and inference capacity. The deal gives Anthropic another major compute supplier alongside its existing partnerships with cloud and infrastructure providers.

The announcement also included a longer-term initiative around orbital AI infrastructure. Anthropic expressed interest in working with SpaceXAI on multiple gigawatts of space-based compute capacity, arguing that terrestrial infrastructure may struggle to keep pace with future AI demand because of land, power, and cooling limitations.

SpaceXAI said orbital compute could become practical because of SpaceX’s launch frequency, reusable rocket economics, and satellite operations experience. The companies framed space-based AI infrastructure as a potential way to access large-scale power generation with reduced environmental and land-use impact compared with conventional hyperscale data centers.

GPU Supply Remains The Main Bottleneck

The agreement highlights how aggressively AI companies are competing for compute capacity. Training frontier models increasingly depends on securing large GPU clusters years in advance, particularly for newer accelerators such as NVIDIA’s GB200 systems.

For Anthropic, the deal is as much about inference scale as model training. Claude Pro and Max subscriptions require enough infrastructure to serve millions of user requests with low latency, especially as models become larger and more multimodal. Expanding compute access can help reduce usage limits, improve response speeds, and support larger context windows.

The size of Colossus 1 also reflects how quickly AI infrastructure projects are scaling. Clusters with hundreds of thousands of GPUs are becoming necessary to remain competitive at the frontier level, pushing infrastructure costs into tens of billions of dollars.

Orbital Compute Moves Beyond Theory

The orbital compute proposal is notable because most discussions around space-based AI infrastructure have remained conceptual. Anthropic and SpaceXAI are positioning it as a potential engineering program rather than a long-term research idea.

Still, major technical barriers remain unresolved, including thermal management, hardware maintenance, networking latency, and launch economics at hyperscale. Even so, the announcement signals how seriously large AI companies are beginning to treat compute availability as a long-term strategic limitation rather than simply a cloud procurement problem.
