ŌURA Launches First Proprietary LLM Focused on Women’s Health

ŌURA unveils a proprietary large language model for women’s health, combining clinical research and biometric data to deliver personalized, privacy-first AI guidance through Oura Advisor.

By Samantha Reed | Edited by Maria Konash
ŌURA unveils a women’s health–dedicated language model that integrates wearable biometrics and medical research for tailored insights. Photo: Jerry Kavan / Unsplash

ŌURA, maker of the Oura Ring, has announced its first proprietary large language model (LLM) built specifically for women’s health. Rolling out in testing through Oura Labs, the model enhances the Oura Advisor experience with clinically grounded, personalized guidance across the full reproductive health spectrum, from early menstrual cycles to menopause.

Unlike general-purpose AI systems, ŌURA’s custom model is trained on established medical standards, peer-reviewed research, and curated knowledge sources vetted by the company’s in-house board-certified clinicians and women’s health experts. It also integrates real-time biometric signals and long-term trends across sleep, activity, stress, cycle tracking, and pregnancy data to deliver contextual, evidence-based insights.

A Shift Toward Purpose-Built Health AI

“This custom model is a fundamental shift in how we responsibly deploy AI in health,” said Ricky Bloomfield, MD, chief medical officer at ŌURA. “Women’s health is too complex and too often overlooked to rely on one-size-fits-all systems.”

The launch represents a broader strategic pivot for ŌURA: moving beyond general AI tools and toward deeply specialized, empathetic models tailored to specific health use cases. The system is intentionally tuned to provide non-dismissive, reassuring responses, helping women better understand patterns in their own data and prepare for conversations with healthcare providers.

Chris Curry, MD, ŌURA’s clinical director of women’s health and a board-certified OB/GYN, emphasized that many women’s health questions are personal and high-stakes. For example, if a member asks why their cycle has become irregular, Oura Advisor can explain typical causes, analyze relevant personal data trends, and suggest what to discuss with a clinician, translating complex science into accessible guidance.

Establishing a New Standard

The timing reflects broader behavioral shifts. A 2025 survey found nearly 80% of U.S. adults search online for health symptoms, and almost two-thirds encounter AI-generated responses. As women increasingly turn to AI for insight into perimenopause, hormonal shifts, and fertility questions, the demand for clinically grounded and context-aware models is rising.

ŌURA’s women’s health LLM is built on the company’s own infrastructure and leverages knowledge-graph technology from webAI. Conversations remain private and are not shared or sold, aligning with the company’s stated privacy-first AI strategy.

Member-Led Testing

Participation in Oura Labs is optional, allowing members to test new features and provide feedback before broader rollout. Those who opt in contribute to refining the model’s accuracy and usefulness across areas such as fertility, pregnancy, and hormonal health, turning individual data insights into collective learning.


What Do We Really Think About AI? This Movie Tries to Answer

A new documentary featuring top AI leaders explores the tension between optimism and fear surrounding artificial intelligence, highlighting public uncertainty about its future.

By Samantha Reed | Edited by Maria Konash
New AI documentary spotlights industry leaders debating risks, reflecting public uncertainty about AI’s future. Image: Sam McGhee / Unsplash

A new documentary, The AI Doc: Or How I Became an Apocaloptimist, is bringing the debate around artificial intelligence to a broader audience, exploring both the promise and anxiety surrounding the technology. Directed by filmmaker Daniel Roher alongside Charlie Tyrell, the film premiered in theaters on March 27 and follows Roher’s personal journey as he grapples with the implications of AI while preparing to become a parent.

The documentary features interviews with some of the most influential figures in AI, including Sam Altman, Dario Amodei, and Demis Hassabis. The filmmakers conducted dozens of on-camera interviews and hundreds more off the record, aiming to capture a wide range of perspectives across the industry. They also reached out to other high-profile figures, including Mark Zuckerberg and Elon Musk, though not all agreed to participate.

Rather than focusing on breaking news, the filmmakers chose to explore deeper, more enduring questions about AI. Early in production, rapid developments in the industry, including leadership turmoil at OpenAI, made it clear that chasing headlines would quickly date the film. Instead, the project centers on fundamental issues such as what AI is, how it works, and what it means for society.

Between Optimism and Fear

A central theme of the documentary is the polarized way AI is often discussed. According to the filmmakers, public perception tends to swing between two extremes: AI as a transformative force for good or as an existential threat. The film attempts to guide viewers through that tension, presenting a more nuanced view that acknowledges both possibilities.

Producers said one of the most revealing aspects of the process was asking experts to explain AI in simple terms. Even highly accomplished scientists and executives struggled to distill complex concepts into accessible explanations, underscoring the gap between technical understanding and public awareness.

A Broader Public Conversation

The filmmakers said audience reactions have highlighted how differently people perceive AI depending on their background. Screenings have sparked discussions ranging from skepticism about the technology’s impact to concerns about its concentration among a small group of companies.

The project also reflects a shift in how AI is entering public discourse. As tools like ChatGPT and Claude become more widely used, people are interacting with AI systems directly, often without fully understanding how the systems work or what their limitations are.

For the filmmakers, the takeaway is less about providing definitive answers and more about encouraging broader participation in the conversation. As AI continues to evolve rapidly, they argue that its future should not be shaped solely by technology companies, but by a wider public engaged in questioning, debating, and understanding its impact.


Anthropic Rolls Out Claude Cowork With Enterprise Controls

Anthropic has made Claude Cowork generally available across paid plans, adding enterprise controls and analytics to support company-wide AI deployment.

By Daniel Mercer | Edited by Maria Konash
Anthropic expands Claude Cowork with enterprise controls, analytics, and integrations for large-scale adoption. Image: Claude

Anthropic has made its Claude Cowork assistant generally available across all paid plans, alongside a new set of enterprise controls designed to support organization-wide deployment. The update reflects growing adoption of AI tools beyond engineering teams, as companies increasingly integrate assistants into everyday workflows such as reporting, research, and internal collaboration.

Claude Cowork, a desktop-based AI assistant for macOS and Windows, is positioned as a non-developer counterpart to Anthropic’s coding tools. Unlike browser-based chat interfaces, it can access local files directly and integrate with enterprise systems, enabling more context-aware workflows. Early usage data shows that the majority of activity comes from non-technical teams, including operations, marketing, finance, and legal, where employees are using the tool to handle supporting tasks around core business functions.

To support broader rollout, Anthropic has introduced governance features aimed at IT and admin teams. These include role-based access controls, allowing organizations to define which teams can use specific AI capabilities, as well as group-level spending limits to manage costs. The company has also added usage analytics, enabling administrators to track adoption patterns, active users, and workflow trends across teams.
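
Anthropic has not published how these controls are implemented, but the combination of role-based capability checks and group-level spending caps can be sketched as a simple policy gate. In the minimal Python sketch below, all names (GroupPolicy, authorize, the capability strings) are hypothetical, chosen for illustration only.

```python
from dataclasses import dataclass

@dataclass
class GroupPolicy:
    # Hypothetical policy record: capabilities a team may use,
    # plus a monthly spending cap in USD.
    allowed_capabilities: set[str]
    monthly_cap_usd: float
    spent_usd: float = 0.0

def authorize(policy: GroupPolicy, capability: str, est_cost_usd: float) -> bool:
    """Permit a request only if the group's role grants the capability
    and the estimated cost still fits under its spending cap."""
    if capability not in policy.allowed_capabilities:
        return False
    return policy.spent_usd + est_cost_usd <= policy.monthly_cap_usd

# Example: a marketing group may read files but not write via connectors.
marketing = GroupPolicy({"chat", "file_read"}, monthly_cap_usd=500.0)
print(authorize(marketing, "file_read", est_cost_usd=0.25))        # True
print(authorize(marketing, "connector_write", est_cost_usd=0.25))  # False
```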

Enterprise-Ready Controls and Visibility

The update places a strong emphasis on visibility and control. Claude Cowork now integrates with OpenTelemetry, allowing organizations to monitor AI activity through standard security and observability tools. Events such as tool usage, file access, and connector interactions can be tracked and analyzed, helping companies maintain oversight as AI becomes embedded in workflows.
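
Because OpenTelemetry is an open standard, the shape of this kind of instrumentation is easy to sketch. The snippet below uses the official opentelemetry Python SDK to emit a tool-usage event as a span with attributes; the span and attribute names are illustrative assumptions, not Anthropic’s published schema.

```python
# pip install opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import SimpleSpanProcessor, ConsoleSpanExporter

# Route spans to stdout; a real deployment would export to the
# organization's observability backend instead.
provider = TracerProvider()
provider.add_span_processor(SimpleSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("ai.assistant.audit")

# Record one tool-usage event as a span. The attribute keys here are
# hypothetical, not a published schema.
with tracer.start_as_current_span("tool.file_access") as span:
    span.set_attribute("ai.tool.name", "file_reader")
    span.set_attribute("ai.file.path", "/reports/q3-summary.xlsx")
    span.set_attribute("ai.user.team", "finance")
```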

Anthropic has also expanded its connector ecosystem. A new integration with Zoom enables the assistant to pull meeting summaries, transcripts, and action items directly into workflows. Administrators can configure permissions at a granular level, including restricting write access while allowing read-only interactions. These controls are designed to address concerns around data security and unintended actions by AI systems.

From Tools to Workflows

The rollout highlights a broader shift in how organizations use AI. Rather than asking isolated questions, employees are increasingly delegating multi-step tasks to assistants. Early adopters have used Claude Cowork to automate processes such as performance reviews, incident response workflows, and internal reporting dashboards by connecting the tool to systems like Slack, Jira, and internal databases.

This transition from query-based usage to task execution mirrors trends seen in developer tools, where AI agents are taking on more complex responsibilities. For Anthropic, expanding Cowork across all paid tiers positions the company to capture a wider share of enterprise demand.

As AI assistants become more deeply embedded in business operations, the focus is shifting from raw capability to governance, integration, and reliability. Claude Cowork’s expansion reflects that evolution, with Anthropic aiming to balance increased adoption with the controls needed to manage AI at scale.

Anthropic Taps CoreWeave to Scale Claude AI Deployment

Anthropic has signed a multi-year deal with CoreWeave to power its Claude AI models, expanding compute capacity as demand for AI infrastructure surges.

By Olivia Grant | Edited by Maria Konash
Anthropic partners with CoreWeave to scale Claude, expanding compute for rising AI demand. Image: Growtika / Unsplash

Anthropic has signed a multi-year agreement with CoreWeave to expand the infrastructure supporting its Claude family of AI models, as demand for large-scale compute continues to surge. The deal will bring additional capacity online starting later this year, enabling Anthropic to run production workloads at greater scale across its growing base of enterprise and developer customers.

The partnership positions Anthropic at the center of a rapidly expanding AI infrastructure ecosystem. As one of the leading developers of large language models, the company is increasingly reliant on high-performance cloud providers to meet rising demand. CoreWeave, which specializes in AI-optimized cloud infrastructure, will provide the computing resources needed to train, deploy, and operate Claude models in real-world applications.

Under the agreement, Anthropic will use CoreWeave’s platform to support production-scale workloads, benefiting from performance and reliability tailored for modern AI systems. The rollout will take place in phases, with the potential to expand over time as demand grows. The move adds another infrastructure partner to Anthropic’s network, which already includes major cloud and hardware providers.

Expanding Compute to Meet AI Demand

The deal reflects the scale at which Anthropic is operating. Its Claude models are being adopted across startups, enterprises, and developers, driving significant compute requirements. As AI applications move from experimentation to deployment, infrastructure has become a critical bottleneck, pushing companies to diversify their cloud partnerships.

CoreWeave said it now supports nine of the top ten AI model providers, highlighting the concentration of demand among a small group of leading developers. For Anthropic, adding capacity through CoreWeave helps ensure it can continue scaling without being constrained by a single provider or platform.

Infrastructure as Competitive Advantage

The partnership underscores a broader shift in the AI industry, where access to compute is becoming as important as model performance. Companies like Anthropic are increasingly competing not just on the quality of their models, but on their ability to deliver reliable, high-speed inference and training at scale.

Specialized AI cloud providers such as CoreWeave are emerging as key players in this landscape, offering optimized environments designed specifically for machine learning workloads. These platforms can deliver higher efficiency and performance compared with general-purpose cloud infrastructure, making them attractive partners for AI labs.

For Anthropic, the agreement is part of a broader strategy to secure the infrastructure needed to support rapid growth. The importance of this strategy is underscored by parallel moves across the industry, including Meta Platforms committing an additional $21 billion to CoreWeave for AI cloud infrastructure. As adoption of AI models accelerates, large-scale investments like these highlight how access to high-performance compute is becoming a defining factor in the competitive landscape.


OpenAI Launches $100 ChatGPT Pro Tier to Boost Codex Usage

OpenAI has introduced a $100 ChatGPT Pro tier offering 5x higher Codex usage limits, targeting developers amid rising competition in AI coding tools.

By Daniel Mercer | Edited by Maria Konash
OpenAI unveils $100 ChatGPT Pro tier, expanding Codex access to boost developer adoption. Image: Emiliano Vittoriosi / Unsplash

OpenAI has introduced a new $100-per-month ChatGPT Pro subscription tier aimed at developers, offering significantly expanded usage limits for its Codex coding tools. The move adds a mid-range option between its $20 Plus plan and $200 Pro tier, as the company looks to attract more “vibe coders” and professional developers amid intensifying competition in AI-powered software development.

The primary appeal of the new plan is a fivefold increase in Codex usage compared to the Plus tier. Codex, OpenAI’s agentic coding system, allows users to generate, review, and execute code using natural language. Under the new pricing structure, subscribers to the $100 tier receive substantially higher limits across both local and cloud-based tasks within rolling five-hour windows. For example, GPT-5.3-Codex usage increases from 30–150 local messages and 10–60 cloud tasks on Plus to 300–1,500 local messages and 100–600 cloud tasks on the $100 plan.
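
OpenAI has not described its metering internals, but the “rolling five-hour window” model corresponds to a standard sliding-window limiter, sketched below in Python. The 150-message figure is the Plus-tier upper bound cited above; everything else is illustrative, not OpenAI’s actual code.

```python
import time
from collections import deque

class RollingWindowLimiter:
    """Allow at most `limit` events in any trailing `window_seconds` period."""

    def __init__(self, limit: int, window_seconds: float):
        self.limit = limit
        self.window = window_seconds
        self.events: deque[float] = deque()  # timestamps of accepted events

    def try_consume(self, now: float | None = None) -> bool:
        now = time.monotonic() if now is None else now
        # Evict timestamps that have aged out of the trailing window.
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) >= self.limit:
            return False  # budget exhausted until older events expire
        self.events.append(now)
        return True

# Plus-tier upper bound cited above: 150 local messages per rolling 5-hour window.
limiter = RollingWindowLimiter(limit=150, window_seconds=5 * 3600)
print(limiter.try_consume())  # True while fewer than 150 events fall in the window
```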

OpenAI said the new tier reflects strong demand from developers who need higher throughput for longer or more complex coding sessions. CEO Sam Altman noted the plan was introduced in response to user feedback. At the same time, the company is restructuring limits on the Plus tier, shifting toward more distributed access throughout the week rather than extended daily sessions. This effectively reduces peak usage flexibility for lower-tier subscribers while encouraging upgrades.

Competing for Developers

The launch comes as competition in AI coding tools intensifies, particularly with Anthropic. Anthropic’s Claude-based coding products have gained traction in enterprise environments, contributing to rapid revenue growth and setting new benchmarks for autonomous coding systems.

OpenAI’s strategy appears designed to counter that momentum. By offering higher usage limits at a mid-tier price point, the company is targeting developers who require more capacity than casual users but may not need the full $200 plan. The move also follows OpenAI’s hiring of Peter Steinberger, creator of the OpenClaw agent framework, signaling a broader push into agent-driven development workflows.

Shifting Economics of AI Coding

The new pricing structure reflects the changing economics of AI-powered development. High-usage customers, particularly those running automated agents or working with large codebases, can quickly exceed the cost assumptions of lower-tier subscriptions.

Anthropic recently tightened restrictions on how its subscription plans can be used with third-party tools, pushing developers toward API-based pricing. OpenAI, by contrast, is positioning its plans to accommodate heavier usage directly within its subscription model.

The introduction of the $100 tier suggests a broader segmentation strategy, with pricing aligned more closely to usage intensity. As AI coding tools become central to software development workflows, companies are increasingly competing not just on model capability, but on pricing flexibility and developer experience.

The move highlights how the AI coding market is evolving into a high-stakes battleground, where access, limits, and economics may be as important as the underlying technology itself.


Will You Be Able to Invest in OpenAI? IPO Plans Suggest Yes

OpenAI to include retail investors in IPO as demand surges, highlighting shift toward broader ownership and funding for massive AI infrastructure plans.

By Samantha Reed | Edited by Maria Konash
OpenAI plans to reserve IPO shares for retail investors, widening access. Image: Aditya Vyas / Unsplash

OpenAI is planning to allocate a portion of shares to individual investors in its expected initial public offering, marking a notable shift in how major AI companies approach public markets. Chief Financial Officer Sarah Friar said the company saw strong demand from retail participants during its latest private funding round and intends to include them in a future listing. The move comes as OpenAI prepares for what could be one of the largest IPOs in the technology sector.

The company has already tested retail appetite through private placements facilitated by banks including JPMorgan Chase, Morgan Stanley, and Goldman Sachs. OpenAI initially aimed to raise $1 billion from individual investors but ultimately secured roughly three times that amount, underscoring intense interest in the company’s growth. According to Friar, demand was so high that one bank’s systems briefly failed after opening investor access to financial materials.

OpenAI’s valuation has surged alongside this demand. The company was recently valued at $852 billion following a record-breaking funding round, up sharply from earlier estimates. While Friar declined to confirm a specific IPO timeline, she indicated that the company is preparing operationally to function like a public entity. Reports suggest a potential listing could occur as early as the fourth quarter.

Broadening Ownership in AI

The decision to include retail investors reflects a broader effort to democratize access to high-growth AI companies, which have historically been dominated by institutional capital. Friar said widespread ownership is important for building trust, particularly as AI becomes more deeply integrated into everyday life.

The approach echoes strategies used by other high-profile companies. Friar pointed to her experience at Block, as well as examples from Tesla and SpaceX, where retail participation played a role in shaping investor engagement. SpaceX is also expected to reserve a significant portion of shares for individual investors in its anticipated IPO.

Funding the Compute Race

OpenAI’s push toward public markets is closely tied to its capital needs. The company plans to spend as much as $600 billion over the next five years on semiconductors and data centers, reflecting the growing importance of compute infrastructure in AI competition. Friar described compute as the company’s most critical asset, directly tied to product performance and revenue growth.

Enterprise adoption is also accelerating. According to company executives, enterprise customers currently account for about 40% of OpenAI’s revenue and are expected to reach parity with consumer revenue by 2026. The shift is driven by businesses moving beyond basic productivity use cases to deploying AI systems that manage complex workflows and teams of autonomous agents.

The scale of OpenAI’s ambitions highlights a broader trend across the industry, where access to capital and infrastructure is becoming a decisive factor. By opening its IPO to retail investors, the company is not only tapping a new funding source but also signaling a more inclusive approach to ownership in the AI era.
