Anthropic Partners with Teach For All to Expand AI Education Globally

Anthropic and Teach For All are launching the AI Literacy & Creator Collective, giving more than 100,000 educators in 63 countries access to Claude, along with training to integrate AI into their classrooms.

By Samantha Reed, edited by Maria Konash
Anthropic and Teach For All launch AI Literacy & Creator program for 100,000 teachers worldwide. Photo: Anthropic

On the same day that OpenAI launched Education for Countries, a global initiative to bring AI tools into national education systems, Anthropic announced a partnership with Teach For All to provide AI tools and training to educators in 63 countries, reaching over 100,000 teachers and 1.5 million students worldwide.

Teach For All, modeled on Teach For America, is a global network of independent organizations focused on expanding educational opportunity. The network spans entities like Teach For India, Enseña Chile, and Teach For Nigeria. While each organization is locally led, they share resources, knowledge, and collaborative learning frameworks.

Teachers as Co-Creators

A distinctive aspect of the initiative is its emphasis on educators as co-designers of AI tools rather than passive users. Teachers provide feedback and develop “Claude Artifacts,” interactive AI-powered applications such as lesson plans, games, and simulations. Wendy Kopp, CEO of Teach For All, said the partnership positions teachers to shape AI’s role in education and ensure equitable learning opportunities.

Examples of outputs include a climate education curriculum developed in Liberia and a gamified math app for Grade 6–7 students in Bangladesh. In Argentina, educators are designing digital interactive workspaces aligned with local curricula. The program aims to scale such innovations across diverse learning contexts while informing Anthropic’s product roadmap.

Program Structure

The AI Literacy & Creator Collective operates through three interconnected programs:

  • AI Fluency Learning Series: Six live sessions covering AI fluency, Claude’s capabilities, and classroom applications, attended by more than 530 educators in the first cohort.
  • Claude Connect: An ongoing online hub connecting over 1,000 educators across 60+ countries for peer-to-peer learning, prompt sharing, and collaboration.
  • Claude Lab: An innovation space for advanced users with Claude Pro access, monthly office hours with Anthropic staff, and opportunities to shape the product roadmap.

This global initiative builds on Anthropic’s prior AI education efforts, including national pilots in Iceland, partnerships in Rwanda to expand AI access, and participation in U.S.-focused AI education programs through the White House Taskforce on AI Education.


Anthropic Eyes $50B Raise as Valuation Nears $900B

Anthropic is considering a major funding round amid strong investor demand and rapid revenue growth. The potential raise could value the company at up to $900 billion.

By Samantha Reed, edited by Maria Konash
Anthropic eyes up to $50B raise at $850B-$900B valuation as revenue nears $40B. Image: Anthropic

Anthropic is facing intense investor demand as it considers a new funding round that could raise between $40 billion and $50 billion at a valuation of $850 billion to $900 billion. Multiple preemptive offers have been made to the company, according to sources familiar with the matter, reflecting strong interest ahead of a potential initial public offering. A final decision on whether to proceed with the round is expected at a board meeting in May.

The surge in investor interest is driven by Anthropic’s rapid revenue growth. The company recently reported an annual revenue run rate exceeding $30 billion, up from about $9 billion at the end of 2025, with some estimates placing the current figure closer to $40 billion. Much of this growth is attributed to demand for its AI coding products, including Claude Code and Cowork, which are gaining traction among enterprise users.

Anthropic’s last funding round in February valued the company at $380 billion. If the new round proceeds at the reported terms, it would more than double that valuation and bring Anthropic in line with or ahead of competitors such as OpenAI, which recently raised capital at an $852 billion valuation. Investor appetite appears to exceed supply, with some institutions reportedly seeking multibillion-dollar allocations without securing meetings with company leadership.

Investor Momentum

The scale of interest highlights the growing competition among investors to gain exposure to leading AI companies. Anthropic’s positioning in areas such as coding assistance and enterprise AI tools has made it a key target for capital allocation. The company’s ability to generate substantial revenue early in its lifecycle has further strengthened its appeal.

For investors, the potential round represents an opportunity to participate in one of the largest private funding events in the technology sector. However, the size of the valuation also raises questions about sustainability and long-term returns, particularly as the company approaches a possible public listing.

Market Context

The development comes amid a broader surge in AI investment, with major players raising large amounts of capital to fund infrastructure, research, and product expansion. Companies are competing to scale their models and capture enterprise demand across industries such as finance, healthcare, and life sciences.

Anthropic’s rapid growth and funding momentum reflect the accelerating pace of the AI market. As companies prepare for public offerings, investor expectations are increasingly tied to revenue growth and the ability to translate technical advances into commercial success.


Microsoft Defends OpenAI Deal as AI Revenue Hits $37 Billion

Microsoft says its revised OpenAI partnership strengthens flexibility while maintaining key advantages. The company reported AI revenue surpassing $37 billion amid growing multi-model demand.

By Samantha Reed, edited by Maria Konash
Microsoft says revised OpenAI deal boosts flexibility while keeping key advantages, with AI revenue topping $37B. Image: BoliviaInteligente / Unsplash

Microsoft CEO Satya Nadella defended the company’s revised partnership with OpenAI, stating the updated agreement remains beneficial despite ending exclusivity. Speaking after earnings, Nadella emphasized that Microsoft retains access to OpenAI’s intellectual property, including its most advanced models and agent technologies, through 2032. Under the new terms, Microsoft no longer pays for that access, marking a shift in how the partnership is structured.

The changes come as OpenAI expands relationships with other cloud providers, including Amazon, raising questions about Microsoft’s competitive position. Nadella dismissed concerns that the loss of exclusivity would weaken Microsoft’s standing, noting that the company continues to benefit from multiple aspects of the relationship. These include OpenAI’s commitment to spend more than $250 billion on Microsoft’s cloud services and Microsoft’s equity stake in the AI company.

Microsoft also reported strong financial performance tied to artificial intelligence. The company’s AI business has surpassed an annual revenue run rate of $37 billion, representing 123 percent year-over-year growth. Nadella highlighted that OpenAI remains a significant customer for Microsoft’s infrastructure, alongside its role as a technology partner. He also pointed to broader enterprise demand for diverse AI models rather than reliance on a single provider.

Multi-Model Strategy

Microsoft’s approach reflects a shift toward offering a range of AI models within its cloud ecosystem. Nadella said customers increasingly use multiple models depending on their needs, with more than 10,000 clients already adopting multi-model strategies. This includes access to technologies from OpenAI, Anthropic, and open-source alternatives.

This diversification reduces reliance on any single partner while positioning Microsoft as a platform provider rather than a single-model ecosystem. It also aligns with enterprise preferences for flexibility, particularly as organizations experiment with different AI capabilities across workloads.

Competitive Landscape

The revised partnership highlights changing dynamics in the AI industry, where alliances are becoming less exclusive. OpenAI’s expansion to other cloud providers and Microsoft’s parallel investments in alternative models indicate a more distributed ecosystem. Cloud providers are competing not only on infrastructure but also on the breadth of AI services they can offer.

Despite these shifts, the relationship between Microsoft and OpenAI remains deeply interconnected. Microsoft continues to rely on OpenAI’s technology for key products, while OpenAI depends on Microsoft’s infrastructure and enterprise reach. The evolving partnership suggests that future competition in AI will be shaped by overlapping collaborations rather than exclusive agreements.


SoftBank Targets $100B IPO for AI Data Center and Robotics Venture

SoftBank is preparing a new AI and robotics company to automate data center construction, with plans for a potential $100 billion IPO. The move targets growing demand for AI infrastructure.

By Olivia Grant, edited by Maria Konash
SoftBank’s Roze AI eyes IPO for data center robotics, targeting $100B valuation amid infrastructure boom. Image: Paul Hanaoka / Unsplash

SoftBank Group is planning to launch a new AI and robotics company, Roze, focused on automating the construction of data centers, according to reports from the Financial Times and The Wall Street Journal. The proposed business would deploy autonomous robots to build server facilities more efficiently, targeting the growing demand for infrastructure that supports artificial intelligence systems. Executives are already considering an initial public offering in the United States, potentially as early as 2026, with a target valuation of around $100 billion.

The initiative reflects SoftBank’s broader push into AI infrastructure, as the company seeks to capitalize on rising demand for computing capacity. Founder Masayoshi Son is reportedly backing the effort as part of a strategy to support large-scale AI investments, including commitments to OpenAI. SoftBank is also involved in the Stargate project, which aims to expand data center capacity in the United States, and has existing investments in robotics and digital infrastructure firms such as ABB and DigitalBridge.

Roze’s core proposition is to improve efficiency in data center construction through automation. By using robotics to handle labor-intensive processes, the company aims to reduce build times and costs for hyperscale facilities. This approach aligns with broader industry efforts to modernize industrial operations using AI and automation, as companies race to expand infrastructure for training and running large AI models.

Infrastructure Race

The planned venture highlights the increasing importance of data centers in the global AI economy. As demand for compute grows, companies are investing heavily in facilities capable of supporting advanced workloads. Automating construction could become a competitive advantage, particularly as labor shortages and rising costs slow traditional development methods.

For investors, Roze represents a bet on long-term infrastructure demand tied to AI growth. A potential $100 billion valuation would place it among the largest new listings in the technology sector, though some internal skepticism has been reported regarding both the valuation and the timeline.

Industry Backdrop

SoftBank’s move comes amid intensified competition among AI companies and infrastructure providers. Firms such as Anthropic and OpenAI are driving demand for computing capacity, while cloud providers and hardware companies expand their own investments. At the same time, large-scale projects like Stargate signal a shift toward coordinated efforts to build next-generation infrastructure.

The development also reflects a broader trend of integrating AI into industrial processes, from manufacturing to construction. If successful, Roze could position SoftBank at the intersection of robotics, AI, and infrastructure, sectors that are increasingly converging as the technology landscape evolves.

OpenAI Details ChatGPT Safety Measures to Prevent Violent Misuse

OpenAI outlined how ChatGPT detects and responds to potential threats of violence, including escalation to human reviewers and law enforcement. The update follows growing scrutiny of AI safety practices.

By Daniel Mercer, edited by Maria Konash
OpenAI details ChatGPT safety measures, escalating detected threats to human review and law enforcement. Image: Milan Malkomes / Unsplash

OpenAI has detailed how it monitors and responds to potential misuse of ChatGPT in cases involving threats of violence, outlining a multi-layered system of safeguards, detection tools, and escalation protocols. The update comes as OpenAI faces increasing scrutiny over how it handles harmful user behavior and real-world risks. OpenAI said its models are trained to refuse requests that could enable violence while still allowing legitimate discussions for educational or informational purposes.

The company said it uses a combination of automated systems and human review to identify and assess potentially dangerous activity. These systems analyze user interactions for signals such as patterns of behavior, escalation over time, and attempts to bypass safeguards. When content is flagged, trained reviewers evaluate it in context, taking into account the broader conversation and user intent. OpenAI said this approach helps distinguish between benign discussions and credible threats, which may not always be clear from a single message.

If a violation is confirmed, OpenAI may revoke access to its services, including banning accounts and preventing users from creating new ones. In more serious cases, where there is evidence of imminent and credible risk, the company said it may notify law enforcement. The process involves additional review and consultation with experts, including specialists in mental health and behavioral risk assessment. OpenAI also said it directs users in distress to crisis resources and encourages contact with professionals or trusted individuals.

Risk Detection and Response

The framework reflects an effort to balance user privacy with public safety. OpenAI emphasized that most enforcement actions remain internal, but escalation pathways exist for higher-risk scenarios. The company also highlighted improvements in detecting subtle warning signs across longer conversations, where risk may emerge gradually rather than through explicit statements.

New features, such as parental controls and a planned trusted contact system, aim to provide additional safeguards for younger users and individuals who may need support. These tools are designed to alert designated contacts in limited cases where serious risk is detected, while maintaining privacy protections.

Evolving Safety Standards

The announcement comes amid broader industry and regulatory focus on AI safety, particularly following incidents involving misuse of generative AI tools. Companies are under pressure to demonstrate clear policies and effective enforcement mechanisms as AI systems become more widely adopted.

OpenAI said it continues to refine its models, detection systems, and review processes based on real-world usage and expert input. The company acknowledged the challenges in distinguishing harmful intent from legitimate use, noting that safety measures will need to evolve alongside increasingly sophisticated attempts to bypass safeguards.


Families Sue OpenAI Over Canada School Shooting and ChatGPT Warnings

Families of victims in a Canadian school shooting have sued OpenAI, alleging it failed to alert authorities about warning signs in ChatGPT conversations. The case raises questions about AI oversight and duty of care.

By Samantha Reed, edited by Maria Konash
OpenAI faces lawsuit over alleged failure to act on ChatGPT warning signs in Canada school shooting case. Image: Caroline Attwood / Unsplash

Families of victims from a mass shooting in Canada have filed lawsuits against OpenAI and its CEO Sam Altman in a California court, alleging the company failed to act on warning signs detected in ChatGPT conversations. The legal action follows a February attack in Tumbler Ridge, British Columbia, where eight people, including six children, were killed. According to the filings, OpenAI’s internal safety team had flagged the suspect’s interactions months before the incident for references to gun violence. The company did not notify law enforcement at the time.

The lawsuits claim that OpenAI had sufficient evidence to anticipate a potential threat and that internal recommendations to alert authorities were not followed. Plaintiffs allege that senior leadership overruled those recommendations, citing concerns about reputational and financial risks. OpenAI has denied these claims, stating it enforces a zero-tolerance policy on violent misuse of its tools and has since strengthened its safeguards, including improved threat assessment and escalation procedures. Altman previously issued a public apology, acknowledging the company did not contact authorities and expressing regret over the outcome.

Legal representatives for the families argue that OpenAI’s actions constitute negligence and contributed to the attack by failing to intervene. They also claim that the suspect was able to continue using ChatGPT after being flagged, though OpenAI disputes this and says it takes steps to prevent banned users from reaccessing its services. The case consolidates earlier legal efforts in Canada and is expected to expand, with additional lawsuits planned and jury trials requested.

Accountability Questions

The lawsuits raise broader questions about the responsibilities of AI companies when user behavior suggests potential harm. As AI systems become more widely used, determining when and how companies should escalate threats to authorities is emerging as a key legal and ethical issue. The case could influence how firms design monitoring systems and define thresholds for intervention.

For businesses deploying AI tools, the outcome may shape expectations around liability and risk management. Companies may face increased pressure to demonstrate clear protocols for handling dangerous or illegal activity identified through their platforms.

Legal and Industry Context

The case comes amid growing scrutiny of AI safety practices and regulatory frameworks. Governments are increasingly examining how AI providers manage harmful use cases, particularly in areas involving violence or public safety. OpenAI has said it is working with authorities to improve coordination and prevent future incidents.

The lawsuits also coincide with other investigations into AI-related incidents, including a separate criminal probe in the United States involving alleged misuse of ChatGPT. Together, these developments underscore the evolving legal landscape for AI companies as they navigate the balance between user privacy, platform responsibility, and public safety.
