OpenAI Acquires Promptfoo to Strengthen AI Security Tools

OpenAI is acquiring AI security platform Promptfoo to enhance testing, safety, and governance tools for enterprise AI systems. The technology will be integrated into OpenAI’s Frontier platform for AI coworkers.

By Maria Konash Published:
OpenAI acquires Promptfoo to add AI security testing and evaluation tools to its Frontier platform for enterprise AI agents. Photo: fabio / Unsplash

OpenAI has announced plans to acquire Promptfoo, an AI security platform focused on identifying vulnerabilities in large language model applications during development. The company said Promptfoo’s technology will be integrated into OpenAI Frontier, its platform designed for building and operating AI coworkers in enterprise environments.

Promptfoo provides tools that help organizations evaluate, test, and secure AI systems before deployment. These capabilities are increasingly important as enterprises begin deploying AI agents into operational workflows that interact with sensitive data, internal systems, and external applications.

The acquisition aims to strengthen OpenAI’s ability to support enterprise customers that require structured approaches to evaluating agent behavior, identifying risks, and maintaining oversight over AI systems.

“Promptfoo brings deep engineering expertise in evaluating, securing, and testing AI systems at enterprise scale,” said Srinivas Narayanan, OpenAI’s chief technology officer for B2B applications. “Their work helps businesses deploy secure and reliable AI applications, and we’re excited to bring these capabilities directly into Frontier.”

Promptfoo was founded by Ian Webster and Michael D’Angelo and has developed a widely used open-source command-line interface and library for testing and red-teaming large language model applications. According to OpenAI, the platform is already used by more than 25 percent of Fortune 500 companies.

Security and Governance for AI Agents

OpenAI said Promptfoo’s technology will enable several new capabilities within the Frontier platform. Automated security testing and red-teaming tools will help enterprises identify risks such as prompt injection attacks, jailbreak attempts, data leakage, and misuse of connected tools.

The integration will also embed security testing directly into development workflows, allowing teams to identify vulnerabilities earlier in the development process. OpenAI said this approach will help organizations deploy AI agents with stronger safety and reliability controls.

Another key component involves oversight and compliance features. Frontier will include integrated reporting and traceability tools designed to help enterprises document testing procedures, monitor system changes, and meet regulatory governance requirements.

Promptfoo’s founders said the move will allow the platform to expand its capabilities as AI systems become more integrated with real-world data and business operations.

“We started Promptfoo because developers needed a practical way to secure AI systems,” said Ian Webster, co-founder and chief executive of Promptfoo. “As AI agents become more connected to real data and systems, securing and validating them is more challenging and important than ever.”

OpenAI said it plans to continue supporting Promptfoo’s open-source tools while expanding enterprise security capabilities through the Frontier platform. The acquisition reflects growing demand among organizations for robust testing and governance tools as AI agents move from experimentation into production environments.

AI & Machine Learning, Cybersecurity & Privacy, News, Startups & Investment

OpenAI Details ChatGPT Safety Measures to Prevent Violent Misuse

OpenAI outlined how ChatGPT detects and responds to potential threats of violence, including escalation to human reviewers and law enforcement. The update follows growing scrutiny of AI safety practices.

By Daniel Mercer Edited by Maria Konash Published:
OpenAI details ChatGPT safety measures, escalating detected threats to human review and law enforcement. Image: Milan Malkomes / Unsplash

OpenAI has detailed how it monitors and responds to potential misuse of ChatGPT in cases involving threats of violence, outlining a multi-layered system of safeguards, detection tools, and escalation protocols. The update comes as ChatGPT faces increasing scrutiny over how they handle harmful user behavior and real-world risks. OpenAI said its models are trained to refuse requests that could enable violence while still allowing legitimate discussions for educational or informational purposes.

The company said it uses a combination of automated systems and human review to identify and assess potentially dangerous activity. These systems analyze user interactions for signals such as patterns of behavior, escalation over time, and attempts to bypass safeguards. When content is flagged, trained reviewers evaluate it in context, taking into account the broader conversation and user intent. OpenAI said this approach helps distinguish between benign discussions and credible threats, which may not always be clear from a single message.

If a violation is confirmed, OpenAI may revoke access to its services, including banning accounts and preventing users from creating new ones. In more serious cases, where there is evidence of imminent and credible risk, the company said it may notify law enforcement. The process involves additional review and consultation with experts, including specialists in mental health and behavioral risk assessment. OpenAI also said it directs users in distress to crisis resources and encourages contact with professionals or trusted individuals.

Risk Detection and Response

The framework reflects an effort to balance user privacy with public safety. OpenAI emphasized that most enforcement actions remain internal, but escalation pathways exist for higher-risk scenarios. The company also highlighted improvements in detecting subtle warning signs across longer conversations, where risk may emerge gradually rather than through explicit statements.

New features, such as parental controls and a planned trusted contact system, aim to provide additional safeguards for younger users and individuals who may need support. These tools are designed to alert designated contacts in limited cases where serious risk is detected, while maintaining privacy protections.

Evolving Safety Standards

The announcement comes amid broader industry and regulatory focus on AI safety, particularly following incidents involving misuse of generative AI tools. Companies are under pressure to demonstrate clear policies and effective enforcement mechanisms as AI systems become more widely adopted.

OpenAI said it continues to refine its models, detection systems, and review processes based on real-world usage and expert input. The company acknowledged the challenges in distinguishing harmful intent from legitimate use, noting that safety measures will need to evolve alongside increasingly sophisticated attempts to bypass safeguards.

AI & Machine Learning, News, Regulation & Policy

Families Sue OpenAI Over Canada School Shooting and ChatGPT Warnings

Families of victims in a Canadian school shooting have sued OpenAI, alleging it failed to alert authorities about warning signs in ChatGPT conversations. The case raises questions about AI oversight and duty of care.

By Samantha Reed Edited by Maria Konash Published:
OpenAI faces lawsuit over alleged failure to act on ChatGPT warning signs in Canada school shooting case. Image: Caroline Attwood / Unsplash

Families of victims from a mass shooting in Canada have filed lawsuits against OpenAI and its CEO Sam Altman in a California court, alleging the company failed to act on warning signs detected in ChatGPT conversations. The legal action follows a February attack in Tumbler Ridge, British Columbia, where eight people, including six children, were killed. According to the filings, OpenAI’s internal safety team had flagged the suspect’s interactions months before the incident for references to gun violence. The company did not notify law enforcement at the time.

The lawsuits claim that OpenAI had sufficient evidence to anticipate a potential threat and that internal recommendations to alert authorities were not followed. Plaintiffs allege that senior leadership overruled those recommendations, citing concerns about reputational and financial risks. OpenAI has denied these claims, stating it enforces a zero-tolerance policy on violent misuse of its tools and has since strengthened its safeguards, including improved threat assessment and escalation procedures. Altman previously issued a public apology, acknowledging the company did not contact authorities and expressing regret over the outcome.

Legal representatives for the families argue that OpenAI’s actions constitute negligence and contributed to the attack by failing to intervene. They also claim that the suspect was able to continue using ChatGPT after being flagged, though OpenAI disputes this and says it takes steps to prevent banned users from reaccessing its services. The case consolidates earlier legal efforts in Canada and is expected to expand, with additional lawsuits planned and jury trials requested.

Accountability Questions

The lawsuits raise broader questions about the responsibilities of AI companies when user behavior suggests potential harm. As AI systems become more widely used, determining when and how companies should escalate threats to authorities is emerging as a key legal and ethical issue. The case could influence how firms design monitoring systems and define thresholds for intervention.

For businesses deploying AI tools, the outcome may shape expectations around liability and risk management. Companies may face increased pressure to demonstrate clear protocols for handling dangerous or illegal activity identified through their platforms.

Legal and Industry Context

The case comes amid growing scrutiny of AI safety practices and regulatory frameworks. Governments are increasingly examining how AI providers manage harmful use cases, particularly in areas involving violence or public safety. OpenAI has said it is working with authorities to improve coordination and prevent future incidents.

The lawsuits also coincide with other investigations into AI-related incidents, including a separate criminal probe in the United States involving alleged misuse of ChatGPT. Together, these developments underscore the evolving legal landscape for AI companies as they navigate the balance between user privacy, platform responsibility, and public safety.

AI & Machine Learning, News

OpenAI Turns to Amazon While Loosening Microsoft Ties

OpenAI is deepening ties with Amazon while restructuring its long-standing partnership with Microsoft. The shift reflects growing demand for flexible cloud access and AI infrastructure.

By Maria Konash Published:
OpenAI expands Amazon ties as Microsoft deal shifts, signaling move to multi-cloud AI infrastructure. Image: José Ramos / Unsplash

OpenAI is expanding its relationship with Amazon while simultaneously restructuring its long-standing partnership with Microsoft. The company’s revenue chief, Denise Dresser, said the two developments are unrelated, but analysts view them as part of a broader shift in OpenAI’s cloud strategy. The changes come as AI companies seek greater flexibility to deploy models across multiple infrastructure providers amid surging demand for compute capacity.

OpenAI’s collaboration with Amazon has expanded rapidly in recent months. The companies disclosed a $38 billion commitment for cloud services through Amazon Web Services, followed by Amazon’s pledge to invest up to $50 billion in OpenAI. As part of the arrangement, OpenAI plans to use AWS infrastructure, including custom Trainium chips, and has increased its total spending commitment with Amazon by an additional $100 billion. The partnership also includes joint development of customized AI models for Amazon’s internal teams and products.

At the same time, OpenAI has revised key elements of its agreement with Microsoft. The updated terms remove Microsoft’s exclusive access to OpenAI’s intellectual property and allow OpenAI to serve customers across multiple cloud providers, including Amazon and Google. Revenue-sharing payments from OpenAI to Microsoft will continue through 2030 but are now subject to a cap, while Microsoft will no longer pay a revenue share to OpenAI. The companies maintain that Microsoft remains a primary cloud partner, with OpenAI products still launching first on Azure in most cases.

Strategic Realignment

The evolving partnerships highlight a shift toward multi-cloud strategies in AI. OpenAI’s earlier reliance on Microsoft’s Azure platform is giving way to a more diversified approach, allowing the company to reach enterprise customers across different environments. This flexibility is increasingly important as businesses standardize on different cloud providers and expect interoperability.

For cloud providers, access to leading AI models has become a competitive priority. Amazon’s deeper integration with OpenAI enables it to offer customers direct access to widely used models, while Microsoft continues to leverage its early investment and infrastructure ties. The result is a more fluid ecosystem in which partnerships are less exclusive and more transactional.

Industry Dynamics

The changes reflect broader trends in the AI industry, where infrastructure constraints are driving collaboration even among competitors. Both OpenAI and rivals like Anthropic are securing capacity from multiple cloud providers to meet demand for training and inference workloads. At the same time, cloud companies are diversifying their model offerings, integrating technologies from multiple AI developers.

Despite signs of tension, the relationships remain interdependent. Microsoft continues to be a major investor in OpenAI, while OpenAI relies on its infrastructure and enterprise reach. Similarly, Amazon’s growing role does not replace existing partnerships but adds another layer to the ecosystem. The shift suggests that the future of AI infrastructure will be defined by overlapping alliances rather than exclusive deals.

AI & Machine Learning, Cloud & Infrastructure, News, Startups & Investment

Selloff Hits AI Sector as OpenAI Faces Growth Scrutiny

AI stocks fell sharply after reports that OpenAI missed growth targets and raised internal financial concerns. The news triggered a ripple effect across cloud and chip companies.

By Samantha Reed Edited by Maria Konash Published:
OpenAI growth concerns spark AI stock selloff affecting cloud providers chipmakers and investors amid rising scrutiny of revenue and compute costs. Image: Maxim Hopman / Unsplash

Shares of artificial intelligence companies dropped on April 28 after a report by The Wall Street Journal revealed that OpenAI had missed internal targets for user growth and revenue. The report also cited concerns from CFO Sarah Friar about the company’s ability to sustain future spending on large-scale computing contracts. The news triggered a broad selloff across AI-related stocks, reflecting investor sensitivity to growth signals in the sector. The reaction comes as OpenAI prepares for a potential initial public offering that could value the company at up to $1 trillion.

The impact was felt across companies closely tied to OpenAI’s ecosystem. Shares of Oracle fell 3.4% amid concerns about financing its large data center commitments, including a reported $300 billion cloud deal with OpenAI. CoreWeave, which recently signed a multibillion-dollar contract with OpenAI, also declined. Meanwhile, Arm Holdings dropped more than 6%, reflecting broader pressure on chipmakers linked to AI demand.

Investor reaction extended beyond U.S. markets. SoftBank Group, a major OpenAI backer, saw its shares fall nearly 10% in Tokyo trading. The company has committed billions in funding to OpenAI and has restructured its portfolio to support those investments, including reducing stakes in other technology firms. Market participants expressed concern about the sustainability of such commitments if OpenAI’s growth slows.

Market Reaction

The selloff highlights how closely valuations across the AI sector are tied to expectations around leading companies. OpenAI’s position at the center of the ecosystem means that changes in its outlook can influence sentiment across cloud providers, chip manufacturers, and investors. Even companies with indirect exposure may experience volatility as markets reassess demand for AI infrastructure.

For investors, the episode underscores the risks associated with rapid expansion in AI. Large-scale investments in data centers and computing capacity depend on sustained growth in usage and revenue. Any indication of slower adoption can quickly translate into broader market corrections.

Industry Context

The development comes at a time when AI companies are investing heavily in infrastructure and preparing for major funding events. OpenAI’s anticipated IPO and large-scale partnerships have positioned it as a key driver of industry momentum. At the same time, competitors and partners are committing billions to support AI workloads, increasing financial exposure across the ecosystem.

Recent deals involving cloud providers and semiconductor companies reflect a broader trend toward long-term infrastructure commitments. As the market matures, investors are beginning to scrutinize whether demand will keep pace with spending. The reaction to OpenAI’s reported challenges suggests that confidence in the sector remains closely tied to the performance of its leading players.

AI & Machine Learning, News, Startups & Investment

Musk vs. OpenAI: Trial Tests Nonprofit Vision vs. Commercial Reality

A high-profile trial between Elon Musk and OpenAI leaders has begun, centering on claims the company abandoned its nonprofit mission. The case could reshape governance in leading AI firms.

By Samantha Reed Edited by Maria Konash Published:
Musk-OpenAI trial probes nonprofit roots and commercialization, shaping AI governance and competition. Image: Tingey Injury Law Firm / Unsplash

A trial involving Elon Musk and Sam Altman has begun in California, focusing on the origins and structure of OpenAI. The case centers on Musk’s claim that the organization deviated from its nonprofit mission when it established a commercial arm in 2018. Musk, a co-founder and early donor, argues that the shift represents a breach of charitable trust. OpenAI disputes this, framing the lawsuit as a competitive move by Musk, who now leads rival AI ventures.

Musk testified that the dispute is about protecting the integrity of charitable organizations, stating that allowing such transitions could undermine public trust. His legal team emphasized his early contributions, including tens of millions of dollars in funding during OpenAI’s nonprofit phase. Musk is seeking billions in damages, which his lawyers say should be directed back into the organization’s nonprofit activities. He is also calling for governance changes, including leadership restructuring.

OpenAI’s legal team countered that Musk supported the company’s evolution before leaving and is now attempting to weaken a competitor. They argued that he pushed for greater control over the organization, including proposals to integrate it with Tesla. When those efforts failed, OpenAI claims, Musk distanced himself from the company. The defense also highlighted Musk’s later involvement in AI through xAI, suggesting the lawsuit is tied to competitive pressures.

Legal and Industry Implications

The case raises questions about how AI organizations balance nonprofit origins with the need for large-scale funding and commercialization. Many leading AI firms have adopted hybrid structures to attract investment while maintaining stated public-interest goals. A ruling in Musk’s favor could prompt stricter scrutiny of such arrangements and influence how future AI ventures are structured.

For the broader industry, the trial reflects intensifying competition among AI developers. As companies race toward advanced systems, governance and funding models are becoming as critical as technical progress. The outcome may shape investor expectations and regulatory approaches to AI development.

Background and Context

OpenAI was founded in 2015 as a nonprofit with a mission to develop AI for public benefit, before introducing a for-profit entity to scale its operations. The decision helped fuel the development of products like ChatGPT and positioned the company at the center of the commercial AI market. Musk’s departure from OpenAI preceded this shift, though both sides disagree on the extent of his involvement in the decision.

The trial also unfolds amid broader tensions in the AI sector, where leading figures increasingly compete across overlapping domains. A verdict is expected in late May, and could set a precedent for how disputes over AI governance and commercialization are handled in the future.

AI & Machine Learning, News, Regulation & Policy
Exit mobile version