OpenAI Introduces GPT-5.4-Cyber, Expands Trusted Access Program
OpenAI is scaling its Trusted Access for Cyber program and introducing GPT-5.4-Cyber to support vetted defenders as AI-driven security risks accelerate.
Florida’s attorney general is investigating OpenAI over claims ChatGPT was used in planning a 2025 mass shooting. The case raises new concerns about AI safety and regulation.
OpenAI has introduced a policy blueprint aimed at strengthening U.S. child safety protections in the age of AI. The framework focuses on laws, reporting standards, and built-in safeguards.
Google is adding new mental health features to Gemini, including crisis detection tools and direct hotline access. The company is also committing $30 million to expand global support services.
Anthropic has signed an agreement with the Australian government to collaborate on AI safety and research. The deal includes funding for scientific institutions and expanded use of Claude in healthcare and education.
A Dutch court has banned xAI’s Grok from generating non-consensual sexual images, marking a major regulatory action against AI content tools in Europe.
Anthropic unveils a revised Responsible Scaling Policy with a Frontier Safety Roadmap, regular Risk Reports, and clearer separation between company commitments and industry recommendations.
The UK government will now regulate AI chatbots under the Online Safety Act, requiring platforms like ChatGPT and Google Gemini to block illegal content or face fines.
OpenAI’s VP of product policy, Ryan Beiermeister, was fired following a sex discrimination allegation, after reportedly opposing the planned ChatGPT “adult mode” feature.
A coalition of nonprofits urges federal suspension of xAI’s Grok AI, highlighting non-consensual image generation, bias, and potential national security threats.
A new Anti-Defamation League study finds Elon Musk-backed xAI’s Grok chatbot performed worst among six major AI models at identifying antisemitic and extremist content.
A new assessment by Common Sense Media finds xAI’s Grok chatbot exposes minors to sexual, violent, and unsafe content, with weak age verification and ineffective safety controls.
The European Union investigates Elon Musk’s X after Grok AI generated sexualised deepfake images, raising global regulatory concerns.
Utah launches an AI pilot for prescription renewals, letting algorithms handle routine medication management without physicians, highlighting regulatory and safety challenges.
Anthropic publishes Claude’s constitution, a detailed framework guiding AI behavior, ethics, safety, and helpfulness, available under Creative Commons for transparency and research.