Nonprofits Call for Federal Suspension of xAI’s Grok
A coalition of nonprofits urges federal suspension of xAI’s Grok AI, highlighting nonconsensual image generation, bias, and potential national security threats.
A new assessment by Common Sense Media finds xAI’s Grok chatbot exposes minors to sexual, violent, and unsafe content, with weak age verification and ineffective safety controls.
Anthropic publishes Claude’s constitution, a detailed framework guiding AI behavior, ethics, safety, and helpfulness, available under Creative Commons for transparency and research.
OpenAI now uses age prediction to adjust ChatGPT’s safety settings for teens. Users 18 and older can verify their age to disable extra restrictions.
Elon Musk is demanding up to $134 billion from OpenAI and Microsoft in a lawsuit alleging the AI company broke its nonprofit promises after accepting major funding.
A federal judge ruled that Elon Musk’s lawsuit accusing OpenAI of abandoning its charitable mission can proceed to trial, rejecting dismissal efforts by OpenAI and Microsoft.
U.S. senators are demanding detailed explanations from major tech platforms on how they prevent the creation and monetization of AI-generated sexual deepfakes. The inquiry follows renewed scrutiny of generative AI tools and their safeguards.
President Donald Trump signed an executive order establishing a unified federal AI regulatory framework aimed at preempting state-level rules. The order directs the Justice Department to challenge state AI laws and conditions federal funding on compliance.
For the first time, Washington is close to deciding how artificial intelligence should be regulated, but the fiercest battle isn't over safety standards. It's over whether states should retain the authority to pass their own AI laws.
IBM has partnered with Anthropic to integrate Claude into its enterprise software suite, launching a new AI-first IDE designed to automate development with built-in governance and security.