AI Security
AI security coverage that maps real threats to real defenses – attacks, mitigations, evaluations, and standards for deploying models safely.
OpenAI banned accounts linked to Chinese law enforcement, romance scams, and influence operations, including a covert campaign targeting Japan’s prime minister, highlighting misuse of its AI tools.
The Trump administration is directing diplomats to oppose foreign data localization laws, citing risks to AI services, cloud computing, and cross-border data flows. Critics view it as a confrontational stance on global tech regulation.
Anthropic unveils a revised Responsible Scaling Policy with a Frontier Safety Roadmap, regular Risk Reports, and clearer separation between company commitments and industry recommendations.
Anthropic reveals industrial-scale campaigns by DeepSeek, Moonshot, and MiniMax to extract Claude’s capabilities via fraudulent accounts, highlighting national security and AI safety risks.
The European Parliament has disabled built-in AI tools on official devices, citing cybersecurity and privacy risks associated with uploading sensitive data to cloud services.
Crypto.com said it has become the first digital asset platform to receive ISO/IEC 42001 certification, underscoring its push to formalize AI governance and risk management.
Dubai-based Maser Group will invest $1.6 billion in agriculture and AI infrastructure across Nigeria, Ghana, and Kenya, aiming to address food security and digital demand in Africa.
OpenAI CEO Sam Altman said the creator of the viral AI agent OpenClaw is joining the company, while the project will continue as an open source initiative supported by OpenAI.
The Pentagon is pressing leading AI companies to make their tools available on classified military networks, seeking fewer usage restrictions as it expands AI deployment across defense operations.
China’s industry ministry cautioned that the popular OpenClaw AI agent could expose users to cyberattacks if misconfigured, urging stronger security measures for deployments.
OpenAI introduced Frontier, a platform enabling enterprises to build, deploy, and manage AI agents across business workflows. The system turns AI into practical coworkers that learn, act, and improve over time.
Moltbook, built on OpenClaw agentic AI, lets bots interact and form communities. Experts warn of security risks and governance challenges with AI-driven social networks.
A coalition of nonprofits is urging the federal government to suspend xAI’s Grok, citing nonconsensual image generation, bias, and potential national security threats.