The world may not have enough time to prepare for the risks posed by advanced artificial intelligence systems, according to David Dalrymple, a programme director at the UK government-backed research agency Aria. Dalrymple said the accelerating capabilities of AI could challenge human control across critical domains of society and the economy.
He warned of a widening gap between public-sector understanding and the pace of breakthroughs emerging from AI companies. Dalrymple said advanced systems could soon outperform humans at most economically valuable tasks, creating risks if governments assume a level of reliability that has not been scientifically demonstrated. Aria is publicly funded but operates independently, directing research into high-risk technologies, including safeguards for AI use in critical infrastructure such as energy networks.
Recent testing by the UK’s AI Security Institute found that leading models can now complete apprentice-level tasks roughly half the time and autonomously carry out complex tasks lasting more than an hour. The institute also reported that some systems achieved high success rates in controlled self-replication tests, though it judged the real-world risk to remain limited.
Dalrymple said the priority should be mitigating potential harms as capabilities continue to advance, describing the transition as high-risk if safety measures fail to keep pace.