The world is ill-prepared for breakthroughs in artificial intelligence, according to a group of senior experts, including two ‘godfathers’ of AI, who warn that governments have not made sufficient progress in regulating the technology.
A shift by tech companies to autonomous systems could “massively increase” the impact of AI, the group said, and governments need safety regimes that trigger regulatory action once products reach certain levels of capability.
The recommendations were made by 25 experts, including Geoffrey Hinton and Yoshua Bengio, two of the three ‘godfathers of AI’ who won the ACM Turing Award – the computer science equivalent of the Nobel Prize – for their work.
The intervention comes as politicians, experts and tech executives prepare for a two-day summit in Seoul on Tuesday.
The academic paper, titled “Managing extreme AI risks amid rapid progress”, recommends government safety frameworks that impose stricter requirements as the technology advances.
It also calls for more funding for newly established bodies such as the UK and US AI Safety Institutes, for requiring tech companies to carry out stricter risk checks, and for restricting the use of autonomous AI systems in key societal roles.
“Society’s response, despite promising first steps, is incommensurate with the possibility of rapid, transformative progress that is expected by many experts,” said the paper, published on Monday in the journal Science. “AI safety research is lagging. Present governance initiatives lack the mechanisms and institutions to prevent misuse and recklessness, and barely address autonomous systems.”
A global AI safety summit at Bletchley Park in Britain last year produced a voluntary testing agreement with tech companies including Google, Microsoft and Mark Zuckerberg’s Meta. The EU has since introduced an AI law, and in the US a White House executive order has set out new AI safety requirements.
The paper says that advanced AI systems – technology that performs tasks typically associated with intelligent beings – could help cure diseases and raise living standards, but also threaten to erode social stability and enable automated warfare. It warns, however, that the technology industry’s move towards developing autonomous systems poses an even greater threat.
“Companies are shifting their focus to developing generalist AI systems that can act autonomously and pursue goals. Increases in capabilities and autonomy could soon dramatically increase the impact of AI, with risks including large-scale social harm, malicious use, and an irreversible loss of human control over autonomous AI systems,” the experts said, adding that unchecked AI advances could lead to the ‘marginalization or extinction of humanity’.
The next stage in the development of commercial AI is “agentic” AI, the term for systems that can act autonomously and, in theory, carry out tasks such as booking holidays.
Last week, two tech companies offered a glimpse of that future: OpenAI’s GPT-4o, which can hold real-time voice conversations, and Google’s Project Astra, which can use a smartphone camera to identify locations, read and explain computer code, and create alliterative sentences.
Other co-authors of the proposals include Yuval Noah Harari, best-selling author of Sapiens; the late Daniel Kahneman, a Nobel laureate in economics; Sheila McIlraith, a professor of AI at the University of Toronto; and Dawn Song, a professor at the University of California, Berkeley. The paper published on Monday is a peer-reviewed update of the initial proposals put forward before the Bletchley meeting.