Big tech has distracted the world from the existential risk of AI, says top scientist

Big tech has succeeded in distracting the world from the existential risk to humanity that artificial intelligence still poses, a leading scientist and AI campaigner has warned.

Speaking to the Guardian at the AI Summit in Seoul, South Korea, Max Tegmark said the shift in focus from the extinction of life to a broader view of the safety of artificial intelligence risks an unacceptable delay in imposing strict regulations on the creators of the most powerful programs.

“In 1942, Enrico Fermi built the very first reactor with a self-sustaining nuclear chain reaction under a football field in Chicago,” said Tegmark, who trained as a physicist. “When the top physicists at the time found out, they really panicked, because they realized that the biggest remaining hurdle to building an atomic bomb had just been overcome. They realized it would only be a few more years – and in fact it was three years, with the Trinity test in 1945.

“AI models that can pass the Turing test [where someone cannot tell in conversation that they are not speaking to another human] are the same warning for the kind of AI you can lose control over. That’s why people like Geoffrey Hinton and Yoshua Bengio – and even many tech CEOs, at least privately – are panicking now.”

Tegmark’s nonprofit Future of Life Institute led the call last year for a six-month “pause” in advanced AI research over these fears. The launch of OpenAI’s GPT-4 model in March that year was the canary in the coal mine, he said, and showed the risk was unacceptably close.

Despite thousands of signatures from experts, including Hinton and Bengio, two of the three “godfathers” of AI who pioneered the machine learning approach that underpins the field today, no pause was agreed.

Instead, the AI summits, of which Seoul is the second after Britain’s Bletchley Park last November, have taken the lead in the fledgling field of AI regulation. “We wanted that letter to legitimize the conversation, and we’re very happy with how that turned out. Once people saw that people like Bengio were worried, they thought, ‘It’s OK for me to worry about it.’ Even the guy at my gas station told me afterwards that he was worried AI would replace us.

“But now we have to move from just talking to walking.”

However, since the first announcement of what would become the Bletchley Park summit, the focus of international AI regulation has shifted away from existential risks.

In Seoul, only one of the three “high-level” groups focused directly on safety, and it looked at the “full spectrum” of risks, “from privacy breaches to labor market disruptions and potential catastrophic consequences.” Tegmark argues that downplaying the most severe risks is not healthy – and is no accident.

“That’s exactly what I predicted would happen through industry lobbying,” he said. “In 1955 the first journal articles appeared claiming that smoking causes lung cancer, and you would think that some regulation would follow soon after. But no, it took until 1980, because there was a huge push from the industry to distract. I feel that is what is happening now.


“Of course AI also causes current harm: there is bias, it harms marginalized groups… But as [the UK science and technology secretary] Michelle Donelan herself said, it’s not that we can’t deal with both. It’s a bit like saying, ‘Let’s not pay attention to climate change because there’s a hurricane coming this year, so we should just focus on the hurricane.’

Tegmark’s critics have turned the same argument against his own claims: that the industry wants everyone talking about hypothetical future risks to distract from concrete harms in the present, a charge he rejects. “Even if you think about it on its own merits, it’s pretty galaxy-brained: it would be quite 4D chess for someone like [OpenAI boss] Sam Altman, in order to avoid regulation, to tell everyone the lights could go out for all of us, and then try to convince people like us to sound the alarm.”

Instead, he argues, the muted support from some tech leaders stems from the fact that “I think they all feel they’re stuck in an impossible situation where, even if they want to stop, they can’t. If a tobacco company CEO wakes up one morning and feels that what they’re doing is not right, what happens? They’ll replace the CEO. So the only way you can get safety first is if the government implements safety standards for everyone.”
