Ex-OpenAI star Sutskever shoots for super-intelligent AI with new company

Ilya Sutskever physically gestures as OpenAI CEO Sam Altman looks on at Tel Aviv University on June 5, 2023.

On Wednesday, former OpenAI chief scientist Ilya Sutskever announced he is forming a new company called Safe Superintelligence, Inc. (SSI) with the aim of safely building ‘superintelligence’, a hypothetical form of artificial intelligence that surpasses human intelligence, possibly to the extreme.

"We will pursue safe superintelligence in a straight shot, with one focus, one goal, and one product," Sutskever wrote on X. "We will do it through revolutionary breakthroughs produced by a small cracked team."

Sutskever was one of the founders of OpenAI and previously served as the company's chief scientist. Two others are joining Sutskever at SSI initially: Daniel Levy, who previously led the Optimization Team at OpenAI, and Daniel Gross, an AI investor who worked on machine learning projects at Apple between 2013 and 2017. The trio posted a statement about the company's plans on its new website.

A screen capture of the initial announcement of Safe Superintelligence's formation, captured on June 20, 2024.

Sutskever and several of his colleagues resigned from OpenAI in May, six months after Sutskever played a key role in ousting OpenAI CEO Sam Altman, who later returned. While Sutskever did not publicly complain about OpenAI after his departure, and OpenAI executives such as Altman wished him well on his new venture, another departing member of OpenAI's Superalignment team, Jan Leike, publicly complained that at OpenAI, "safety culture and processes [had] taken a backseat to shiny products." Leike joined OpenAI competitor Anthropic later in May.

A vague concept

OpenAI is currently trying to create AGI, or artificial general intelligence, which would hypothetically match human intelligence at performing a wide range of tasks without specific training. Sutskever hopes to leapfrog beyond that in a straight moonshot attempt, with no distractions along the way.

"This company is special in that its first product will be the safe superintelligence, and it will not do anything else until then," Sutskever said in an interview with Bloomberg. "It will be fully insulated from the outside pressures of having to deal with a large and complicated product and having to be stuck in a competitive rat race."

During his previous job at OpenAI, Sutskever was part of the "Superalignment" team that studied how this hypothetical form of AI, also called "ASI" for "artificial superintelligence," could be "aligned" (have its behavior shaped) to be beneficial to humanity.

As you can imagine, it's difficult to align something that does not exist, so Sutskever's quest was sometimes met with skepticism. On X, Pedro Domingos, professor of computer science at the University of Washington (and frequent OpenAI critic), wrote, "Ilya Sutskever's new company is guaranteed to succeed, because superintelligence that is never achieved is guaranteed to be safe."

Like AGI, superintelligence is a nebulous term. Because the mechanisms of human intelligence are still poorly understood, and because human intelligence is difficult to quantify or define (there is no single type of human intelligence), identifying superintelligence when it arrives may prove tricky.

Computers already far surpass humans at many forms of information processing (such as basic math), but does that make them superintelligent? Many proponents of the concept instead envision a sci-fi scenario of an "alien intelligence" with some form of sentience that operates independently of humans, and that is more or less what Sutskever hopes to achieve and control safely.

"You're talking about a giant super data center that's autonomously developing technology," he told Bloomberg. "That's crazy, right? It's the safety of that that we want to contribute to."
