OpenAI co-founder Ilya Sutskever is launching a rival AI startup focused on “building safe superintelligence,” just a month after leaving the AI company in the wake of a failed coup against CEO Sam Altman.
On Wednesday, Sutskever, one of the world’s most respected AI researchers, launched Safe Superintelligence Inc (SSI), which bills itself as “the world’s first straight-shot SSI lab, with one goal and one product: a safe superintelligence,” according to a statement published on X.
Sutskever co-founded the US-based group with former OpenAI employee Daniel Levy and AI investor and entrepreneur Daniel Gross, who was a partner at Y Combinator, the Silicon Valley startup incubator that Altman once led.
Gross has stakes in GitHub and Instacart, and in AI companies including Perplexity.ai, Character.ai and CoreWeave. Investors in SSI were not disclosed.
The trio said developing safe superintelligence – a form of machine intelligence that could surpass human cognitive abilities – was the company’s “sole focus,” and that it would not be hampered by revenue demands from investors, as they called on top talent to join the effort.
OpenAI had a similar mission to SSI when it was founded in 2015 as a nonprofit research lab to create superintelligent AI that would benefit humanity. While Altman claims this is still OpenAI’s guiding principle, the group has grown into a fast-growing commercial business under his leadership.
Sutskever and his co-founders said in a statement that SSI’s “single focus means there are no distractions from management overhead or product cycles, and our business model means that safety, security and progress are all insulated from short-term commercial pressures.” The statement added that the company will be headquartered in both Palo Alto and Tel Aviv.
Sutskever is considered one of the world’s leading AI researchers. He was instrumental in OpenAI’s early lead in the emerging field of generative AI: the development of software that can generate multimedia responses to human queries.
The announcement of his new venture comes after a period of turmoil at the leading AI group, centered on clashes over its leadership and its approach to safety.
In November, OpenAI’s board of directors – which at the time included Sutskever – ousted Altman as CEO, in an abrupt move that shocked investors and staff. Altman returned days later under a new board, without Sutskever.
After the failed coup, Sutskever stayed with the company for a few months, but left in May. When he stepped down, he said he was “excited about what comes next – a project that is deeply personally meaningful to me and about which I will share details in due course.”
This isn’t the first time OpenAI employees have broken away from the ChatGPT maker to build “safe” AI systems. In 2021, Dario Amodei, the company’s former head of AI safety, spun off his own startup Anthropic, which raised $4 billion from Amazon and hundreds of millions more from venture capitalists at a valuation of more than $18 billion, according to people with knowledge of the discussions.
Although Sutskever has publicly said he has confidence in OpenAI’s current leadership, Jan Leike, another recent departure who worked closely with Sutskever, said his disagreements with the company’s leadership had “reached a breaking point” because “safety culture and processes have taken a back seat to shiny products.” He has since joined OpenAI rival Anthropic.