OpenAI puts ‘shiny products’ above safety, says departing researcher

A former senior employee at OpenAI has said the company behind ChatGPT is prioritizing “shiny products” over safety, revealing he quit after a disagreement over key objectives reached a “breaking point.”

Jan Leike was a key safety researcher at OpenAI as co-head of superalignment, ensuring that powerful artificial intelligence systems adhered to human values and goals. His intervention comes ahead of a global summit on artificial intelligence in Seoul next week, where politicians, experts and tech executives will discuss oversight of the technology.

Leike resigned just days after the San Francisco-based company launched its latest AI model, GPT-4o. His departure means two senior safety figures have left OpenAI this week, following the resignation of Ilya Sutskever, the company’s co-founder and fellow co-head of superalignment.

Leike detailed the reasons for his departure in a thread on X posted Friday, saying safety culture had become a lower priority.

“In recent years, safety culture and processes have taken a backseat to shiny products,” he wrote.

Yesterday was my last day as head of alignment, superalignment lead, and executive @OpenAI.

— Jan Leike (@janleike) May 17, 2024

OpenAI was founded with the goal of ensuring that artificial general intelligence, which it describes as “AI systems that are generally smarter than humans,” benefits all of humanity.

Leike said OpenAI, which has also developed the Dall-E image generator and the Sora video generator, should invest more resources in issues such as safety, social impact, confidentiality and security ahead of its next generation of models.

“These problems are quite difficult to solve, and I fear we are not on the right track to get there,” he wrote, adding that it was becoming “increasingly difficult” for his team to conduct its research.

“Building smarter-than-human machines is an inherently dangerous endeavor. OpenAI bears an enormous responsibility on behalf of all humanity,” Leike wrote, adding that OpenAI must “become an AGI company that puts safety first.”

Sam Altman, CEO of OpenAI, responded to Leike’s thread with a post on X thanking his former colleague for his contributions to the company’s safety culture.


“He’s right, we have a lot more to do; we are determined to do it,” he wrote.

Sutskever, who was also OpenAI’s chief scientist, wrote in an X post announcing his departure that he was confident OpenAI would, under its current leadership, “build an AGI that is both safe and beneficial.” Sutskever initially supported the removal of Altman as OpenAI boss last November, before backing his reinstatement after days of internal tumult at the company.

Leike’s warning came as a panel of international AI experts released an inaugural report on AI safety, which found disagreement over the likelihood of powerful AI systems evading human control. It also cautioned that regulators could be left behind by rapid advances in the technology, warning of the “potential disparity between the pace of technological progress and the pace of a regulatory response”.
