ChatGPT can talk, but OpenAI workers certainly cannot

On Monday, OpenAI announced exciting new product news: ChatGPT can now talk like a human.

It has a cheerful, slightly endearing female voice that sounds impressively non-robotic, and a little familiar if you’ve seen a certain Spike Jonze film from 2013. “Her,” tweeted OpenAI CEO Sam Altman, referring to the film where a man falls in love with an AI assistant, voiced by Scarlett Johansson.

But the release of GPT-4o was quickly overshadowed by much bigger news out of OpenAI: the resignation of the company’s co-founder and chief scientist, Ilya Sutskever, who also led the superalignment team, and that of his co-team leader, Jan Leike (whom we put on the Future Perfect 50 list last year).

The resignations did not come as a total surprise. Sutskever had been involved in the boardroom revolt that led to Altman’s temporary firing last year, before the CEO quickly returned to his post. Sutskever publicly regretted his actions and backed Altman’s return, but he has since been largely absent from the company, even as other members of OpenAI’s policy, alignment, and safety teams have left.

But what really fueled the speculation was the radio silence from former employees. Sutskever posted a fairly typical goodbye message, stating: “I’m confident that OpenAI will build AGI that is both safe and beneficial… I am excited for what comes next.”

Leike… did not. His resignation message simply read: “I resigned.” After several days of fervent speculation, he elaborated on Friday morning, explaining that he was concerned OpenAI had shifted away from a safety-focused culture.

Questions immediately arose: were they forced to leave? Is this delayed fallout from Altman’s brief firing last fall? Are they resigning in protest against some secret and dangerous new OpenAI project? Speculation filled the void because almost no one who had once worked at OpenAI was talking.

It turns out there is a very clear reason for this. I have seen the extremely restrictive off-boarding agreement that contains non-disclosure and non-disparagement provisions to which former OpenAI employees are subject. It prohibits them from criticizing their former employer for the rest of their lives. Even acknowledging that the NDA exists is a violation of it.

If a departing employee declines to sign the document, or violates it, they can lose all the vested equity they earned during their time at the company, which is likely worth millions of dollars. One former employee, Daniel Kokotajlo, who posted that he left OpenAI “because he lost confidence that it would behave responsibly around the time of AGI,” has publicly confirmed that he had to surrender what would likely have been a huge sum of money in order to quit without signing the document.

While non-disclosure agreements are not unusual in highly competitive Silicon Valley, putting an employee’s already-vested equity at risk for declining to sign one, or for violating it, is. For employees at startups like OpenAI, equity is an essential form of compensation, one that can dwarf the salary they earn. Holding potentially life-changing money over former employees’ heads is a very effective way to keep them quiet. (OpenAI did not respond to a request for comment.)

All of which is highly ironic for a company that initially advertised itself as OpenAI: that is, as committed in its mission statements to building powerful systems in a transparent and accountable way.

OpenAI long ago abandoned the idea of open-sourcing its models, citing safety concerns. But now it has shed the most senior and respected members of its safety team, which should inspire some skepticism about whether safety is really the reason why OpenAI has become so closed.

The tech company to end all tech companies

OpenAI has long held an unusual position in technology and policy circles. Its releases, from DALL-E to ChatGPT, are often very cool, but by themselves they would hardly attract the near-religious fervor with which the company is often discussed.

What sets OpenAI apart is the ambition of its mission: “to ensure that artificial general intelligence – AI systems that are generally smarter than humans – benefits all humanity.” Many employees believe that this goal is within reach; that with perhaps another decade (or even less) – and a few trillion dollars – the company will succeed in developing AI systems that eliminate most human labor.

Which, as the company itself has long said, is as risky as it is exciting.

“Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems,” according to a recruitment page for Leike and Sutskever’s team at OpenAI. “But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction. While superintelligence seems far off now, we believe it could arrive this decade.”

If artificial superintelligence were possible within our lifetime (and experts are divided), it would obviously have enormous consequences for humanity. OpenAI has historically positioned itself as a responsible player that seeks to transcend mere commercial incentives and create AGI for the benefit of all. And they have said they are willing to do that even if it means slowing development, missing out on profit opportunities or allowing outside oversight.

“We don’t think AGI should just be a Silicon Valley thing,” OpenAI co-founder Greg Brockman told me in 2019, in the much quieter pre-ChatGPT days. “We’re talking about world-changing technology. And how do you ensure the right representation and governance there? This is actually a very important focus for us and something we really want to have broad input on.”

OpenAI’s unique corporate structure (a capped-profit company ultimately controlled by a nonprofit) was intended to increase accountability. “No one person should be trusted here. I don’t have super-voting shares. I don’t want them,” Altman assured Bloomberg’s Emily Chang in 2023. “The board can fire me. I think that’s important.” (As the board discovered last November, it could fire Altman, but it couldn’t make the move stick. After his firing, Altman struck a deal to effectively take the company to Microsoft, before eventually being reinstated, with most of the board resigning.)

But there was no stronger sign of OpenAI’s commitment to its mission than the prominent roles of people like Sutskever and Leike, technologists with a long history of commitment to safety and a seemingly genuine willingness to ask OpenAI to change course when needed. When I said to Brockman in that 2019 interview, “You’re saying, ‘We’re going to build a general artificial intelligence,’” Sutskever cut in. “We’re going to do everything that can be done in that direction while also making sure that we do it in a way that’s safe,” he told me.

Their departure does not herald a change in OpenAI’s mission: building artificial general intelligence remains the goal. But it almost certainly heralds a change in OpenAI’s interest in safety work; the company has not yet announced who will lead the superalignment team.

And it makes clear that OpenAI’s commitments to external oversight and transparency couldn’t have run all that deep. If you want outside oversight and want the rest of the world to have a say in what you’re doing, making former employees sign extremely restrictive NDAs isn’t exactly the way to go.

Changing the world behind closed doors

This contradiction is at the heart of what makes OpenAI so frustrating for those of us who care deeply about AI actually going well and benefiting humanity. Is OpenAI a buzzy if midsize tech company that makes a chatty personal assistant, or a trillion-dollar effort to create an AI god?

The company’s leadership says they want to transform the world, they want to be accountable when they do so, and they welcome the world’s input on how to do this fairly and wisely.

But when real money is at stake (and there are astonishing amounts of real money at stake in the race to dominate AI), it becomes clear that the world was probably never meant to get all that much input. Their process ensures that former employees, the people who know the most about what’s happening inside OpenAI, can’t tell the rest of the world what’s going on.

The website may have lofty ideals, but their termination agreements are full of hard-nosed legalese. It is difficult to hold a company accountable when its former employees are restricted to saying “I resigned.”

ChatGPT’s new cute voice may be charming, but I’m not particularly charmed.

A version of this story originally appeared in the Future Perfect newsletter. Sign up here!
