OpenAI is building an international team of lobbyists to influence the politicians and regulators who are increasing their oversight of powerful artificial intelligence.
The San Francisco-based startup told the Financial Times that it has expanded its global affairs team from three people in early 2023 to 35. The company aims to grow that number to 50 by the end of 2024.
The push comes as governments explore and debate legislation around AI safety that risks limiting the startup’s growth and the development of its advanced models, which underpin products such as ChatGPT.
“We’re not approaching this from the perspective of, we should just step in and destroy the regulations . . . because we have no goal of maximizing profits; we aim to ensure that AGI benefits all of humanity,” said Anna Makanju, vice president of government affairs at OpenAI, referring to artificial general intelligence, the point at which machines have cognitive abilities equivalent to those of humans.
Although the global affairs department makes up a small portion of OpenAI’s 1,200 employees, it is the company’s most international unit, strategically positioned where AI legislation is most advanced. Staff are stationed in Belgium, the UK, Ireland, France, Singapore, India, Brazil and the US.
However, OpenAI still lags behind its Big Tech rivals in this outreach. According to US public disclosures, Meta spent a record $7.6 million lobbying the US government in the first quarter of this year, while Google spent $3.1 million and OpenAI $340,000. In terms of AI-specific advocacy, Meta has named 15 lobbyists, Google five and OpenAI just two.
“When [I] walked through the door, [ChatGPT had] 100 million users [but the company had] three people to do public policy,” said David Robinson, head of policy planning at OpenAI, who joined the company last May after a career in academia and consulting for the White House on its AI policy.
“It literally got to the point where there was someone at a high level who wanted to have a conversation, and there was no one to answer the phone,” he added.
However, OpenAI’s global affairs unit does not handle the most fraught regulatory matters. Those fall to the legal team, which is dealing with British and American regulators’ review of the company’s $13 billion alliance with Microsoft; the US Securities and Exchange Commission’s investigation into whether chief executive Sam Altman misled investors during his brief ouster by the board in November; and the US Federal Trade Commission’s consumer protection investigation into the company.
Instead, OpenAI’s lobbyists focus on the spread of AI legislation. Britain, the US and Singapore are among the many countries working out how to govern AI, and they are consulting closely with OpenAI and other tech companies on proposed regulations.
The company was involved in discussions around the EU’s AI Act, passed this year, one of the most advanced pieces of legislation seeking to regulate powerful AI models.
OpenAI was among the AI companies that argued that some of their models should not be considered “high risk” in early drafts of the law, a designation that would have subjected them to stricter rules, according to three people involved in the negotiations. Despite this pressure, the company’s most capable models will be covered by the law.
OpenAI also argued against the EU’s push to examine all data provided to its foundation models, according to people familiar with the negotiations.
The company told the FT that pre-training data – the datasets used to give large language models a broad understanding of language and patterns – should fall outside the scope of the regulations because it is a poor way to understand an AI system’s outputs. Instead, it suggested the focus should be on post-training data, which is used to fine-tune models for a given task.
The EU has decided that regulators can still request access to the training data of high-risk AI systems to ensure it is free of errors and bias.
Since the EU law was passed, OpenAI has hired Chris Lehane, who worked for President Bill Clinton and for Al Gore’s presidential campaign and was Airbnb’s policy chief, as vice president of public works. Lehane will work closely with Makanju and her team.
OpenAI also recently poached Jakob Kucharczyk, a former competition lead at Meta. Sandro Gianella, head of European policy and partnerships, joined in June last year after stints at Google and Stripe, while James Hairston, head of international policy and partnerships, joined from Meta in May last year.
The company was recently involved in a series of discussions with policymakers in the US and other markets around its Voice Engine model, which can clone and create custom voices. OpenAI limited the model’s release plans after concerns about the risks of how it could be misused in the context of this year’s global elections.
The team has held workshops in countries facing elections this year, such as Mexico and India, and published guidelines on disinformation. In autocratic countries, OpenAI offers one-on-one access to its models to “trusted individuals” where it believes it is not safe to release the products.
A government official who worked closely with OpenAI said another concern for the company was ensuring that any rules would remain flexible and would not become outdated as new scientific or technological developments emerge.
OpenAI hopes to address some of the hangovers from the social media era, which Makanju says has led to a “general distrust of Silicon Valley companies.”
“Unfortunately, people often view AI through the same lens,” she added. “We’re spending a lot of time making sure that people understand that this technology is very different, and that the regulatory interventions that make sense for it are going to be very different.”
However, some industry figures are critical of OpenAI’s lobbying expansion.
“Initially, OpenAI recruited people deeply involved in AI policy and specialists, while now they just hire average technology lobbyists, which is a very different strategy,” said one person who has worked directly with OpenAI on creating legislation.
“They just want to influence lawmakers in ways that Big Tech has been doing for more than a decade.”
Robinson, OpenAI’s head of policy planning, said the global affairs team has more ambitious goals. “The mission is safe and broadly beneficial, so what does that mean? It means creating laws that not only let us innovate and bring useful technology to people, but also put us in a world where the technology is safe.”
Additional reporting by Madhumita Murgia in London