OpenAI brings back Sam Altman as CEO just days after his firing unleashed chaos

November 22, 2023 GMT
FILE - OpenAI CEO Sam Altman participates in a discussion during the Asia-Pacific Economic Cooperation (APEC) CEO Summit, Nov. 16, 2023, in San Francisco. Altman, the ousted leader of ChatGPT-maker OpenAI, is returning to the company that fired him late last week, the latest in a saga that has shocked the artificial intelligence industry. San Francisco-based OpenAI said in a statement late Tuesday, Nov. 21: “We have reached an agreement in principle for Sam Altman to return to OpenAI as CEO with a new initial board of Bret Taylor (Chair), Larry Summers, and Adam D’Angelo.” (AP Photo/Eric Risberg, File)

The ousted leader of ChatGPT maker OpenAI will return to the company that fired him just days ago, concluding a short but chaotic power struggle that shocked the tech industry and underscored the conflicts around how to safely build artificial intelligence.

The San Francisco-based company said late Tuesday that it “reached an agreement in principle” for co-founder Sam Altman to return as CEO under a different board of directors.

The agreement followed intense negotiations that began Saturday between Altman’s side and the board members who pushed him out. The discussions included disagreements about Altman’s future role and who would stay on the board, according to a person familiar with the talks who spoke on condition of anonymity because they were not allowed to speak publicly about such sensitive matters.

An independent investigation into Altman and the events that led to his ouster, announced earlier this week, will continue, according to the person, who described board members’ slow erosion of trust in the OpenAI leader without pointing to any serious wrongdoing. The company previously made unspecified allegations that Altman had not been candid with the board.

The lack of transparency surrounding Altman’s firing led to a weekend of internal conflict at the company and growing outside pressure from the startup’s investors, particularly Microsoft, which on Monday hired Altman and a key ally, OpenAI co-founder and president Greg Brockman, and opened its doors to any of the other more than 700 employees who wanted to join them.

The turmoil accentuated the differences between Altman — who has become the face of generative AI’s rapid commercialization since ChatGPT’s arrival a year ago — and board members who have expressed deep reservations about the safety risks posed by AI as it gets more advanced.

One of the four board members who participated in Altman’s ouster, OpenAI co-founder and chief scientist Ilya Sutskever, was involved in the negotiations over the weekend. But that changed when he publicly expressed regret about the decision Monday morning and joined the call for the board’s resignation.

The person familiar with the talks said board members did not want the company to tank or employees to defect to Microsoft. At the same time, they did not want to acquiesce to demands that they all step down, nor did they want to reinstate Altman and Brockman on the board or install new members who might not stand up to them, the person said.

In the end, most of them did step down.

The new board will be led by former Salesforce co-CEO Bret Taylor, who chaired Twitter’s board before Elon Musk took over the platform last year. The other members will be former U.S. Treasury Secretary Larry Summers and Quora CEO Adam D’Angelo, the only member of the previous board to stay on.

“The OpenAI episode shows how fragile the AI ecosystem is right now, including addressing AI’s risks,” said Johann Laux, an expert at the Oxford Internet Institute focusing on human oversight of artificial intelligence.

Before the board was replaced, venture capitalist Vinod Khosla, a vocal Altman supporter whose firm is an OpenAI investor, wrote in an opinion column at The Information that board members had set back the “tremendous benefits” of AI by misapplying their “religion of ‘effective altruism.’”

Some of OpenAI’s board members over the years have had ties to effective altruism, the philanthropic social movement that prioritizes donating to projects that will have the greatest impact on the largest number of people, including humans in the future.

While many effective altruists believe AI could offer powerful benefits, they also advocate for mitigating the technology’s potential risks.

Helping to drive Altman’s return and the installation of a new board was Microsoft, which has invested billions of dollars in OpenAI and has rights to its existing technology.

While promising to welcome OpenAI’s fleeing workforce, Microsoft CEO Satya Nadella also made clear in a series of interviews Monday that he was open to the possibility of Altman returning to OpenAI as long as the startup’s governance problems were solved.

“We are encouraged by the changes to the OpenAI board,” Nadella posted on X late Tuesday. “We believe this is a first essential step on a path to more stable, well-informed and effective governance.”

In his own post, Altman said that with the new board and with Satya’s support, he was “looking forward to returning to OpenAI and building on our strong partnership” with Microsoft.

Gone from the OpenAI board are its only two women: tech entrepreneur Tasha McCauley and Helen Toner, a policy expert at Georgetown’s Center for Security and Emerging Technology, both of whom have expressed concerns about AI safety risks.

The leadership drama offers a glimpse into how big tech companies are taking the lead in governing AI and its risks, while governments scramble to catch up. The European Union is working to finalize the world’s first comprehensive AI rules.

In the absence of regulations, “companies decide how a technology is rolled out,” said Oxford’s Laux.

Co-founded by Altman as a nonprofit with a mission to safely build AI that outperforms humans and benefits humanity, OpenAI later became a for-profit business — but one still run by its nonprofit board of directors.

This was not OpenAI’s first experience with executive turmoil. Past examples include a 2018 falling-out between board co-chairs Altman and Musk that led to Musk’s exit, and a later exodus of top leaders who started the competitor Anthropic.

It’s not clear yet if the board’s structure will change with its new members.

Under the current structure, all profit beyond a certain cap is supposed to go back to its mission of helping humanity. The board is also tasked with deciding when AI systems have become so advanced that they are better than humans “at most economically valuable work.” At that point, Microsoft’s intellectual property licenses no longer apply.

“We are collaborating to figure out the details,” OpenAI posted on social media. “Thank you so much for your patience through this.”

Nadella said Brockman, who was OpenAI’s board chairman until Altman’s firing, also will have a key role to play in ensuring the group “continues to thrive and build on its mission.”

As for OpenAI’s short-lived interim CEO Emmett Shear, the second temporary leader in the days since Altman’s ouster, he posted on X that he was “deeply pleased by this result” after about 72 “very intense hours of work.”

“Coming into OpenAI, I wasn’t sure what the right path would be,” wrote Shear, the former head of Twitch. “This was the pathway that maximized safety alongside doing right by all stakeholders involved. I’m glad to have been a part of the solution.”

The Associated Press and OpenAI have a licensing and technology agreement allowing OpenAI access to part of the AP’s text archives.

___

Associated Press writers Kelvin Chan in London and Thalia Beaty in New York contributed to this report.