Is OpenAI Imploding?

A few things were announced Friday afternoon (source), followed by Greg Brockman’s sudden announcement on Twitter that he was resigning.

And the Internet is abuzz with rumors about why…


Satya Nadella’s statement on the Microsoft blog does not read as particularly encouraging (source).


I guess Satya found 5 minutes to call Ilya and Mira and share some thoughts with them :wink:

This was reported earlier today (source)

The OpenAI board is in discussions with Sam Altman to return to the company as its CEO, according to multiple people familiar with the matter. One of them said Altman, who was suddenly fired by the board on Friday with no notice, is “ambivalent” about coming back and would want significant governance changes.

Altman holding talks with the company just a day after he was ousted indicates that OpenAI is in a state of free-fall without him. Hours after he was axed, Greg Brockman, OpenAI’s president and former board chairman, resigned, and the two have been talking to friends and investors about starting another company. A string of senior researchers also resigned on Friday, and people close to OpenAI say more departures are in the works.


OpenAI’s Chief Scientist Ilya Sutskever’s recent tweets are revealing:

“if you value intelligence above all other human qualities, you’re gonna have a bad time”
“Ego is the enemy of growth”
“In the future, once the robustness of our models will exceed some threshold, we will have wildly effective and dirt cheap AI therapy. Will lead to a radical improvement in people’s experience of life. One of the applications I’m most eagerly awaiting.”
“Empathy in life and business is underrated”
“Just heard Jan’s podcast on AI alignment. Such an underrated and important talk. Even an absolute beginner could understand. Happy to see AI alignment seems to be in the right hands.”
“AGI pilled-ness is a spectrum”

More about him:

It looks like a smart man got played emotionally - in an area of his weakness.


Ilya Sutskever was the person in a powerful position most likely to understand the importance of lossless compression as information criterion for model selection*. This made him extraordinarily dangerous to the established order of social pseudoscience.

“Do not expect to see animals always behaving in such a way as to maximize their own inclusive fitness. Losers in an arms race may behave in some very odd ways indeed. If they appear disoriented and unsure of their footing, this may be only the beginning.”

– Richard Dawkins, “The Extended Phenotype”

*Up until Shane Legg’s most recent interview I might have said he was. But I had been wondering, for years, why there had been no support for anything remotely like The Hutter Prize coming from anyone but Marcus Hutter (and to a tiny degree my recent bitcoin donations), since Legg was Hutter’s first PhD student. When Legg blathered the standard nonsense about “arbitrary choice of UTM” as invalidating lossless compression, I realized he’d probably succumbed to Sutskever’s fate almost as soon as he cofounded DeepMind about a decade ago. Sociology remains almost entirely a pseudoscience not just because of bad actors, but because potentially good actors seem to be preferentially neutralized as soon as they get close to disciplining it. The structure of this control system would be a top priority of research in true sociology.




AI safety, understood to be the absence of “bad outcomes”, is intrinsically hard to measure or to have an intuition about. In contrast, OpenAI’s commercial endeavors fall on a trajectory of success that is well understood relative to other high tech successful startups in the last 20 years.

That, along with the influence companies like Microsoft have in shaping the “industry narrative”, intrinsically tilts the balance towards Altman’s side.


The Financial Times has a front-page story today regarding the effort to squeeze the toothpaste back into the tube (paywalled source).

Top OpenAI executives have swung behind Sam Altman, as a groundswell of support from employees and investors raised pressure on board members to reinstate the chief executive of the company behind ChatGPT who they sacked on Friday.

Venture backers and executives at Microsoft — which has committed more than $10bn to OpenAI and is expected to have a key role in negotiations — were exploring options this weekend including clearing out the board and reinstating Altman, according to three people briefed on the discussions.

In a memo circulated to staff of the generative artificial intelligence company late on Saturday night, chief strategy officer Jason Kwon said he was “optimistic” that Altman and co-founder Greg Brockman — who quit on Friday — would return, according to a person with direct knowledge of its contents. The memo’s existence was first reported by The Information.

The article also reports on a theory linking Altman’s ousting to his efforts to raise money from the Middle East and Masa-san in an attempt to jumpstart a device ecosystem that might compete with Nvidia and TSMC. Which quite frankly makes no sense whatsoever, but whatever…

It’s starting to look more and more like a B-movie plot.


Well, yes, it is encouraging that Musk came out and flat out said “AI is compression”. People have been catching on to the long-understood idea, taking various forms throughout the ages, that compression is closely related to intelligence. But they almost always conflate lossy compression (Ockham’s Chainsaw Massacre) with lossless compression (Ockham’s Razor), since “everyone knows what noise looks like”. What they ignore is that signal can also “look like” noise.

I’d have been more encouraged if Musk had said “lossless”, as did Sutskever implicitly by referring to Kolmogorov Complexity in this Simons Institute talk:

But there are various off-ramps that people take beyond lossy vs lossless, not the least of which is the conflation of induction (model creation which is lossless compression of the data in evidence) with deduction (model application during prediction which is extrapolation from the data in evidence via decompression of the model to provide algorithmic probability distributions). And on top of that, they often conflate data selection, model creation, model selection and model application – all of which is necessary for “intelligence”. All I’ve been trying to do is get people to understand that their various loss functions need to be compared against the gold standard loss function: lossless compression.
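The equivalence being invoked here is the Shannon/arithmetic-coding bound: a probabilistic model that assigns probability p to the next symbol can losslessly encode it in −log2(p) bits, so a model’s total lossless code length on a dataset equals its cross-entropy loss in bits. A toy sketch of my own (the `bernoulli_09` model and all names are illustrative, not from any post above) makes the point that a better predictor is, literally, a better compressor:

```python
import math

def code_length_bits(data, model):
    """Ideal lossless code length of `data` under `model`, in bits.

    An arithmetic coder driven by `model` can encode each symbol in
    -log2(P(symbol | context)) bits, so the total equals the model's
    cross-entropy loss summed over the data. Lower is better.
    """
    bits = 0.0
    context = []
    for symbol in data:
        p = model(context, symbol)  # P(symbol | context); must be > 0
        bits += -math.log2(p)
        context.append(symbol)
    return bits

# Toy model: a fixed Bernoulli(0.9) predictor for a binary stream.
def bernoulli_09(context, symbol):
    return 0.9 if symbol == 1 else 0.1

data = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1]
print(code_length_bits(data, bernoulli_09))  # ~7.86 bits for 10 symbols
```

A model that predicted the stream better would assign higher probabilities and shrink the code length below 10 bits; one that predicted it worse would inflate it above 10. Comparing models by this number is exactly “lossless compression as the gold-standard loss function”.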



Bloomberg reports the interim CEO is looking to rehire Altman and Brockman (paywalled source).

OpenAI’s interim Chief Executive Officer Mira Murati plans to re-hire her ousted predecessor Sam Altman and former President Greg Brockman in a capacity that has yet to be finalized, according to people with direct knowledge of the matter.

Murati, who was installed on Friday after the board fired Altman, is negotiating with Adam D’Angelo, the CEO of Quora Inc., who is acting as a representative of the OpenAI board, said the people, who asked not to be identified because the negotiations are private and remain fluid. Brockman, who was also a board member, quit in protest hours after Altman’s departure.

Given that Quora built Poe (source), which arguably competes with ChatGPT, does D’Angelo have a big conflict of interest in his OpenAI board role? At launch, it was reported that Poe was built using models from OpenAI and Anthropic.


The Information reported a few minutes ago that discussions to bring back Altman have broken down and the new OpenAI interim CEO is Emmett Shear, the current CEO of Twitch (paywalled source)

Sam Altman won’t return as CEO of OpenAI, despite efforts from the company’s executives to bring him back, co-founder and board director Ilya Sutskever told staff Sunday night. After a weekend of negotiations with the board of directors that fired him Friday, as well as with its remaining leaders and top investors, Altman will not return to the startup he co-founded in 2015, this person said. Emmett Shear, co-founder of video streaming site Twitch, will take over as interim CEO, Sutskever said.

The decision could deepen a crisis precipitated by the board’s sudden ouster of Altman and its removal of President Greg Brockman from the board Friday. Brockman resigned later that day, followed by three senior researchers, threatening to set off a broader wave of departures to OpenAI’s rivals, including Google, and to a new venture Altman has been plotting in the wake of his firing.

Interestingly, this is attributed to Ilya Sutskever. So, is OpenAI imploding?

Next breaking news will undoubtedly have Elon Musk clear his agenda and somehow find it within him to take another turn at the wheel with OpenAI :wink:

Un-paywalled Atlantic article on the “chaos at OpenAI”

The more confident Sutskever grew about the power of OpenAI’s technology, the more he also allied himself with the existential-risk faction within the company. For a leadership offsite this year, according to two people familiar with the event, Sutskever commissioned a wooden effigy from a local artist that was intended to represent an “unaligned” AI—that is, one that does not meet a human’s objectives. He set it on fire to symbolize OpenAI’s commitment to its founding principles. In July, OpenAI announced the creation of a so-called superalignment team with Sutskever co-leading the research. OpenAI would expand the alignment team’s research to develop more upstream AI-safety techniques with a dedicated 20 percent of the company’s existing computer chips, in preparation for the possibility of AGI arriving in this decade, the company said.


Emmett Shear does not leave much to the imagination regarding which side of the AI risk debate he is on. Hint: Eliezer will be happy.

Meanwhile, Satya is looking for that damn bottle of Wite-Out to cover Mira’s name in Friday’s two-sentence blogpost. Here’s one simple trick: leave the name blank.


But… also, oh no. Please not there.


After a weekend that seemed like an episode of a soap opera, say, Hex and the City or Desperate Programmers, OpenAI has announced that Emmett Shear, former CEO of Twitch, a video streaming service acquired by Amazon in 2014, has become, depending upon which report you read, either interim CEO or permanent CEO. In a Bloomberg report, published at 11:57 UTC on 2023-11-20, they’re going with permanent. Here’s the report, courtesy of Yahoo!, to evade the paywall.

Instead, OpenAI’s board replaced Altman with Emmett Shear, the former CEO of Twitch, in a stinging rebuke to its investors. Nadella then announced that he had recruited Altman and Greg Brockman, another OpenAI co-founder who had quit in protest, to lead a newly formed in-house AI research unit at Microsoft, casting a shadow over the future of its prized investment.

Now OpenAI, which was in talks with investors about a $86 billion valuation, risks an exodus of employees to Microsoft and other rivals, and an end to its stunning growth. And the startup will face close scrutiny over its ability, or willingness, to turn its cutting-edge AI into profits.

Altman’s replacement, Shear, stepped down as CEO of Amazon.com Inc.’s game-streaming site Twitch earlier this year. He won over directors at OpenAI because of his past recognition of the existential threats that AI presented, a person familiar with the matter said.

The fallout from the board’s decision to defy investors is likely to be widespread. Thrive Capital had been expected to lead an offer for employee shares, a deal that would value OpenAI at $86 billion. As of this weekend, the firm had not yet wired the money and it told OpenAI that Altman’s departure will affect its actions.

Like some members of OpenAI’s board, Shear has ties to the sometimes controversial effective altruism movement, which sees serious risks from advanced AI. Many effective altruists — a pseudo-philosophical movement that seeks to donate money to head off existential risks — have imagined scenarios in which a powerful AI system could wreak widespread harm.

Ahh, here comes “Effective Altruism” (EA) again. They appear to have won this clash of EA versus e/acc.

This Emmett Shear appears to be a piece of work. Here are some random snippets collected by people across 𝕏.




“[A]ctual, literal Nazis take over…”.

This is a 23-minute interview with Shear from June 2023 on his reaction to artificial-intelligence doom theories.

Microsoft CEO Satya (“Blue Screen of Apu”) Nadella says all is well between Microsoft and OpenAI.


Oh, and we’re hiring Sam Altman, fired OpenAI CEO, and Greg Brockman, deposed OpenAI board chairman who quit later that day, to “lead a new advanced AI research team”.

On 2023-11-18, SEMAFOR, founded by Ben Smith, former editor of BuzzFeed News, and whose original funding came in large part from Sam Bankman-Fried, reported:

So, if this is to be believed, a large part of Microsoft’s US$10 billion “investment” in OpenAI was not cash but actually credits for use of computing services on Microsoft’s Azure cloud service, presumably for training OpenAI models and running their services. This is, presumably, a spigot which can be turned off as easily as turned on.

Given that the popular-culture nightmare scenario of an AI apocalypse is the “paperclip maximiser”, does anybody else see the exquisite irony that the two people most seen as promoting the emergence of superhuman AI have just gone to work for the company whose first foray into mass-market “artificial intelligence” was Clippy?


Will there be a mass migration of talented developers from OpenAI to the new venture at Microsoft to escape their new EA overlords and the dead hand of decelerationism? Well, they can’t be too happy with what happened to their financial prospects since last Friday morning.



I was hoping that OpenAI would appoint Skynet as CEO.


Now Ilya finally realizes it:
[Screenshot of Sutskever’s tweet, 2023-11-20]

The banger:

The rumor is that the board coup was driven by folks affiliated with this: