Is OpenAI Imploding?

Seems like, 72 hours later, OpenAI has effectively imploded, having fallen victim to a mind virus and a subpar governance structure. The only soap-opera plot twist that would surprise me at this point would be finding out this was all ChatGPT 5.0’s handiwork :wink:

The question is what ChatGPT premium users should do until Sam is back at the buttons.


More context for Musk’s “Uh oh”:



Adam D’Angelo (on the board of OpenAI) and Dustin Moskovitz (backer of Open Philanthropy) are both part of the Facebook mafia.


“OpenAI is nothing without its people.” LOL. The only employee who might have understood my point (below) about the distinction between values bias and empirical bias (which is all that really matters for “AGI safety,” and all that really matters for economic value), and who was in a position to act on it, is now “disgraced”: Ilya Sutskever.



“As of 10am PT, 700 of 770 employees have signed the call for board resignation”

In this situation, increasing unanimity, now approaching 90%, sounds more like groupthink than honest opinion.

Talk about “alignment”!


Microsoft goes all out (tweet screenshot below). They are arguably the most exposed.

[screenshot of Microsoft’s tweet]


The whole sequence of events since last Friday is a great illustration of Carlo Cipolla’s ideas (wikipedia).

By the preponderance of information that has surfaced since Friday, it looks like the OpenAI board is firmly ensconced in the lower half of the diagram. Are they “bandits”?

Here is a link to the essay; it never fails to make me chuckle, especially with the retro-style illustrations.

[Cipolla’s quadrant diagram from the essay]


Agreed, but only if the metric you are thinking of is monetary reward. People on the board may be guided by some other metric. I do not know, so I am watching the unfolding of this drama with great interest.


There is no question that we’ve (mostly?) seen and heard predominantly one side of the story, and that narrative is informed by an incentive structure aligned with monetary gains.

As I wrote earlier, that’s the familiar storyline of the SV startup “hero’s journey” we’ve conditioned ourselves to rely on. The AI-safety angle is presumably valued more highly by the OpenAI board. Irrespective of how they thought about it, it’s more difficult for the general public to assign value to avoiding a poorly specified risk.

What I find more interesting is how brittle OpenAI turned out to be. NGO governance, according to the “experts” on Hacker News and elsewhere, seems generally to be viewed as quirky. What other known unknowns of this kind await other dominant startups in the current scene?


Oh, the terror of Adjusted Gross Income. It’s an existential threat!



Someone was using GPT-3?


Next, on Desperate Programmers


Forbes weighs in with the details (not behind a paywall at this writing).

OpenAI’s post on X about the news was edited: the original referred only to “Sam” coming back, and it was later amended to include his last name. After an eventful few days that captivated the tech community, the clarification may not have been necessary.

[screenshot of OpenAI’s post on X]

(Starts billion US$ company, building super-human artificial general intelligence. Can’t find Shift key on keyboard.)


“I, for one, welcome our new Microsoft overlords.”

Yes, that Larry Summers is on the new board.

Cue snark about his serving on the all-white-male board of a high-tech company in 3…2…1.


Sam Altman is widely credited with some kind of level-1000 verbal bamboozling skill.

It is still difficult to comprehend what prompted the former board to score this own goal of (at least on the interwebs) epic proportions.

D’Angelo’s continued presence on the new board seems to support the hypothesis that it was Helen Toner and Tasha McCauley who led (?) or inspired (?) the move to fire Altman and Brockman.

Board drama circa 2023 seems to involve a lot of tweeting and emojis. The WSJ reported yesterday on the behind-the-scenes dynamics and included this new (to this correspondent) insight (source):

The specter of effective altruism had loomed over the politics of the board and company in recent months, particularly after the movement’s most famous adherent, Sam Bankman-Fried, the founder of FTX, was found guilty of fraud in a highly public trial.

Some of those fears centered on Toner, who previously worked at Open Philanthropy. In October, she published an academic paper touting the safety practices of OpenAI’s competitor, Anthropic, which didn’t release its own AI tool until ChatGPT’s emergence.

“By delaying the release of Claude until another company put out a similarly capable product, Anthropic was showing its willingness to avoid exactly the kind of frantic corner-cutting that the release of ChatGPT appeared to spur,” she and her co-authors wrote in the paper.

Altman confronted her, saying she had harmed the company, according to people familiar with the matter. Toner told the board that she wished she had phrased things better in her writing, explaining that she was writing for an academic audience and didn’t expect a wider public one. Some OpenAI executives told her that everything relating to their company makes its way into the press.

Between the lines, and despite the proliferation of heart emojis, it seems like Toner’s actions may have been informed by her resentment of the path OpenAI was taking.

The article also reports:

The board agreed to discuss the matter with their counsel. After a few hours, they returned, still unwilling to provide specifics. They said that Altman wasn’t candid, and often got his way. The board said that Altman had been so deft they couldn’t even give a specific example, according to the people familiar with the executives.

That seems immature and not serious.


Previously on Desperate Programmers


https://www.reuters.com/technology/openais-board-approached-anthropic-ceo-about-top-job-merger-sources-2023-11-21/

Anthropic’s partnership with Google has played a significant role in this upgrade. The expanded partnership provides Anthropic with advanced processing hardware, enabling Claude to handle much larger contexts.

With its enhanced capabilities, Claude 2.1 can now process lengthy documents such as full codebases or novels. This opens up new possibilities for applications ranging from contract analysis to literary study.

What sets Claude 2.1 apart is its ability to accurately grasp information from prompts that are over 50 percent longer than what GPT-4 can handle before performance degradation occurs. This represents a substantial improvement and showcases the model’s superior performance.


We have witnessed the birth of most of the case studies for the MBA class of 2025.


“Heaven has no rage like love to hatred turned, nor hell a fury like a woman scorned.”

– William Congreve (1670-1729)

Why did Ilya, with a much higher than average intellect, go along with this?


While his ability in his professional field is high, it likely comes with some shortcomings on the social side, e.g. empathy, reading the room, etc. His somewhat pompous, dork-like manner during interviews and talks is an indication in support of that.

Potentially, he is a good target for a skilled emotional manipulator.


Just when you thought this season of Desperate Programmers was settling down into quiet plot development, Reuters reports:

https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

After being contacted by Reuters, OpenAI, which declined to comment, acknowledged in an internal message to staffers a project called Q* and a letter to the board before the weekend’s events, one of the people said. An OpenAI spokesperson said that the message, sent by long-time executive Mira Murati, alerted staff to certain media stories without commenting on their accuracy.

Some at OpenAI believe Q* (pronounced Q-Star) could be a breakthrough in the startup’s search for what’s known as artificial general intelligence (AGI), one of the people told Reuters. OpenAI defines AGI as autonomous systems that surpass humans in most economically valuable tasks.

Given vast computing resources, the new model was able to solve certain mathematical problems, the person said on condition of anonymity because the individual was not authorized to speak on behalf of the company. Though only performing math on the level of grade-school students, acing such tests made researchers very optimistic about Q*’s future success, the source said.

“Four times now in the history of OpenAI, the most recent time was just in the last couple weeks, I’ve gotten to be in the room, when we sort of push the veil of ignorance back and the frontier of discovery forward, and getting to do that is the professional honor of a lifetime,” he said at the Asia-Pacific Economic Cooperation summit.

A day later, the board fired Altman.

Now, what is this mysterious “Q*”?


No, not that Q (I hope).

Brian Roemmele suggests it’s related to “Q-learning,” which he describes in a pair of mega-𝕏 posts. The core update rule:


$$Q(s,a) \leftarrow Q(s,a) + \alpha \left[ r + \gamma \max_{a'} Q(s', a') - Q(s, a) \right]$$

Here, $\alpha$ is the learning rate, $\gamma$ the discount factor, $r$ the reward, $s$ the current state, $a$ the current action, and $s'$ the new state.
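
For concreteness, here is a minimal sketch of that update rule as plain tabular Q-learning in Python. The toy “chain” environment, the hyperparameters, and all names are invented for illustration; this is just the textbook algorithm, not anything actually known about Q*.

```python
import random

# Toy tabular Q-learning on a 1-D "chain" of 6 states (illustrative only).
# The agent starts at state 0; action 1 moves right, action 0 moves left.
# Reaching the last state yields reward 1 and ends the episode.

N_STATES = 6
ACTIONS = [0, 1]   # 0 = left, 1 = right
ALPHA = 0.1        # learning rate (alpha in the update rule above)
GAMMA = 0.9        # discount factor (gamma)
EPSILON = 0.1      # exploration rate for epsilon-greedy action selection

# Q-table: Q[s][a], initialized to zero.
Q = [[0.0 for _ in ACTIONS] for _ in range(N_STATES)]

def step(s, a):
    """Apply action a in state s; return (next_state, reward, done)."""
    s2 = max(0, s - 1) if a == 0 else min(N_STATES - 1, s + 1)
    done = (s2 == N_STATES - 1)
    return s2, (1.0 if done else 0.0), done

def choose_action(s):
    """Epsilon-greedy: mostly exploit the current Q estimates."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[s][a])

for episode in range(500):
    s, done = 0, False
    while not done:
        a = choose_action(s)
        s2, r, done = step(s, a)
        # The quoted update:
        # Q(s,a) <- Q(s,a) + alpha * [r + gamma * max_a' Q(s',a') - Q(s,a)]
        target = r + (0.0 if done else GAMMA * max(Q[s2]))
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s2

# The learned greedy policy for the non-terminal states should be
# "always move right", i.e. a list of 1s.
print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES - 1)])
```

Run it and the greedy policy converges to “always move right,” the shortest path to the reward; that is the whole trick, scaled up enormously in deep RL.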

