This was just announced by the OpenAI Twitter account:

Implicitly, the previous board members associated with EA, Helen Toner and Tasha McCauley, are ("in principle") no longer going to be part of the board. 

I think it would be useful to have, in the future, a postmortem of what happened, from an EA perspective. EA had two members on the board of arguably the most important company of the century, and it has just lost them after several days of embarrassment. I think it would be useful for the community if we could get a better idea of what led to this sequence of events.

[update: Larry Summers said in 2017 that he likes EA.]

Habryka

EAs out of the board

Let's please try to avoid phrasing things in ways as tribal as this. We have no idea what happened. Putting the identity of the board members front and center feels like it frames the discussion in a bad way.

JWS

I mean, it is literally accurate: the two EAs previously on the board are gone.[1] It's part of the story, though not all of it. Honestly @Fermi–Dirac Distribution, if I were you I'd consider changing it back, but obviously your call.

The individuals in question are also on the boards of important EA organisations, which feels clearly germane to the Forum. Their performance as board members in this episode surely has implications for their stewardship of those organisations and, by proxy, their influence on EA as a whole.

We don't have the full picture of what happened. But some things are becoming clearer. And it's very difficult to separate 'waiting for information' from 'pleading no contest in the court of public opinion', which is what the board seems to have done.

  1. ^

    In principle, given this story, who knows if that'll last.

I think the general point is that this makes sense from a charitable perspective, but is open to a fair degree of uncharitable impressions as well. When you say "EAs are out" it seems like we want some of our own on the inside, as opposed to just sensible, safety-concerned people.

It kind of implies EAs are uniquely able to conduct the sort of safety-conscious work we want, when really (I think) as a community what we care about is having anyone on there who can serve as a ready tonic to for-profit pressures.

What succinct way to put this is better? "Safety is out" feels slightly better, but it's still making some sort of claim that we have unique province here. So idk, maybe we just need slightly longer expressions like "Helen Toner and Tasha McCauley have done really great work and without their concern for safety we're worried about future directions of the board" or something like that.

(the other two paragraphs of yours focus, somewhat confusingly, on the idea that labeling the board members as EAs is necessary for considering the impact of this on EA (and on their ability to govern in EA), which I think is best discussed as its own separate point?)

I find myself pretty confused by this reply, Tristan. I'm not trying to be rude, but in some cases I don't really see how it follows.

When you say "EAs are out" it seems like we want some of our own on the inside, as opposed to just sensible, safety-concerned people.

I disagree. I think it's a statement of fact. The EAs who were on the board will no longer be on the board. They're both senior EAs, so I don't think it's an irrelevant detail for the Forum to consider. I also think it's a pretty big stretch to go from 'EAs are out' to 'only EAs can be trusted with AI Safety', like I just don't see that link being strong at all, and I disagree with it anyway

What succinct way to put this is better? "Safety is out" feels slightly better, but it's still making some sort of claim that we have unique province here. So idk, maybe we just need slightly longer expressions like "Helen Toner and Tasha McCauley have done really great work and without their concern for safety we're worried about future directions of the board" or something like that.

Perhaps an alternative could have been "Sam Altman returning as OpenAI CEO, major changes to board structure agreed?" or something like that?

As for your expression, I guess I just disagree with it, or think it's lacking evidence? I definitely wouldn't want to cosign it or state that it's an EA-wide position.

To avoid uncertainty about what I mean here:

  • I'm familiar with Toner's work a little, and it looks good to me. I have basically ~0 knowledge of what 'great work' McCauley has done, or how she ended up on the board, or her positions on AI Safety or EA in general
  • I don't think not having these members of the board means I should be worried about the future of the OpenAI board or the future of AI Safety
  • In terms of their actions as board members: the drastic action they took on Friday without any notice to investors or other stakeholders, combined with losing Ilya, nearly losing the trust of their newly appointed CEO, the complete radio silence, and losing the support of ~95% of the employees of the entity they were board members of,[1] leaves me with serious doubts about their performance, their suitability as board members of any significant organisation, and their ability to handle crises of this magnitude.

But see below; I think these issues are best discussed somewhere else.

(the other two paragraphs of yours focus, somewhat confusingly, on the idea that labeling the board members as EAs is necessary for considering the impact of this on EA (and on their ability to govern in EA), which I think is best discussed as its own separate point?)

I agree that the implications of this for EA governance are best discussed in another place/post entirely, but it's an issue I think does need to be brought up, perhaps when the dust has settled a bit and tempers on all sides have cooled.

I don't know where I claim that labelling EAs is necessary for discussing the impacts of this at all. Like I really just don't get it - I don't think that's true about what I said and I don't think I said it or implied it 🤷‍♂

  1. ^

    including most of the AI-safety identifying people at OpenAI as far as I can tell

I tried to explain why you may not want to put it that way, i.e. that there's perhaps an issue of framing here, and you first reply "but the statement is true" and essentially miss the point.

I'll briefly respond to one other point, but then want to reframe this because the confusion here seems unproductive to me (I'm not sure where our views differ and I don't think our responses are helping to clarify that for one another). The original comment was expressing a view like "using the phrase 'EAs are out' is probably a bad way to frame this". You responded "but it's literally true" and then went on to talk about how discussing this seems important for EA. But no one's implying it's not important for us to discuss? The argument is not "let's not talk about their relations to EA"; it's a framing thing. So I think you're either mistaken about what the claim is here, or you just wrote this in a somewhat confusing manner, where you started talking about something new and unrelated to the original point in your second paragraph.

To reframe: I'd perhaps want you to think on a question: what does it mean for us to be concerned that EAs are no longer on the board? Untangling why we care, and how we can best represent that, was the goal of my comment. To this end, I found the bits where you expand on your opinions on Toner and the board generally to be helpful.

You could just say ‘Helen Toner and Tasha McCauley are out’.

Larry Summers has said he agrees with at least some EA arguments. So the accuracy of ‘EAs are out’ is ambiguous, except in the sense of ‘people who have an explicit identification with the EA community’. Which seems tribal.

I changed the title for that reason, but it seems that people disagreed with my decision/reasoning

Oh interesting haha

Current reporting is that 'EAs out of the board' (starting with expelling Toner for 'criticizing' OA) was the explicit description/goal told to Sutskever shortly before, with reasons given like avoiding 'being painted in the press as "a bunch of effective altruists," as one of them put it'.

Unclear whether this makes it better or worse to be endorsing that framing

How does it frame the discussion in a bad way? I thought this was a succinct way to describe an aspect of OpenAI's announcement that is very relevant to the EA Forum.

I changed the title because I learned that Summers likes EA

It's a disappointing outcome - it currently seems that OpenAI is no more tied to its nonprofit goals than before. A wedge has been driven between the AI safety community and OpenAI staff, and to an extent, Silicon Valley generally.

But in this fiasco, we at least were the good guys! The OpenAI CEO shouldn't control its nonprofit board, or compromise the independence of its members, who were doing broadly the right thing by trying to do research and perform oversight. We have much to learn.

Hey Ryan :)

I definitely agree that this situation is disappointing, that there is a wedge between the AI safety community and Silicon Valley mainstream, and that we have much to learn.

However, I would push back on the phrasing “we are at least the good guys” for several reasons. Apologies if this seems nitpicky or uncharitable 😅 it just caught my attention and I hoped to start a dialogue.

  1. The statement suggests we have a much clearer picture of the situation and factors at play than I believe anyone currently has (as of 22 Nov 2023)
  2. The “we” phrasing seems to suggest that the members of the board in question are representative of EA as a group:
     a. I don’t believe their views or actions are well enough known to assess how in line they are with general EA sentiment
     b. I don’t think the EA community has a strong consensus on the issues of this particular case
  3. Many people, in good faith and with substantive arguments, come to the opposite conclusion and see the actions of the board as having been “bad”, and are highly critical of the potential influence EA may have had in the situation
  4. Flattening the situation to “good guys” and “bad guys” seems to be a bit of a rhetorical trap that is risky from an epistemological perspective. (I’m sure you have much more nuanced underlying reasons and pieces of evidence, and I completely respect using a rhetorical shortcut to simplify a point - apologies for the pedantry!)

Maybe on a more interesting note, I actually interpret this case quite differently and think that the board made a serious mistake and come out of this as the “less favorable” party. I’d love to discuss in more depth about your reasons for seeing their actions positively and would be happy to share more about why I see them negatively if you’re interested 😊

2 - I'm thinking more of the "community of people concerned about AI safety" than EA.

1,3,4- I agree there's uncertainty, disagreement and nuance, but I think if NYT's (summarised) or Nathan's version of events is correct (and they do seem to me to make more sense than other existing accounts) then the board looks somewhat like "good guys", albeit ones that overplayed their hand, whereas Sam looks somewhat "bad", and I'd bet that over time, more reasonable people will come around to such a view.

2- makes sense!

1,3,4- Thanks for sharing (the NYT summary isn’t working for me unfortunately) but I see your reasoning here that the intention and/or direction of the attempted ouster may have been “good”.

However, I believe the actions themselves represent a very poor approach to governance and demonstrate a very narrow focus that clearly didn’t appropriately consider many of the key stakeholders involved. Even assuming the best intentions, from my perspective, when a person has been placed on the board of such a consequential organization and is explicitly tasked with helping to ensure effective governance, the degree to which this situation was handled poorly is enough for me to come away believing that the “bad” of their approach outweighs the potential “good” of their intentions.

Unfortunately, it seems likely that this entire situation will wind up backfiring relative to what was (we assume) intended, by creating a significant amount of negative publicity for and sentiment towards the AI safety community (and EA). At the very least, there is now a new (all male 🤔 but that’s a whole other thread to expand upon) board with members that seem much less likely to be concerned about safety. And now Sam and the less cautious cohort within the company seem to have a significant amount of momentum and goodwill behind them internally, which could embolden them along less cautious paths.

To bring it back to the “good guy bad guy” framing. Maybe I could buy that the board members were “good guys” as concerned humans, but “bad guys” as board members.

I’m sure there are many people on this forum who could define my attempted points much more clearly in specific philosophical terms 😅 but I hope the general ideas came through coherently enough to add some value to the thread.

Would love to hear your thoughts and any counter points or alternative perspectives!

Good points in the second paragraph. While it's common in both nonprofits and for-profits to have executives on the board, it seems like a really bad idea here. Anyone with a massive financial interest in the for-profit taking an aggressive approach should not be on the non-profit's board. 

New York Times suggesting a more nuanced picture: https://archive.li/lrLzK

Altman was critical of Toner's recent paper, discussed ousting her, and wanted to expand the board. The board disagreed on which people to add, leading to a stalemate. Ilya suddenly changed position, and the board took abrupt action.

They don't offer an explanation of what the 'dishonesty' would have been about.

This is the paper in question, which I think will be getting a lot of attention now: https://cset.georgetown.edu/publication/decoding-intentions/

How can policymakers credibly reveal and assess intentions in the field of artificial intelligence? AI technologies are evolving rapidly and enable a wide range of civilian and military applications. Private sector companies lead much of the innovation in AI, but their motivations and incentives may diverge from those of the state in which they are headquartered. As governments and companies compete to deploy evermore capable systems, the risks of miscalculation and inadvertent escalation will grow. Understanding the full complement of policy tools to prevent misperceptions and communicate clearly is essential for the safe and responsible development of these systems at a time of intensifying geopolitical competition.

In this brief, we explore a crucial policy lever that has not received much attention in the public debate: costly signals.

[anonymous]

"A person with direct knowledge of the negotiations says that the sole job of this small, initial board is to vet and appoint a new formal board of up to 9 people that will reset the governance of OpenAI. Microsoft, which has invested over $10 billion into OpenAI, will likely have a seat on that expanded board, as will Altman himself."

https://www.theverge.com/2023/11/22/23967223/sam-altman-returns-ceo-open-ai

[anonymous]

"As part of a compromise deal to return to OpenAI, neither Altman nor former OpenAI President Greg Brockman, who also departed Friday, will reclaim their seats on the company's board, this person said."

https://www.theinformation.com/articles/breaking-sam-altman-to-return-as-openai-ceo

How long will that last though?

It is also the case that the two women board members are now off the board. So I would also like to hear what happened from a woman's perspective. Was it another case of powerful men not wanting to cede authority to women who are occupying positions of authority?

There could be many layers to what happened.

I am amazed that the EA community has such a negative reaction to someone pointing out the possibilities of institutional/AI-leadership sexism.

Within minutes my comment got -14 karma points. Interesting!

I agree that the first sentence of your original comment is an interesting observation and that there might be an interesting thought here in how this situation interacted with gender dynamics.

I don't like the rest of your comment though, since it seems to reduce the role of the female board members to their gender and is a bit suggestive in a way that doesn't actually seem helpful to understand the situation.

We should probably reserve judgement until the final board members are announced.

That being said, I agree: this is one of the top AI organisations in the world. How much the new board reflects the values of humanity in general, rather than a tiny slice of tech guys, might have serious consequences for the future.

Thank you for clarifying the voting system for me. So my comment most likely irritated some folks with lots of karma. 

I certainly don't want to say things that irritate folks in the EA community. I was giving voice to what I might hear from some of my women friends, something like: "Yes, Helen Toner was an EA, but she was a woman who was questioning what Altman was doing too." According to this article, Altman tried to push out Helen Toner "because he thought a research paper she had co-written was critical of the company." But she was on a board whose responsibility was to make sure that AI serves humanity. So her job was, in some sense, to be critical of the company when it might be diverging from the mission of serving humanity. So when she tries to do her job, some founder-guy tries to push her out because the public discussion about the issue might be critical of something he implemented?

I think this information indicates that there is not only an EA/non-EA dimension to that precursor event, but I think most women would recognize that there is also a gender/power/authority dimension to that precursor event. 

In spite of such considerations, I also agree with the idea that we should not focus on differences, conflict and divisions. And now I will more fully understand the karma cost of irritating someone who has much more karma than me on the forum. 

Thank you for the feedback on my comment. It has been informative. 

It's common enough that the initial net karma doesn't resemble the long term net karma, I wouldn't read too much into it :)

So my comment most likely irritated some folks with lots of karma.

I don't know if this is true. Fwiw, I upvoted your comment pretty early on when it was double-digits negative, but I didn't strong-upvote because I almost never strong-upvote (low-effort) comments. 

I think this information indicates that there is not only an EA/non-EA dimension to that precursor event, but I think most women would recognize that there is also a gender/power/authority dimension to that precursor event. 

Yeah, I think that element is definitely there. It might not be big, however. My own guess is that both EA/non-EA and gender dynamics are relatively small factors in the precursor event, compared to just "yes-man to Sam" vs "doesn't buy his aura and is sometimes willing to disagree with Sam." Maybe gender dynamics or EA dynamics exacerbated it; e.g., Sam would be more willing to respect billionaire male tech CEOs on the board than uppity women or weird social-movement people. But this is just speculation.

The forum has a thing where people with more karma have more upvote/downvote power (at least this was a thing last year. I presume it still is).

This means that even though you got -14 in minutes, that might just be 2 people downvoting in total. 

Worth keeping in mind.

Someone else feel free to point out I am mistaken if I am indeed mistaken.

The vote system is explained here. Theoretically a strong upvote from a power user could reach +16 votes, although I think the maximum anyone's gotten to is +10. 

I think the system is kinda weird (although it benefits me), but it's better now that the agreevotes are counted equally. 

Mousing over the original comment, it currently has 69 votes, which have somehow managed to average out to a karma of 1. Seems to have split the crowd exactly evenly.

Note that the agreement is 'in principle'. The board hasn't yet given up its formal power (?)

I think a lot will still depend on the details of the negotiation: who will be on the new board and how safety-conscious will they be? The 4-person board had a strong bargaining chip: the potential collapse of OpenAI. They may have been able to leverage that (after all, it was a very credible threat after the firing: a costly signal!), or they may have been scared off by the reputational damage to EA & AI Safety of doing this. Altman & co. surely played that part well.

Pure speculation (maybe just cope): this was all part of the board's plan. They knew they couldn't fire Altman without huge backlash. They wanted a stronger negotiation position to install a safety-conscious board & get more grip on Altman. They had no other tools. Perhaps they were willing to let OpenAI collapse if negotiations failed. Toner certainly mentioned that 'letting OpenAI collapse could be in line with the charter'. They expected to probably not maintain board seats themselves. They probably underestimated the amount of public backlash and may have made some tactical mistakes. Microsoft & OAI employees probably propped up Altman's bargaining power quite a bit.

We'll have to see what they ended up negotiating. I would be somewhat surprised if they didn't drag anything out of those negotiations

This looks ever more unlikely. I guess I didn't properly account for:

  • the counterfactual to OpenAI collapse was much of it moving to Microsoft, which the board would've wanted to prevent
  • the board apparently not producing any evidence of wrongdoing (I find this very surprising)

Nevertheless, I think speculating on internal politics can be a valuable exercise - being able to model the actions & power of strong bargainers (including bad faith ones) seems a valuable skill for EA.

Still short on details. Note that this says "in principle" and "initial board". Finalization of this deal could still require finding three more board members to replace Helen, Tasha, and Ilya whom they'd still be happy to have as EA/safety voices. We'll have to wait and see what shakes out.
