Prompted by the FTX collapse, the rapid progress in AI, and increased mainstream acceptance of AI risk concerns, there has recently been a fair amount of discussion among EAs whether it would make sense to rebalance the movement’s portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes. In March 2023, Open Philanthropy’s Alexander Berger invited Claire Zabel (Open Phil), James Snowden (Open Phil), Max Dalton (CEA), Nicole Ross (CEA), Niel Bowerman (80k), Will MacAskill (GPI), and myself (Open Phil, staffing the group) to join a working group on this and related questions.

In the end, the group had only two meetings, in part because it proved more difficult than expected to surface key action-relevant disagreements. Prior to the first session, participants circulated relevant memos and their initial thoughts on the topic. The group also did a small amount of evidence-gathering on how the FTX collapse has impacted the perception of EA among key target audiences. At the end of the process, working group members filled in an anonymous survey where they specified their level of agreement with a list of ideas/hypotheses that were generated during the two sessions.[1] This included many proposals/questions for which this group/its members aren’t the relevant decision-makers, e.g. proposals about actions taken/changes made by various organisations. The idea behind discussing these wasn’t for this group to make any sort of direct decisions about them, but rather to get a better sense of what people thought about them in the abstract, in the hope that this might sharpen the discussion about the broader question at issue.

Some points of significant agreement:

  • Overall, there seems to have been near-consensus that relative to the status quo, it would be desirable for the movement to invest more heavily in cause-area-specific outreach, at least as an experiment, and less (in proportional terms) in outreach that uses EA/EA-related framings. At the same time, several participants also expressed concern about overshooting by scaling back on forms of outreach with a strong track-record and thereby “throwing out the baby with the bathwater”, and there seems to have been consensus that a non-trivial fraction of outreach efforts that are framed in EA terms are still worth supporting.
    • Consistent with this, when asked in the final survey to what extent the EA movement should rebalance its portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes, responses generally ranged from 6-8 on a 10-point scale (where 5=stick with the status quo allocation, 0=rebalance 100% to outreach using EA framings, 10=rebalance 100% to outreach framed in terms of constituent causes), with one respondent selecting 3/10.
  • There was consensus that it would be good if CEA replaced one of its (currently) three annual conferences with a conference that’s explicitly framed as being x-risk- or AI-risk-focused. This was the most concrete recommendation to come out of this working group. My sense from the discussion was that this consensus was mainly driven by people agreeing that there would be value of information to be gained from trying this; I perceived more disagreement about how likely it is that this would prove a good permanent change.
    • In response to a corresponding prompt (“ … at least one of the EAGs should get replaced by an x-risk or AI-risk focused conference …”), answers ranged from 7-9 (mean 7.9), on a scale where 0=very strongly disagree, 5=neither agree nor disagree, 10=very strongly agree.
  • There was consensus that CEA should continue to run EAGs.
    • In response to the prompt “CEA should stop running EAGs, at least in their current form”, all respondents selected responses between 1-3 (on a scale where 0=strongly disagree, 5=neither agree nor disagree, 10=strongly agree).
    • Note that there is some potential tension between this and the fact that (as discussed below) three respondents thought that CEA should shift to running only conferences that are framed as being about specific cause areas/sub-questions (as opposed to about EA). Presumably, the way to reconcile this is that according to these respondents, running EAGs (including in their current form) would still be preferable to running no conferences at all, even though running conferences about specific cause areas would be better.
  • There was consensus that EAs shouldn’t do away with the term “effective altruism.”
    • Agreement with the prompt “We (=EAs) should ‘taboo’ the term ‘effective altruism’” ranged from 0-3, on a scale where 0=very strongly disagree, 5=neither agree nor disagree, 10=very strongly agree.
  • There was consensus that the damage to the EA brand from the FTX collapse and associated events has been meaningful but non-catastrophic.
    • On a scale where 0=no damage, 5=moderate damage, 10=catastrophic damage, responses varied between 3-6, with a mean of 4.5 and a mode of 4/10.
  • There was near-consensus that Open Phil/CEA/EAIF/LTFF should continue to fund EA group organisers.
    • Only one respondent selected 5/10 in response to the prompt “Open Phil/CEA/EAIF/LTFF should stop funding EA group organisers”, everyone else selected numbers between 1-3 (on a scale where 0=strongly disagree, 5=neither agree nor disagree, 10=strongly agree).
  • There was near-consensus that Open Phil should generously fund promising AI safety community/movement-building projects they come across, and give significant weight to the value of information in doing so.
    • Seven respondents agreed with a corresponding prompt (answers between 7-9), one neither agreed nor disagreed.
  • There was near-consensus that at least for the foreseeable future, it seems best to avoid doing big media pushes around EA qua EA.
    • Seven respondents agreed with a corresponding prompt (answers between 6-8), and only one disagreed (4).

Some points of significant disagreement:

  • There was significant disagreement about whether CEA should continue to run EAGs in their current form (i.e. as conferences framed as being about effective altruism), or whether it would be better for them to switch to running only conferences that are framed as being about specific cause areas/sub-questions.
    • Three respondents agreed with a corresponding prompt (answers between 6-9), i.e. agreed that EAGs should get replaced in this manner; the remaining five disagreed (answers between 1-4).
  • There was significant disagreement about whether CEA should rename the EA Forum to something that doesn’t include the term “EA” (e.g. “MoreGood”).
    • Three respondents agreed with a corresponding prompt (answers between 6-8), i.e. thought that the Forum should be renamed in such a way, the remaining five disagreed (answers between 1-4).
  • There was significant disagreement about whether 80k (which was chosen as a concrete example to shed light on a more general question that many meta-orgs run into) should be more explicit about its focus on longtermism/existential risk.
    • Five respondents agreed with a corresponding prompt (answers between 6-10), two respondents disagreed (answers between 2-4), one neither agreed nor disagreed.
    • Relatedly, in response to a more general prompt about whether a significant fraction of EA outreach involves understating the extent to which these efforts are motivated by concerns about x-risk specifically in a way that is problematic, six respondents agreed (answers between 6-8) and two disagreed (both 3).
  • There was significant disagreement about whether OP should start a separate program (distinct from Claire’s and James’ teams) focused on “EA-as-a-principle”/“EA qua EA”-grantmaking.
    • Five respondents agreed with a corresponding prompt (answers between 6-9), two respondents disagreed (answers between 2-4), one neither agreed nor disagreed.

As noted above, this wasn’t aiming to be a decision-making group (instead, the goal was to surface areas of agreement and disagreement from different people and teams and shed light on potential cruxes where possible), so the working group per se isn’t planning particular next steps. That said, a couple of next steps that are happening and that are consistent with the themes of the discussion above are:

  • CEA (partly prompted by Open Phil) has been exploring the possibility of switching to having one of the EAG-like events next year be explicitly focused on existential risk, as touched on above.
  • More generally, Open Phil’s Longtermist EA Community Growth team expects to rebalance its field-building investments by proportionally spending more on longtermist cause-specific field building and less on EA field building than in the past, though it's currently still planning to continue to invest meaningfully in EA field building, and the exact degree of rebalancing is still uncertain. (The working group provided helpful food for thought on this, but the move in that direction was already underway independently.)

I’m not aware of any radical changes planned by any of the participating organisations, though I expect many participants to continue thinking about this question and monitoring relevant developments from their own vantage points.

[1] Respondents were encouraged to go with their off-the-cuff guesses and not think too hard about their responses, so these should be interpreted accordingly.

Comments

I found this post very informative. Thank you for sharing.

Some miscellaneous questions:

There was significant disagreement about whether OP should start a separate program (distinct from Claire’s and James’ teams) focused on “EA-as-a-principle”/“EA qua EA”-grantmaking.

1. Is there information on why Open Phil originally made the decision to bifurcate community growth funding between LT and GHWB? (I've coincidentally been trying to better understand this and was considering asking on the Forum!) My impression is that this has had extreme shaping effects on EA community-building efforts, possibly more so than any other structural decision in EA.

There was consensus that it would be good if CEA replaced one of its (currently) three annual conferences with a conference that’s explicitly framed as being x-risk- or AI-risk-focused.

Open Phil’s Longtermist EA Community Growth team expects to rebalance its field-building investments by proportionally spending more on longtermist cause-specific field building and less on EA field building than in the past

2. There are two perspectives that seem in opposition here:

The first is that existing organizations that have previously focused on "big tent EA" should create new x-risk programming in the areas where they excel (e.g. conference organizing), and it is okay that this new x-risk programming will be carried out by an EA-branded organization.

The second is that existing organizations that have previously focused on "big tent EA" should, to some degree, be replaced by new projects that are longtermist in origin and not EA-branded.

I share the concern of "scaling back on forms of outreach with a strong track-record and thereby 'throwing out the baby with the bathwater.'" But even beyond that, I'm concerned that big tent organizations with years of established infrastructure and knowledge may essentially be dismantled and replaced with brand new organizations, instead of recruiting and resourcing the established organizations to execute new, strategic projects. Just like CEA's events team is likely better at arranging an x-risk conference than a new organization started specifically for that purpose, a longstanding regional EA group will have many advantages in regional field-building compared to a brand-new, cause-specific regional group. We are risking losing infrastructure that took years to develop, instead of collectively figuring out how we might reorient it.

In March 2023, Open Philanthropy’s Alexander Berger invited Claire Zabel (Open Phil), James Snowden (Open Phil), Max Dalton (CEA), Nicole Ross (CEA), Niel Bowerman (80k), Will MacAskill (GPI), and myself (Open Phil, staffing the group) to join a working group on this and related questions.

3. Finally, I would love to see a version of this that incorporates leaders of cause area and big tent  "outreach/recruitment/movement-building" organizations who engage "on the ground" with members of the community. I respect the perspectives of everyone involved. I also imagine they have a very different vantage point than our team at EA NYC and other regional organizations. We directly influence hundreds of people's experiences of both big-tent EA and cause-specific work through on-the-ground guidance and programming, often as one of their first touchpoints to both. My understanding of the value of cause-specific work is radically different from what it would have been without this in-person, immersive engagement with hundreds of people at varying stages of the engagement funnel, and at varying stages of their individual progress over years of involvement. And though I don't think this experience is necessary to make sound strategic decisions on the topics discussed in the post, I'm worried that the disconnect between the broader "on the ground" EA community and those making these judgments may lead to weaker calibration.

As a biased mostly near termist (full disclosure), I've got some comments and questions ;)

First, a concern about the framing:
"Prompted by the FTX collapse, the rapid progress in AI, and increased mainstream acceptance of AI risk concerns, there has recently been a fair amount of discussion among EAs whether it would make sense to rebalance the movement’s portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes."

This framing for the discussion seems a bit unclear. First, I don't see the direct logical connection between "Prompted by the FTX collapse, the rapid progress in AI, and increased mainstream acceptance of AI risk concerns" and "rebalance the movement’s portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes." There must be some implied assumptions filling the gap between these two statements that I'm missing; it's certainly not A + B = C. I'm guessing it's something like the FTX collapse causing potentially significant reputational loss to the EA brand, etc. I think being explicit is important when framing a discussion.

Second, when we are talking about "focus on the constituent causes" and "cause specific", does that practically mean growth in AI-safety-focused groups while general EA groups remain, or further specialisation within EA with Global health / Animal advocacy / Climate change / Biorisk etc.? (I might be very wrong here.) Do "constituent cause" and "cause specific" mostly translate to "AI safety" in the context of this article, or not?
 

One other comment that concerned me was that among this group there was a "consensus that a non-trivial fraction of outreach efforts that are framed in EA terms are still worth supporting." This is quite shocking to me: the idea that EA-framed outreach should perhaps be downgraded from the status quo (most of the outreach) to "non-trivial" (which I interpret as very little of the outreach). That's a big change which I personally don't like, and I wonder what the wider community thinks.

It already seems like a lot (the majority?) of community building is bent towards AI safety, so it's interesting that the push from EA thought leaders seems to be to move further in this direction.

  • As this post itself states, 80,000 Hours in practice seems pretty close to an AI/longtermist career advice platform; here are their latest 3 releases.

  • There have already been concerns raised that EA university groups can often intentionally or unintentionally push AI safety as the clearly most important cause, to the point where it may be compromising epistemics.
     

Finally, this didn't sit well with me: "There was significant disagreement about whether OP should start a separate program (distinct from Claire’s and James’ teams) focused on “EA-as-a-principle”/“EA qua EA”-grantmaking. Five respondents agreed with a corresponding prompt (answers between 6-9), two respondents disagreed (answers between 2-4), one neither agreed nor disagreed."

These discussions are important, but I don't love the idea of the most important discussion with the important people steering the EA ship being led by Open Phil (a funding organisation), rather than perhaps by CEA or even a wider forum. Should the most important discussions about the potential future of a movement be led by a funding body?

Things I agreed with/liked
- I instinctively like the idea of an AI conference, as maybe that will mean the other conferences have a much higher proportion of EAs who are into other things.
- More support for AI safety specific groups in general. Even as a near termist, that makes a lot of sense, as there is a big buzz about it right now and they can attract non-EA people to the field and provide focus for those groups.
- I'm happy that 5 out of 8 disagreed with renaming the forum (although I was surprised that even 3 voted for it). For branding/understanding/broad-church and other reasons I struggle to see positive EV in that one.
- I'm happy that 5 out of the 8 agreed that 80,000 Hours should be more explicit about its longtermism focus. It feels a bit disingenuous at the moment - although I know that isn't the intention.

Looking forward to more discussion along these lines!

I've written about this idea before the FTX collapse, and I think FTX is a minor influence compared to the increased interest in AI risk.

My original reasoning was that AI safety is a separate field but doesn't really have much movement building work being put into it outside of EA/longtermism/x-risk framed activities. 

Another reason why AI takes up a lot of EA space is that there aren't many other places to go to discuss these topics, which is bad for the growth of AI safety if it's hidden behind donating 10% and going vegan, and bad for EA if it gets overcrowded by something that should have its own institutions/events/etc.

"Which is bad for the growth of AI safety if it's hidden behind donating 10% and going vegan"

This may be true and the converse is also possible concurrently, with the growth of giving 10% and going vegan potentially being hidden at times behind AI safety ;)

From an optimistic angle, "Big tent EA" and AI safety can be synergistic - much AI safety funding comes from within the EA community. A huge reason those hundreds of millions are available is that the AI safety cause grew out of and is often melded with founding EA principles, which include giving what we can to high-EV causes. This has motivated people to provide the very money AI safety work relies on.

Community dynamics are complicated and I don't think the answers are straightforward.

Some added context on the 80k podcasts:

At the beginning of the Jan Leike episode, Rob says:


Two quick notes before that:

We’ve had a lot of AI episodes in a row lately, so those of you who aren’t that interested in AI or perhaps just aren’t in a position to work on it, might be wondering if this is an all AI show now.

But don’t unsubscribe because we’re working on plenty of non-AI episodes that I think you’ll love — over the next year we plan to do roughly half our episodes on AI and AI-relevant topics, and half on things that have nothing to do with AI.

What happened here is that in March it hit Keiran and Luisa and me that so much very important stuff had happened in the AI space that had simply never been talked about on the show, and we’ve been working down that coverage backlog, which felt pretty urgent to do.

But soon we’ll get back to a better balance between AI and non-AI interviews. I’m looking forward to mixing it up a bit myself.


 

I appreciate the open communication shared in your post. However, I'd like to express a few reservations regarding the makeup of the working group. I've observed that a significant portion comprises either current or former trustees and senior executives from Effective Ventures. Considering that this organization has faced challenges in management and is presently under the scrutiny of the Charity Commission, this does raise concerns. Moreover, the absence of a representative from the animal welfare sector is noteworthy. While I recognize that the funding is derived from OP's own resources, the outcomes have broad implications for the EA community. For many, it could influence pivotal career decisions. Thus, the responsibility associated with such initiatives cannot be overstated.

Agree. Inviting at least one person from a major neartermist organisation in EA such as Charity Entrepreneurship would have been helpful, to represent all "non-longtermists" in EA.

(disclaimer: I'm friends with some CE staff but not affiliated with the org in any way, and I lean towards longtermism myself)

Also appreciate the transparency, thanks Bastian!

There was near-consensus that Open Phil should generously fund promising AI safety community/movement-building projects they come across

Would you be able to say a bit about to what extent members of this working group have engaged with the arguments around AI safety movement-building potentially doing more harm than good? For instance, points 6 through 11 of Oli Habryka's second message in the “Shutting Down the Lightcone Offices” post (link). If they have strong counterpoints to such arguments, then I imagine it would be valuable for these to be written up.

(Probably the strongest response I've seen to such arguments is the post “How MATS addresses ‘mass movement building’ concerns”. But this response is MATS-specific and doesn't cover concerns around other forms of movement building, for example, ML upskilling bootcamps or AI safety courses operating through broad outreach.)

I think all the questions here are important and nontrivial. However, I'm concerned about taking large actions too quickly when changing plans based on high-level strategic considerations. 

  1. Views are likely to change quite a bit and grow more nuanced over time. We should expect that most quick actions will be taken when we are overly confident about a particular view.
  2. Community infrastructure is composed of many organisations and projects that have long-term plans (many months - years). Changing directions abruptly, especially in regards to funding but also in messaging and branding, can hinder the success of such projects.
  3. Perhaps most importantly, people in the community and workers at EA orgs need stability. Currently, I worry that many people in the community feel they have poor job security and are uncertain whether what they are working on now will still be considered important/relevant later.

So I think we need much more strategic clarity, but we should make sure to manage transitions carefully and over long time horizons.

to rebalance the movement’s portfolio of outreach/recruitment/movement-building activities away from efforts that use EA/EA-related framings and towards projects that instead focus on the constituent causes. In March 2023, Open Philanthropy’s Alexander Berger invited Claire Zabel (Open Phil), James Snowden (Open Phil), Max Dalton (CEA), Nicole Ross (CEA), Niel Bowerman (80k), Will MacAskill (GPI), and myself (Open Phil, staffing the group) to join a working group on this and related questions.

In the proposals discussed, was the idea that non-AI-related causes would decrease the share of support they received from current levels? Or would, e.g., the EAG replacement process be offset by making one of the others non-AI focused (or by increasing the amount of support those causes receive in some other way)?

There was significant disagreement about whether 80k (which was chosen as a concrete example to shed light on a more general question that many meta-orgs run into) should be more explicit about its focus on longtermism/existential risk.

I have to say, this really worries me. It seems like it should be self-evidently good after FTX and all the subsequent focus on honesty and virtue that EA organisations should be as transparent as possible about their motivations. Do we know what the rationale of the people who disagreed was?

Hey, I wasn’t a part of these discussions, but from my perspective (web director at 80k), I think we are transparent about the fact that our work comes from a longtermist perspective that suggests that existential risks are the most pressing issues. The reason we try to present, which is also the true reason, is that we think these are the areas where many of our readers, and therefore we, can make the biggest positive impact.

Here are some of the places we talk about this:

1. Our problem profiles page (one of our most popular pages) explicitly says we rank existential risks as most pressing (ranking AI first) and explains why - both at the very top of the page "We aim to list issues where each additional person can have the most positive impact. So we focus on problems that others neglect, which are solvable, and which are unusually big in scale, often because they could affect many future generations — such as existential risks. This makes our list different from those you might find elsewhere." and more in the FAQ, as well as in the problem profiles themselves.

2. We say at the top of our "priority paths" list that these are aimed at people who "want to help tackle the global problems we think are most pressing", linking back to the problems ranking.

3. We also have in-depth discussions of our views on longtermism and the importance of existential risk in our advanced series. 

So we are aiming to be honest about our motivations and problem prioritization, and I think we succeed. For what it's worth I don't often come across cases of people who have misconceptions about what issues we think are most pressing (though if you know of any such people please let me know!). 

That said, I basically agree we could make these views more obvious! E.g. we don't talk much on the front page of the site or in our 'start here' essay or much at the beginning of the career guide. I'm open to thinking we should. 

One way of interpreting the call to make our longtermist perspective more “explicit": I think some people think we should pitch our career advice exclusively at longtermists, or people who already want to work on x-risk. We could definitely move further in this direction, but I think we have some good reasons not to, including:

  1. We think we offer a lot of value by introducing the ideas of longtermism and x-risk mitigation to people who aren’t familiar with these ideas already, and making the case that they are important – so narrowly targeting an audience that already shares these priorities (a very small number of people!) would mean leaving this source of impact on the table.
  2. We have a lot of materials that can be useful to people who want to do good in their careers but won't necessarily adopt a longtermist perspective. And insofar as having EA be “big tent” is a good thing (which I tend to think it is though am not that confident), I'm happy 80k introduces a lot of people who will take different perspectives to EA.
  3. We are cause neutral[1] – we prioritise x-risk reduction because we think it's most pressing, but it’s possible we could learn more that would make us change our priorities. Since we’re open to that, it seems reasonable not to fully tie our brand to longtermism or existential risk. It might even be misleading to open with x-risk, since it'd fail to communicate that we are prioritising that because of our views about the pressingness of existential risk reduction. And since the value proposition of our site for readers is in part to help them have more impact, I think they want to know which issues we think are most pressing.

[1] Contrast with unopinionated about causes. Cause neutrality in this usage means being open to prioritising whatever causes you think will allow you to help others the most, which you might have an opinion on.

That said, I basically agree we could make these views more obvious! E.g. we don't talk much on the front page of the site or in our 'start here' essay or much at the beginning of the career guide. I'm open to thinking we should.

Update: we added some copy on this to our 'about us' page, the front page where we talk about our 'list of the world's most pressing problems', our 'start here' page, and the introduction to our career guide.

"E.g. we don't talk much on the front page of the site or in our 'start here' essay or much at the beginning of the career guide. I'm open to thinking we should. "

I agree with this, and feel like the best approach for transparency might be to put your headline findings on the front page more clearly, because, like you say, you do have to dig a surprising amount to find them.

Something like (forgive the average wording)

"We think that working on longtermists causes is the best way to do good, so check these out here..."

Then maybe even as a caveat somewhere (blatant near termist plug): "Some people believe near termist causes are the most important, and others, due to their skills or life stage, may be in a better position to work on near term causes. If you're interested in learning more about high impact near termist causes, check these out here..."

Obviously as a web manager you could do far better with the wording but you get my drift!

Copying from my comment above:

Update: we've now added some copy on this to our 'about us' page, the front page where we talk about our 'list of the world's most pressing problems', our 'start here' page, and the introduction to our career guide.

Thanks : ) we might workshop a few ways of getting something about this earlier in the user experience.

I have to say, this really worries me.

I can't speak for other people who filled out the survey but: I agree that orgs should be transparent about their motivations. 

The question asks (basically) "should 80k be more transparent [than it currently is]", and I think I gave a "probably not" type answer, because I think that 80k is already fairly transparent about this (e.g. it's pretty clear when you look at their problem profiles or whatever).

This is a useful post! Are any of the discussions or justifications behind various people's responses (especially on points of disagreement) shareable? Did people change their mind or come to a consensus after the discussions, and if so, in which directions?

Thanks for the post!

There was consensus that it would be good if CEA replaced one of its (currently) three annual conferences with a conference that’s explicitly framed as being x-risk- or AI-risk-focused.

In response to a corresponding prompt (“ … at least one of the EAGs should get replaced by an x-risk or AI-risk focused conference …”)

I'm curious whether you felt the thrust was that the group thought it would be good if CEA in particular replaced the activity of running its 3rd EAG with running an AI safety conference, or simply that there should be an AI safety conference?

In general, when we talk about 'cause area specific field building', the purpose that makes most sense to me is to build a community around those cause areas, which people who don't buy the whole EA philosophy can join if they spot a legible cause they think is worth working on.

I'm a little hesitant to default to repurposing existing EA institutions, communities and events to house the proposed cause area specific field building. It seems to me that the main benefit of cause area specific field building is to potentially build something new, fresh and separate from the other cultural norms and beliefs that the EA community brings with it.

Perhaps the crux for me is "is this a conference for EAs interested in AI safety, or is it a conference for anyone interested in AI safety?" If the latter, this points away from an EA-affiliated conference (though I appreciate there are pragmatic questions around "who else would do it"). A fresh feel and new audience might still be achievable in the case that CEA runs the conference ops, but I imagine it would be important to bear this in mind in CEA's branding, outreach, and the choices made during the execution of such a conference.
