
Epistemic status: This post is an edited version of an informal memo I wrote several months ago. I adapted it for the forum at the prompting of EA strategy fortnight. At the time of writing I conceived of its value as mostly in laying out considerations / trying to structure a conversation that felt a bit messy to me at the time, though I do give some of my personal takes too.

I went back and forth a decent amount about whether to post this - I'm not sure about a lot of it. But some people I showed it to thought it would be good to post, and it feels like it's in the spirit of EA strategy fortnight to have a lower bar for posting, so I'm going for it.

Overall take

Some people argue that the effective altruism community should focus more of its resources on building cause-specific fields (such as AI safety, biosecurity, global health, and farmed animal welfare), and less on effective altruism community building per se. I take the latter to mean something like: community building around the basic ideas/principles, and which invests in particular causes always with a more tentative attitude of "we're doing this only insofar as/while we're convinced this is actually the way to do the most good." (I'll call this "EA per se" for the rest of the post.)

I think there are reasons for some shift in this direction. But I also have some resistance to some of the arguments I think people have for it.

My guess is that:

  • allocating some resources from "EA per se" to field-specific development will be an overall good thing, but
  • only a modest reallocation is warranted (that's a best guess; I'm not confident), and
  • some of the reasons people give for reallocation are overrated.

In this post I'll 

  1. Articulate the reasons I think people have for favouring shifting resources in this way (just below), and give my takes on them (this will doubtless miss some reasons).
  2. Explain some reasons in favour of continuing (substantial) support for EA per se. 

Reasons I think people might have for a shift away from EA per se, and my quick takes on them

1. The reason: The EA brand is (maybe) heavily damaged post-FTX — making building EA per se less tractable and less valuable, because getting involved in EA per se now has bigger costs.

My take: I think how strong this is basically depends on how people perceive EA now post-FTX, and I'm not convinced that the public feels as badly about it as some other people seem to think. I think it's hard to infer how people think about EA just by looking at headlines or Twitter coverage about it over the course of a few months. My impression is that lots of people are still learning about EA and finding it intuitively appealing, and I think it's unclear how much this has changed on net post-FTX. 

Also, I think EA per se has a lot to contribute to the conversation about AI risk — and was talking about it before AI concern became mainstream — so it's not clear it makes sense to pull back from the label and community now.

I'd want someone to look at and aggregate systematic measures like subscribers to blogs, advising applications at 80,000 Hours, applications to EA Global, people interested in joining local EA groups, etc. (As far as I know, as of quickly revising this in June, these systematic measures are actually going fairly strong, but I have not really tried to assess this. These survey responses seem like a mild positive update on public perceptions.)

Overall, I think this is probably some reason in favour of a shift but not a strong one.

2. The reason: maybe building EA per se is dangerous because it attracts/boosts actors like SBF. (See: Holden's last bullet here)

My take: My guess is that this is a weak-ish reason – though I'm unsure, and it's probably still some reason. 

In particular, I don't think it's going to be that much easier to avoid attracting/boosting SBF-like actors in specific fields compared to building EA in general (holding other governance/cultural changes fixed). And I'd expect the mitigation strategies we should take on this front will be not that different/not that differently effective on the two options.

An argument against my take: being cause neutral means focusing more on 'the good' per se (instead of your specific way of doing good), and that is associated with utilitarianism, and that is SBF-like-actor-attracting. There's probably something to this; I just wouldn't be that surprised if SBF-like actors were attracted to ~the same degree by a field whose mission was to save the world from a catastrophic pandemic, etc. Why? Something like: it's the "big stakes and ambitions" of EA that has this effect, rather than the cause neutrality/focus on 'the good' per se. But this is speculation!

3. The reason: EA is too full of stakeholders. The community has too many competing interests, theories of impact, and stakeholders, and it’s too tiring, draining, resource-intensive, and complicated to work with.

My take: I sort of suspect this is motivating some people, maybe subconsciously. I think it's probably a mostly weak reason.

I do feel a lot of sympathy with this. But individual field building will also generate stakeholders - at least if it's doing something that really matters! Stakeholders can also be helpful, and it's often ambiguous at the time whether they're helping or hindering.

Though I do buy that, especially at first, there will probably be fewer stakeholders in specific fields, especially if they vibe less 'community'-y. (Though if it's the "community" aspect that's generating stakeholders, the response might instead be to move toward a less community-centered EA per se, rather than to zoom in on particular causes, something I've also seen people argue for.)

4. The reason: people have greater confidence that some issues are much more pressing than others, which means there's less value in cause neutrality, searching for 'Cause X', and thinking about things from the ground up, and so less value in the effective altruism community per se as compared to the specific causes.

My take: this is a strong reason to the extent that people's confidence has justifiably increased. 

I think it's "taking a bet"/risky, but it could be worth it. It's a risk even if we invest in building multiple fields, since we'll be reducing investment in what has so far been an engine of new field creation.

4A: The reason: Some people feel more confident we have less time until a potential AI catastrophe, which makes strategies that take a long time to yield returns (which building EA per se does compared to specific fields) less promising.

My take: This is a strong reason insofar as the update is justified, though only to invest more in AI-specific field building, rather than field building in a wide variety of areas (except insofar as they intersect with AI — which maybe most of them do?). 

5. The reason: There’s greater tractability for specific field building compared to before, because (1) AI safety is going mainstream, (2) pandemic risk has somewhat gone mainstream, and (3) there are community members who are now more able to act directly, due to having gained expertise and career capital.

My take: this is a strong reason.

6. The reason: EA 'throws too many things together under one label' and that is confusing or otherwise bad, so we should get away from that. E.g. from Holden:

> It throws together a lot of very different things (global health giving, global catastrophic risk reduction, longtermism) in a way that makes sense to me but seems highly confusing to many, and puts them all under a wrapper that seems self-righteous and, for lack of a better term, punchable?

And from Niel (80,000 Hours Slack, shared with permission):

> I worry that the throwing together of lots of things makes us more highlight the “we think our cause areas are more important than your cause areas” angle, which I think is part of what makes us so attack-able. If we were more like “we’re just here trying to make the development of powerful AI systems go well”, and “here’s another group trying to make bio go well” I think more people would be more able to sidestep the various debates around longtermism, cause prio, etc. that seem to wind people up so much.

> Additionally, throwing together lots of things makes it more likely that PR problems in one area spread to others.

My take: I think this is, on net, not a strong reason.

My guess is that the cause neutrality and 'scout altruism' aspect of EA — the urge to find out what are actually the best ways of doing good — is among its most attractive features. It draws more criticism and blowback, true, because it talks about prioritisation, which is inherently confrontational. But this also makes it substantive and interesting — not to mention more transparent. And in my experience, articles describing EA favourably often seem fascinated by the "EA as a question" characterisation.[1]

Moreover, I think some of this could just be people losing appetite for being "punchable" after taking a beating post-FTX. But sometimes being punchable is the price of standing up for what you believe!

I guess this is all to say that I agree that if we shifted more toward specific fields, "more people would be more able to sidestep the various debates around longtermism, cause prio, etc. that seem to wind people up so much", but that I suspect that could be a net loss. I think people find cause prioritisation super intriguing and it's good for people's thinking to have to confront prioritisation questions. 

(Though whether this matters enough depends a lot on how (un)confident you are in specific areas being way more pressing than others. E.g. if you're confident that AI risk is more pressing than anything else, you arguably shouldn't care much about any potential losses here (though even that's unclear). EA per se going forward seems like it has a lot less value in that world. So maybe point (4) is the actual crux here.)

7. The reason: a vague sense that building EA per se isn't going that well, even when you put aside the worry about EA per se being dangerous. I don't know if people really think this beyond what's implied in reasons (1), (2), and (6), but I get the sense they do.

My take: I think this is false! If we put aside the 'EA helped cause SBF' point (above), building EA per se is, I think, going pretty well. EA has been a successful set of ideas, lots of super talented and well-meaning and generally great people have been attracted to it, and (a la point (5) above) it's made progress in important areas. It's (still) one of the most intellectually interesting and ethically serious games in town.

You could argue that even without SBF, building EA per se is going badly because EAs contributed to accelerating AI capabilities. That might be right. But it would not be a good reason to reallocate resources from EA per se to building specific fields, because if anything that would result in more AI safety field building with less critical pressure on it from EAs who are sceptical that it's doing the most good, which seems like it would be more dangerous in this particular way.

Some reasons for continuing to devote considerable resources to EA per se community building: 

1. The vibe and foundation: EA per se encourages more critical thinking, greater intellectual curiosity, more ethical seriousness, and more intellectual diversity than specific field-building does.

I think sometimes EA has accidentally encouraged groupthink instead. This is bad, but it seems like we have reason to think this phenomenon would be worse if we were focusing on building specific fields. It’s much harder to ask the question: "Is this whole field / project actually a good idea?" when you're not self-consciously trying to aim at the good per se and the people around you aren't either. The foundational ideas of EA seem much more encouraging of openness and questioning and intellectual/ethical health. 

I think the 'lots of schools of thought within EA'/subcultures thing is also probably good rather than bad, even though I agree there's a limit to it being net good here and I could imagine us crossing it. I think this is related to the stakeholders point.

2. EA enables greater field switching: a lot of people already seem to think they need to have been working in field X forever in order to contribute to field X, which is both false and harmful. EA probably reduces this effect by making the social/professional/informational flows between a set of fields much tighter than they otherwise would be.

2a. EA enables more cross-pollination of ideas between fields, and more "finding the intersections". For example, it seems probable that questions like "how does AI affect nuclear risk" would be more neglected in a world with a smaller EA community, general conferences, shared forums and discussions, etc.

3. There are weird, nascent problem areas like those on this list that will probably get less attention if/to the extent that we allocate resources away from EA per se and toward a handful of specific fields. I place enough importance on these that it would seem like a loss to me, though it could perhaps be worth it. Again, how much this matters goes in the 'how confident are you that we've identified the most pressing areas' bucket (point 4 above).

4. Depending on how it's done, moving resources from building EA per se to building separate fields could have the result of reducing long-term investment in the highest-impact fields. Why? Because when people are part of EA per se they often try to be 'up for grabs' in terms of what they end up working on — so the fields with the best claim to being highest impact can still persuade them to help with that field. Hence: more people in the community moving toward existential risk interventions over time, because they find the arguments for those issues being most pressing persuasive. If we build separate fields now, we're making it harder for the most pressing problems to win resources from the less pressing problems as things go on — making the initial allocation much more important to get right. 

  1. ^

    I should note, though, that I could be overweighting this / "typical-minding" here, because this is a big part of what attracted me to EA.

Comments

Strong upvote on this - it’s an issue that a lot of people have been discussing, and I found the post very clear!

There's lots more to say, and I only had time to write something quickly, but one consideration is about division of effort with respect to timelines to transformative AI. The longer AI timelines are, the more plausible principles-led EA movement-building looks.

Though I’ve updated a lot in the last couple of years on transformative-AI-in-the-next-decade, I think we should still put significant probability mass on “long” timelines (e.g. more than 30 years). For example, though Metaculus’s timelines have shortened dramatically — and now suggest a best-guess of superintelligence within a decade — the community forecast still puts a 10% chance that even “weakly general” AI is only developed after 2050, and it puts about a 20% chance that the time from weakly general AI to superintelligence is more than 30 years. AI progress could slow down a lot, perhaps because of regulation; there could also be some bottleneck we haven’t properly modelled that means that explosive growth never happens. (Relatedly, some things could happen that could make near-term effort on AI less important: AI alignment could end up being easy, or there could be sufficient effort on it, such that the most important questions are all about what happens post-AGI.)

In very short (e.g. <10yr) timelines worlds, an AI-safety-specific movement looks more compelling.

In long timelines worlds (e.g. >30 years), EA looks comparatively more promising for a few reasons:

  • EA is more adaptive over time. If our state of knowledge changes, or if the environment changes, such that X thing is no longer the best thing to do, then the EA recommendation changes, and (ideally) someone who follows EA reasoning will switch to the better thing.  
    • This is much more likely to be relevant in long timelines worlds: there are decades for things to change (e.g. geopolitical changes, development of new technologies), for there to be further learning about issues relevant to cause-prioritisation, and for the world as a whole to invest dramatically more into AI safety.
  • Relatedly, EA is accessible to more people. A greater diversity of skills is more useful in long timelines worlds than in short timelines worlds, because of greater uncertainty about what will be of most value.
  • Principles-focused EA might have greater long-term compounding benefits (as a result of EAs doing outreach to recruit new EAs, some of whom do outreach to recruit new EAs, in a way similar to the compounding of financial investment over time) than cause-specific movements. I started trying to write this out quantitatively but ran out of time and there are a lot of subtle issues, so I’m not certain how exactly it shakes out; it’s something I’ll have to come back to. (Even better, someone better at maths than me could do this instead!)
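
A purely illustrative toy model of the compounding point in that last bullet (the function name and all parameters below are made-up placeholders for the intuition, not estimates from this comment):

```python
# Toy model of compounding movement growth via outreach.
# All parameters (initial size, recruitment rate, attrition) are hypothetical
# placeholders for illustration only.

def movement_size(initial: float = 1_000, recruit_rate: float = 0.10,
                  attrition: float = 0.05, years: int = 30) -> float:
    """Each year, members recruit `recruit_rate` new members per capita,
    while a fraction `attrition` of existing members drop out."""
    size = initial
    for _ in range(years):
        size *= 1 + recruit_rate - attrition
    return size

print(round(movement_size(years=10)))  # ~1,629: modest gains after a decade
print(round(movement_size(years=30)))  # ~4,322: compounding dominates over several decades
```

On placeholder assumptions like these, the compounding advantage only becomes large over multi-decade horizons, which is one way of seeing why it would matter more in long timelines worlds.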

This suggests that we might want to focus in particular on both short and fairly long timelines worlds (and pay less attention to in-the-middle timelines worlds). In short timelines worlds, we have outsized impact because the world hasn't yet caught up to the importance of safely managing advanced AI and the issue is highly neglected. In fairly long timelines worlds, we get particular benefits from the long-run compounding of EA, and its diversity and adaptability.

This could look like some people focusing on AI for the next ten years, and then potentially switching after that time for the rest of their careers. Or, in addition, it could look like some % of EA-minded people focusing wholly on AI, and some % focusing wholly on principles-first EA.

A final consideration concerns student groups in particular.  For someone aged 18 — unless they are truly exceptional — it’ll probably be at least 5 years before they are able to meaningfully contribute to AI safety. If you think that the next 10 years are the particularly-likely time to get an intelligence explosion (because of an expected but unsustainable huge increase in investment into AI, as discussed by Carl Shulman here), then half of that opportunity can’t be capitalised upon by the 18-year-old. This gives an additional boost to the value of EA movement-building vs AI safety specific movement-building when it comes to campus outreach. 

Overall, I agree that given the rapid progress in AI there should be some significant reallocation from EA movement-building to AI in particular, and I think we're already seeing this happening. I'm currently unsure whether I expect EA as a whole to ultimately under-correct or over-correct on recent AI developments.

Thanks for this comment; I found it helpful and agree with a lot of it. I expect the "university groups are disproportionately useful in long timelines worlds" point to be useful to a lot of people.

On this bit:

> EA is more adaptive over time... This is much more likely to be relevant in long timelines worlds

This isn't obvious to me. I would expect that short timeline worlds are just weirder and changing more rapidly in general, so being adaptive is more valuable. 

Caricature example: in a short timeline world we have one year from the first sentient LLM to when we achieve value lock in, and in a long timeline world we have 100 years. In the former case EA seems more useful, because we can marshal a bunch of people to drop what they are doing and focus on digital sentience. In the latter case digital sentience probably just becomes an established field, without any need for EA.

There are counterbalancing factors (e.g. more of our resources should probably go towards AI safety, in short timelines worlds) but it seems pretty plausible to me that this nets out in favor of EA being more useful in shorter timelines.[1]

  1. ^

    JP has made a related point to me, and deserves credit for a lot of this idea

I like this comment a whole bunch.

> This suggests that we might want to focus in particular on both short and fairly long timelines worlds [...]

I've recently started thinking of this as a playing to your outs strategy, though without the small probabilities that that implies. One other factor in favor of believing that long timelines might happen, and that those worlds might be good worlds to focus on, is that starting very recently it's begun to look possible to actually slow down AI. In those worlds, it's presumably easier to pay an alignment tax, which makes those worlds more likely to survive.

> For someone aged 18 — unless they are truly exceptional — it'll probably be at least 5 years before they are able to meaningfully contribute to AI safety.

This might be true for technical contributions. It's less true for things like trying to organise a petition or drum up support for a protest.

Thanks so much for this. I think it's really valuable to have this on the forum and to start a structured conversation about it.

To me, the first reason mentioned against separate field building -- declining critical thinking and questioning within subfields -- seems really key.

I think a subfield-oriented, loose movement structure would probably lead to a decline in epistemic standards and an underplaying of cause prioritization, differences in expected impact, etc., as a result of groupthink and of appealing to non-EA parts of these fields, and I think to some degree this is already happening.

Where would you say this is already happening?

I don't want to turn this into an argument about specific orgs because it is more of a general observation and more about incentive structures.

I have the general impression that single-cause organizations do not face the right incentives for truth-seeking and cause prioritization, both in terms of epistemics (everyone working there sharing some beliefs) and in terms of organizational strategy and goals (the bar for an org saying that what it worked on has become less important seems very high; indeed, fundraising incentives will always point in the other direction).

To focus more on the positive case, which I find easier to talk about publicly: being on a team (the research team at FP) where researchers work on different causes, with different beliefs and different methodologies, brings a lot of benefits in all directions -- e.g. encouraging GHD research to be more risk-neutral (not combining topical engagement with risk aversion, as GiveWell does), work on different GCRs to be more comparative, and work on climate to be benchmarked against other risks, etc.

This is not an all-things-considered view, all I am really saying is that I think this argument against sub-field building seems quite important.

I'm pretty into the idea of putting more effort into concrete areas.

I think the biggest reason for it is one which is not in your list: it is too easy to bikeshed EA as an abstract concept and fool yourself into thinking that you are doing something good.

Working on object level issues helps build expertise and makes you more cognizant of reality. Tight feedback loops are important to not delude yourself.

I'm curating this post. I think the arguments presented here are not written up anywhere else publicly, and are well-put. I would like to see more discussion of them, and more exploration.

I’d like to add that from my perspective:

  1. global health and development will almost permanently be a pressing cause area

  2. it’s very likely that within our lifetimes, we’ll see enough progress such that biosecurity and farmed animal welfare no longer seem as pressing as global health and development

  3. it’s feasible that AI safety will also no longer seem as pressing as global health and development

  4. new causes may emerge that seem more pressing than global health, biosecurity, AI and farmed animal welfare

  5. growing EA is very important to help more people with “optimiser’s mindsets” switch between cause areas in response to changes in how pressing they are in the future

(but I still think there’s a case for a modest reallocation towards growing cause areas independently)

> global health and development will almost permanently be a pressing cause area

I don't find this likely over the medium-long term, unless we have large-scale stagnation or civilizational collapse. 

As a sanity check, 4% economic growth (roughly the rate in Africa in the last 20 years; many developing countries have higher growth rates) over 100 years translates to 50x growth. So under such conditions, the equivalent (positionally) of someone living on $1.50 today for someone in 2123 is someone living on $75/day or $27k/year in 2023 dollars, well into the standards of developed countries today. [1]
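
A quick back-of-the-envelope check of that arithmetic (the growth rates and the $1.50/day baseline are just the illustrative figures used here and in the footnote, not forecasts; the helper function below is hypothetical):

```python
# Back-of-the-envelope check of the compounding-growth arithmetic above.
# Growth rates and the $1.50/day baseline are illustrative, not forecasts.

def positionally_equivalent_income(daily_income: float, annual_growth: float,
                                   years: int = 100) -> tuple[float, float]:
    """Return (growth factor, positionally equivalent daily income) after
    `years` of compound growth at `annual_growth` per year."""
    factor = (1 + annual_growth) ** years
    return factor, daily_income * factor

for growth in (0.04, 0.03):
    factor, daily = positionally_equivalent_income(1.50, growth)
    print(f"{growth:.0%} growth: ~{factor:.1f}x, ${daily:.2f}/day, ~${daily * 365:,.0f}/year")

# Approximate output:
# 4% growth: ~50.5x, $75.76/day, ~$27,651/year
# 3% growth: ~19.2x, $28.83/day, ~$10,522/year
```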

Will there still be large economic disparities and/or significant health problems sans intervention? Well, yeah, probably. But in a world that's fabulously wealthy, a dire-need case for drastic EA interventions doesn't just rely on market failure but on state and civil society failure as well. So you should expect many problems that today look like global health and development to be solved then via more traditional public health and civil society means.

I also think it's reasonably plausible that progress in public health will outstrip GDP improvements, such that people living on (say) $5k/year in the future will not have many of the public health problems associated with people living on $5k/year today. Both because of specific technological changes and because catchup growth on health might be faster than catchup growth on development (similar to how people in countries with median incomes of $5k/year today do not have to worry about smallpox or polio, or (in most cases) even malaria).[2]

And of course crazy AI stuff might make all of this close to irrelevant.

  1. ^

    This might be too optimistic. But even at 3% GDP growth (roughly global average), we're looking at a 20x increase, or about $10k/year for someone positionally equivalent to living on $1.50/day today.

  2. ^

    Eyeballing some graphs, US real GDP per capita was between 6k and 9k in the 1930s (~9k at the beginning and the end, middle was lower. The Great Depression was rough). There was significant malaria in the US during that time, and of course it'd be multiple decades before smallpox and polio were eradicated in the US.

> global health and development will almost permanently be a pressing cause area

Really? You don't think there will come a time, perhaps in the next few centuries, when pretty much everyone lives above an absolute poverty line and has access to basic health resources? Our World in Data shows the progress that has been made in reducing extreme poverty, and whilst there is more work to do, saying that global health and development will "permanently" be a pressing cause area seems an exaggeration to me.

Also, if you're a longtermist now, you'll probably be a longtermist later, even if we do reduce existential risks from AI and bio etc. There are other longtermist interventions that we are aware of, such as improving values, investing for the future, and economic growth.

I think when we reach the point where everyone lives above a certain poverty line and has access to basic health resources, the global distribution of wealth will still be very inefficient for maximising welfare, and redistributing resources from the globally richest to the globally poorest / increasing consumption of the poorest will still be one of the best available options to improve aggregate welfare.

Sidenote - but I think it's better for debate to view this as disagreement, not exaggeration. I also don't entirely agree with total utilitarianism or longtermism, if that makes my point of view easier to understand.

> I think when we reach the point where everyone lives above a certain poverty line and has access to basic health resources, the global distribution of wealth will still be very inefficient for maximising welfare

Agreed.

> redistributing resources from the globally richest to the globally poorest / increasing consumption of the poorest will still be one of the best available options to improve aggregate welfare.

I highly doubt this. Longtermism aside, I find it very hard to accept that redistribution of resources in a world where everyone lives above a certain poverty line would be anywhere near as pressing as reducing animal suffering.

> I think it's better for debate to view this as disagreement, not exaggeration.

Fair enough. "Permanently" is a very strong word though so I guess I disagree with its usage.

The quote was "almost permanently," which I took to mean something like: of sufficient permanence that, for purposes of the topic of the post -- focusing on cause areas vs. focusing on EA (in the medium run, as I read it) -- we can assume that global health and development will remain a pressing cause area (although possibly relatively less pressing than a new cause area -- point four).

I don't think that's inconsistent with a view that GHD probably won't be a pressing cause area in, say, 250 years. Knowing whether it will or won't doesn't materially affect my answer to the question posed in the original post. (My ability to predict 250 years into the future is poor, so I have low confidence about the importance of GHD at that time. Or just about anything else, for that matter.)

Anyway, I am wondering if part of the disagreement is that we're partially talking about somewhat different things.

I might be wrong but I think "almost" was an addition and not there originally. It still reads weirdly to me.

From the follow-on comments I think freedomandutility expects GHD to be a top cause area beyond 250 years from now. I doubt this and even now I put GHD behind reducing animal suffering and longtermist areas so there does seem to be some disagreement here (which is fine!).

EDIT: actually I am wrong because I quoted the word "almost" in my original comment. Still reads weird to me.

I also think that most future worlds in which humanity has its act together enough to have solved global economic and health security are worlds in which it has also managed to solve a bunch of other problems, which makes cause prioritization in this hypothetical future difficult.

I wouldn't view things like "basic health resources" in absolute terms. As global prosperity and technology increases, the bar for what I'd consider basic health services increases as well. That's true, to a lesser extent, of poverty more generally.

Sure, but there is likely to be diminishing marginal utility of health resources. At a certain point accessibility to health resources will be such that it will be very hard to argue that boosting it further would be a more pressing priority than say reducing animal suffering (and I happen to think the latter is more pressing now anyway).

One could say much the same thing about almost any cause, though -- such as "investing for the future, economic growth" at the end of your comment. The diminishing marginal returns that likely apply to global health in a world with 10x as many resources will generally be applicable there too.

Different causes have different rates at which marginal utility diminishes. Some are huge problems so we are unlikely to have even nearly solved them in a 10x richer world (e.g. wild animal suffering?) and others can just absorb loads of money.

Investing for the future is one such example - we can invest loads of money with the hope that one day it can be used to do a very large amount of good.

Also, in a world where we are 10x richer I'd imagine reducing total existential risk to permanently minuscule levels (existential security) will still be a priority. This will likely take loads of effort and I'd imagine there is likely to always be more we can do (even better institutions, better safeguards etc.). Furthermore, in a world where we are all rich, ensuring safety becomes even more important because we would be preserving a really good world.

Thanks for posting this – I feel like these discussions have been happening in the background, and I would like to see more of it in public.

Minor: your "explain some reasons" link is broken

If EA wanted to invest more effort into specific cause areas, it might want to consider running intro courses in those areas as part of EA virtual programs.
