Personal opinion only. Inspired by filling out the Meta Coordination Forum survey.
Epistemic status: Very uncertain, rough speculation. I'd be keen to see more public discussion on this question.
One open question about the EA community is its relationship to AI safety (see e.g. MacAskill). I think the relationship between EA and AI safety (+ GHD & animal welfare) previously looked something like this (up until 2022ish):[1]
With the growth of AI safety, I think the field now looks something like this:
It's an open question whether the EA community should further grow the AI safety field, or whether it should become a field distinct from AI safety. I think my preferred approach is something like: EA and AI safety grow into new fields rather than into each other:
- AI safety grows in AI/ML communities
- EA grows in other specific causes, as well as an “EA-qua-EA” movement.
As an ideal state, I could imagine the EA community having a similar relationship to AI safety as it currently has to animal welfare or global health and development.
However, I'm very uncertain about this, and curious to hear what other people's takes are.
- ^
I’ve omitted non-AI longtermism, along with other fields, for simplicity. I strongly encourage not interpreting these diagrams too literally.
I like how you're characterizing this!
I get that the diagram is just an illustration, and isn't meant to be to scale, but the EA portion of the GHD bubble should probably be much, much smaller than is portrayed here (maybe 1%, because the GHD bubble is so much bigger than the diagram suggests). This is a really crude estimate, but EA spent $400 million on GHD in 2021, whereas IHME says that nearly $70 billion was spent on "development assistance for health" in 2021, so EA funding constitutes a tiny portion of all GHD funding.
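To make that arithmetic explicit (a rough back-of-the-envelope using the two figures above, both of which are themselves estimates):

$$\frac{\$400\text{ million}}{\$70\text{ billion}} \approx 0.6\%$$

which is consistent with the "maybe 1%" guess above.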
I think this matters because GHD EAs have lots and lots of other organizations/spaces/opportunities outside of EA that they can gravitate to if EA starts to feel like it's becoming dominated by AI safety. I worry about this because I've talked to GHD EAs at EAGs, and sometimes the vibe is a bit "we're not sure this place is really for us anymore" (especially among non-biosecurity people). So I think it's worth considering: if the EA community further grows the AI safety field, is this liable to push non-AI safety people—especially GHD people, who have a lot of other places to go—out of EA? And if so, how big of a problem is that?
I assume it would be possible to analyze some data on this, for instance: are GHD EAs attending fewer EAGs? Do EAs who express interest in GHD have worse experiences at EAGs, or are they less likely to return? Has this changed over time? But I'd also be interested in hearing from others, especially GHD people, on whether the fact that there are lots of non-EA opportunities around makes them more likely to move away from EA if EA becomes increasingly focused on AI safety.
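As a sketch of what that analysis could look like (purely illustrative: the dataset, the column names like `cause_interest` and `returned`, and the file path are all hypothetical, since I don't know what data CEA actually collects):

```python
import pandas as pd

# Hypothetical dataset: one row per attendee per EAG, with self-reported
# primary cause interest, a satisfaction score, and whether they attended
# a subsequent EAG. All names here are invented for illustration.
df = pd.read_csv("eag_attendance.csv")

# 1. Share of attendees primarily interested in GHD, by year:
#    is this share falling over time?
ghd_share = (df["cause_interest"] == "GHD").groupby(df["year"]).mean()
print(ghd_share)

# 2. Do GHD attendees report worse experiences, or return less often,
#    than other attendees?
by_cause = (
    df.assign(is_ghd=df["cause_interest"] == "GHD")
      .groupby("is_ghd")[["satisfaction", "returned"]]
      .mean()
)
print(by_cause)
```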
Also, the $70 billion figure for development assistance for health doesn't include other funding that contributes to development.