Personal opinion only. Inspired by filling out the Meta Coordination Forum survey.
Epistemic status: Very uncertain, rough speculation. I’d be keen to see more public discussion on this question.
One open question about the EA community is its relationship to AI safety (see e.g. MacAskill). I think the relationship between EA and AI safety (+ GHD & animal welfare) previously looked something like this (up until 2022ish):[1]
With the growth of AI safety, I think the field now looks something like this:
It's an open question whether the EA community should further grow the AI safety field, or whether the EA community should become a field distinct from AI safety. My preferred approach is something like: EA and AI safety each grow into new fields rather than into each other:
- AI safety grows in AI/ML communities
- EA grows in other specific causes, as well as an “EA-qua-EA” movement.
As an ideal state, I could imagine the EA community having a relationship with AI safety similar to the one it currently has with animal welfare or global health and development.
However, I’m very uncertain about this, and curious to hear what other people’s takes are.
[1] I’ve omitted non-AI longtermism, along with other fields, for simplicity. I strongly encourage not interpreting these diagrams too literally.
Tom - you raise some fascinating issues, and your Venn diagrams, however impressionistic they might be, are useful visualizations.
I do hope that AI safety remains an important part of EA -- not least because I think there is some important, under-explored overlap between AI safety and the other key cause areas, global health & development, and animal welfare.
For example, I'm working on an essay about the animal welfare implications of AGI. Ideally, advanced AI wouldn't just be 'aligned' with human interests, but also with the interests of the other 70,000 species of sentient vertebrates (and the sentient invertebrates). Very little has been written about this so far, so AI safety has a serious anthropocentrism bias that needs challenging. The EAs who have worked on animal welfare could have a lot to say about AI safety issues in relation to other species.
Likewise, the 'e/acc' cult (which dismisses AI safety concerns, and advocates AGI development ASAP), often argues that there's a moral imperative to develop AGI, in order to promote global health and development (e.g. 'solving longevity' and 'promoting economic growth'). EA people who have worked on global health and development could contribute a lot to the debate over whether AGI is strictly necessary to promote longevity and prosperity.
So, the Venn diagrams need to overlap even more!