Personal opinion only. Inspired by filling out the Meta coordination forum survey.
Epistemic status: Very uncertain, rough speculation. I’d be keen to see more public discussion on this question
One open question about the EA community is its relationship to AI safety (see e.g. MacAskill). I think the relationship between EA and AI safety (plus GHD and animal welfare) previously looked something like this (up until 2022ish):[1]
With the growth of AI safety, I think the field now looks something like this:
It's an open question whether the EA community should further grow the AI safety field, or whether the EA community should become a field distinct from AI safety. My preferred approach is something like: EA and AI safety grow into new fields rather than into each other:
- AI safety grows in AI/ML communities
- EA grows in other specific causes, as well as an “EA-qua-EA” movement.
As an ideal state, I could imagine the EA community having a similar relationship to AI safety as it currently has to animal welfare or global health and development.
However, I’m very uncertain about this, and curious to hear what other people’s takes are.
- ^
I’ve omitted non-AI longtermism, along with other fields, for simplicity. I strongly encourage not interpreting these diagrams too literally.
I've written a bit about this here and think that they would both be better off if they were more distinct.
As AI safety has grown over the last few years, there may have been missed growth opportunities from not having a larger, separate identity.
I spoke to someone at EAG London 2023 who didn't realise that AI safety would be discussed at EAG until someone suggested they go after doing an AI safety fellowship. There are probably many people with an interest in emerging tech risks who would have got involved earlier if they'd been presented with those options at the start.