status: kind of rambly but I wanted to get this out there in case it helps
This week's events triggered some soul-searching for me: I found myself wondering whether effective altruism even makes sense as a coherent thing anymore.
The reason I thought EA might break up or dissolve was something like this: EA mostly attracted naive maximizer-types ("do the Most Good, using reasoning"), but it's now obvious that maximizing goodness doesn't work in practice. We have a really clear example of where trying to do that fails (SBF, if you attribute pure motives to him), as well as a lot of recent quotes from EA luminaries saying that you shouldn't do it. I didn't see what else holds us together besides the maximizing thing.
But I was kind of ignoring the reasoning thing! I thought about it, and I think we can get by with a minimal change. The framing I like is "Do good, using REASONING". With capital letters :)
I think deleting "the most" is a change we should have made a long time ago; few prominent people in EA were claiming to do the most good anyway. And EA at its core is about reasoning: reasoning carefully, using evidence; thinking about first-order and second-order effects; comparing the options in front of you; argument and debate. The simpler phrasing of this new mission is intended to make reasoning stand out.
If this direction is adopted, I have the following hopes:
- that EA will become a "bigger tent," accepting of more types of people doing more types of good things in the world and reasoning about them. For example, we'd welcome anyone who is trying to do good and is open to talking through the 'why' behind what they are doing
- that naive utilitarian maximizers will go away or be a bit more humble :)
- that people will put more emphasis on developing their own reasoning processes, and rely less on the reasoning of others when making big decisions in their lives.
- that cause prioritization will get less emphasis, especially career cause prioritization (I think the maximizing mindset regularly causes people to make bad career decisions)
(Some color on the final one: I've had a blog post brewing for a long time against strong career cause prio, but I haven't managed to write it up convincingly. For example, I think AI is a bad career direction for a lot of people, but young EAs are convinced to try it anyway because AI is held up as the priority path and they're told they'll have so much more impact if they make it. This seems bad for lots of reasons, which I'll try to write up if I can ever figure out how to articulate them.)
Anyway, I think these hopes, if they pan out, will make the community stronger. And, though I am normally loath to argue about optics, I do think this change would counter most of the arguments against EA principles that you regularly see in the news media (such as that EA is about dangerous maximizing, that it's only for elites, or that young people's careers get disrupted in unstable/chaotic ways when they encounter EA).
I am not fully sure about this, and it's a bit late here. But here are some thoughts that came to mind while thinking more about it:
I think I do personally believe that, if you actually think hard about impact, few things matter, and also that the world is confusing and lots of stuff turns out to be net-negative (for instance, if you take AI X-risk seriously, a lot of stuff that previously seemed good in terms of accelerating technological progress now suddenly looks quite bad).
And so I don't even know whether a community that just broadly encourages people to do things that seem ambitious and good ends up net-positive for the world. The world does indeed strike me as the kind of place with lots of crucial considerations that suddenly invert the sign on various things, and I am primarily excited about EA as a place that can collectively orient toward those crucial considerations and build incentives and systems that align with them.
I am also separately excited about a community that just helps people reason better, but one of the key things I would try to get across in such a community is the contingency of the goodness of various actions in the world, and that the world is confusing and heavy-tailed. That makes for a world where you really have to make the right decisions, or you might very well end up causing great harm, or missing out on extremely great benefits.