I'm submitting the below as part of the Red Teaming Contest. Any prize money won will be donated to the non-profit I founded (the Rikers Debate Project).
--
I think it makes sense to start with who I am, because I'm hoping to offer the somewhat unique perspective of a curious outsider on EA.
I consider myself “EA adjacent.” I have a number of good friends (Josh Morrison, Jay Shooster, Alex Silverstein) who are involved in the movement to varying degrees (then again, because I'm only adjacent, I have no idea if any of you will know these names). I'm a rationalist by nature, did high-level college debate with decent success, and was part of the Moneyball-era baseball analytics movement (so I'm comfortable at the intersection of numbers and logic). I think all of those qualities give me a proclivity toward EA. These days I'm a rather boring complex commercial litigator who does a good amount of pro bono work. My greatest civic contribution is founding (with Josh Morrison and others) the Rikers Debate Project, which I consider to be a rather typical, not especially EA-y non-profit.
I'm writing this because I think I've absorbed enough about EA via osmosis to provide a halfway intelligible critique. [1] To state my lack of bona fides: I sometimes read EA material by accident, but neither regularly nor intentionally. I had to make an account on this forum just to post this. I've never heard Will MacAskill speak, but multiple people have told me about him. I read Peter Singer before I associated him with EA.
I am generally sympathetic to and supportive of the positions of the EA movement. I mean, who can oppose ambitious people coming together to do effective good for the world? It sounds wonderful. In both theory and practice, I care about the movement (from a distance) and hope it succeeds.
The devil is in the details, as always. I offer the following critique in part because I feel guilty that others have done far more to help the world than I have, and the least I can do is share my thoughts to help the movement improve. I caution that the below is based on an outsider's second-hand perspective, so it may (and likely will) get details wrong. But my hope is that the spirit is right.
I have heard that there has lately been a focus in EA on long-term problems. You can read a bit about longtermism in Simon Bazelon's post here. [2] Intuitively, longtermism makes sense. EA, as a collective movement, has a finite amount of resources in the present day, yet it occupies a literally unique temporal position. As a percentage of all resources from here on out, present resources should therefore be used to exploit that unique position and maximize future utility returns. To invert the hypo a bit: if the movement could go back in time, then using the money on anything but, say, ending slavery or stopping the Holocaust (or, to be more EA about it, handing out vaccines/cures for the bubonic plague) would be not just foolhardy but ethically disastrous. Therefore, we must focus on fat-tail future risks that threaten future life.
I like this thought a lot. I think it makes sense. I write to offer four discrete (but at times overlapping) concerns/cautions with respect to longtermism. At most, the implication of these criticisms is that there may be a current overcorrection toward longtermism that should be partially corrected back in the direction of the prior distribution. This doesn't mean we stop focusing on long-term risks (far from it), [3] but simply that we recalibrate the risk-utility curve and potentially allocate more resources toward present-ish causes. At the least, I think these criticisms should be discussed, and persuasive reasons should be offered for why they do not merit serious concern.
- Political Capital Concern: There is a compelling case to be made that a wildly successful EA movement could do as much good for the world as almost any other social movement in history. And even if the movement is only marginally successful, the utility implications are enormous, provided its underlying precepts are somewhat sound.
To that end, it is incredibly worthwhile for the movement to be politically and socially successful. If the movement dies in the present moment, it can do little to help future life. But because helping future people seems abstract and foreign to the everyday person who wants help right now, and because future life is easily otherized, the movement is susceptible to the criticism that it's not actually helping anyone. Indeed, most present-day people will consider movements that help future life to be the moral equivalent of movements that help no one (this is, obviously, massively wrong, but it is still an important observation).
One way to address this political capital concern is to provide direct, tangible utility to present humans. I know this happens in the movement, and my point isn't to take away from those gains made to help people in the present. Instead, the thought is: when running your utility models, factor this in however you can. Consider that utility translated from EA resources to present life, when done effectively and messaged well, [4] redounds to the benefit of future life as well.
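To make "factor this in" slightly more concrete, here is a minimal sketch of what such a model might look like. Every parameter below is hypothetical and chosen purely for illustration; the only point is that present-day spending enters the model twice, once as direct utility and once as a boost to the probability that the movement survives to realize its longtermist bets.

```python
# Minimal sketch of a utility model with a political-capital term.
# All parameters are hypothetical and chosen purely for illustration.

def expected_total_utility(present_spend: float, longterm_spend: float) -> float:
    PRESENT_UTIL_PER_DOLLAR = 1.0    # assumed direct near-term impact
    LONGTERM_UTIL_PER_DOLLAR = 40.0  # assumed (highly uncertain) long-term impact
    BASE_SURVIVAL = 0.5              # assumed P(movement thrives) with no near-term work
    CAPITAL_PER_DOLLAR = 4e-9        # assumed political-capital effect of present giving

    # Present-day giving raises the odds that the movement survives to
    # realize its longtermist bets (capped at certainty).
    p_movement_thrives = min(1.0, BASE_SURVIVAL + CAPITAL_PER_DOLLAR * present_spend)

    direct = PRESENT_UTIL_PER_DOLLAR * present_spend
    indirect = p_movement_thrives * LONGTERM_UTIL_PER_DOLLAR * longterm_spend
    return direct + indirect

# Whether reallocating money toward present causes pays off depends
# entirely on these made-up parameters; the point is only that the
# political-capital term belongs in the model at all.
print(expected_total_utility(present_spend=0, longterm_spend=100e6))
print(expected_total_utility(present_spend=50e6, longterm_spend=50e6))
```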
- Social Capital Concern: This one might be a bit meandering, but I'll get there, hopefully. I think this is probably the most important of my four criticisms. It is rather meta, and I agree with Michael Nielsen's point that consistent self-reflection is necessary and healthy for the EA movement. [5] [6]
EA proponents should not have to live hermetic lifestyles. In fact, a collective spirit with communal living is healthy and good for the movement. This is not the very tired "EA is a cult" jab, which I find uncompelling. That being said, the movement should be aware of potential pitfalls that come with this approach, and apply appropriate guardrails.
Here is my concern regarding the intersection of EA-as-community and longtermism: focusing on long-term problems is probably way more fun than focusing on present ones. [7] Longtermist projects seem inherently more big-picture and academic, detached from the boring mundanities of present reality. There is a related concern that longtermism may fetishize future life, in the sense of seeing ourselves as saviors who will be looked back on by billions in the future with gratitude and outright reverence for caring so much about posterity. [8]
But that aside, if I am correct that longtermist projects are sexier by nature, then adding communal living/organizing to EA will probably lead a lot of people to use flimsy models to talk and discuss and theorize and pontificate, rather than create tangible utility, so that they can work on cool projects without getting their hands too dirty, all while claiming the mantle of not just equal but greater do-gooding. “Head to Africa to do charity work? Like a normie who never read Will MacAskill? Buddy, I’m literally saving a billion lives in 2340 right now.” In other words, individual EA actors, given the social incentives that come with increased communal living, will want to find reasons to take on longtermist projects because doing so increases their social capital within the community. I don't mean to imply that anyone is doing this consciously or in bad faith, [9] but that doesn't mean it won't happen.
So this concern takes no issue with longtermist projects or EA-as-community practices on their own. Instead, the concern is that, in tandem, the latter will make people gravitate toward the former not out of disciplined dedication to greatest-utility principles, but simply because they seem cool. EA followers may be less susceptible to animal spirits than the average Joe, but they are not immune.
- Muscle Memory Concern: I have founded an organization that helps people. It is a whole lot of work (and I've done it on top of my already stressful job). Aside from providing your organization's core value-add, you need to worry about its day-to-day running (people, finances, legal, etc.). And once your organization is self-sufficient and no longer consistently facing existential threats, it becomes increasingly easy to get distracted and chase new projects that seem exciting and fresh.
This is why it's important to have quick muscle memory for your core value-add (here, the steady conversion of resources into efficiently distributed utility). If you stray and find your organization lacking its typical punch, you want to be able to snap back into it fast; otherwise the movement can become rigid and stale, and you may never find your way back to your former self.
I think this is a reason to avoid a disproportionate emphasis on longtermist projects. Because longtermist efficacy is inherently more difficult to calculate with confidence, it can become quite easy to forget how to provide utility quickly and confidently. Basically: if you read enough AI doomposts, you might forget how to build a malaria net.
- Discount Factor Concern: This one is simple. Future life is less likely to exist than current life. I understand the irony here, since longtermist projects seek to make it more likely that future life exists. But inherently you just have to discount the utility of each individual future life. In the aggregate, there's no question that the utility gains are still enormous. But each individual life should carry some discount based on this less-likely-to-exist factor.
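As a minimal sketch of what this discount might look like (all numbers below are made up for illustration): if each century carries some chance of extinction, the utility of lives in century t should be weighted by the probability that anyone is alive in century t.

```python
# Sketch of discounting future lives by their probability of existing.
# All numbers are hypothetical, chosen only for illustration.

SURVIVAL_PER_CENTURY = 0.99  # assumed P(no extinction) in any given century
LIVES_PER_CENTURY = 10e9     # assumed number of lives per century
UTILITY_PER_LIFE = 1.0       # normalize the utility of one life to 1

expected_utility = 0.0
p_alive = 1.0                # P(humanity has survived to this century)
for century in range(1, 101):  # look 100 centuries ahead
    p_alive *= SURVIVAL_PER_CENTURY
    expected_utility += p_alive * LIVES_PER_CENTURY * UTILITY_PER_LIFE

# Undiscounted, 100 centuries of 10 billion lives is 1e12 life-utils;
# with the assumed 1%-per-century extinction risk, the expected total
# is ~6.3e11, and each marginal century counts for less than the last.
print(f"{expected_utility:.3e}")
```

The aggregate is still enormous, which is the longtermist's point; the discount just keeps each individual future life from being weighted as heavily as a present one.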
Anyway, those are my thoughts. I hope that they provide some benefits to the community. And I do greatly appreciate the sacrifices people are making to help others! It's inspiring. Good luck!
[1] I'm not having anyone edit or review this for me; I'd like all my thoughts and mistakes to be my own.
[2] Simon has a follow-up post where he discusses a common critique of longtermism: uncertainty. I don't address that critique here, since I find it unpersuasive. It's a concern, sure, but one inherent in any longtermist approach. I think it's best to focus here on ideas that aren't so obvious.
[3] "Overcorrection" here does not mean that there should never have been a correction. There should have been. I take the focus on longtermism, relative to the focus beforehand, to be a welcome development.
[4] Picking effective politicians affiliated with the movement is obviously very important. I'll attribute the choices on that front so far to, uh, growing pains...
[5] Michael is correct that what helps make EA an attractive ideology is the idea that self-reflection and openness to criticism are healthy for the organization. That is a wonderful principle for an organization/community committed to improving, rather than simply consolidating power for individual actors.
[6] I don't want to get sidetracked, but I also have to mention that I tend to agree more with this tweet/thread by Alexander Berger than I do with most of Michael's post. Maybe another post, another day.
[7] If this is wrong, my entire point fails.
[8] Hot take: lots of EA people think they're playing Ender's Game, where (spoiler alert) they actually save humanity in the end.
[9] There is a related concern here: longtermist projects may be easier to get funding for with weak data, à la tech founders and VC firms in the last few years, but I imagine the movement already considers this seriously.
--
For future submissions to the Red Teaming Contest, I'd like to see posts that are much more rigorously argued than this. I'm not concerned about whether the arguments are especially novel.
My understanding of the key claim of the post is that EA should consider reallocating some resources from longtermist to neartermist causes. This seems plausible – perhaps some types of marginal longtermist donations are predictably ineffective, or it's bad if community members feel that longtermism unfairly has easier access to funding – but I didn't find the four reasons given in this post particularly compelling.
The section Political Capital Concern appears to claim: if EA as a movement doesn't do anything to help regular near-term causes, people will think it's not doing anything to help people, and it could die as a movement. I agree that this is possible (though I also think a "longtermism movement" could still be reasonably successful, albeit unlikely to have much membership compared to EA). However, EA continues to dedicate substantial resources to near-term causes – hundreds of millions of dollars of donations each year! – and this number is only increasing, as GiveWell hopes to direct $1 billion of donations per year. EA continues to highlight its contributions to near-term causes. As a movement, EA is doing fine in this regard.
So then, if the EA movement as a whole is doing fine in this regard, who should change their actions based on the political capital concern? I think it's more interesting to examine whether local EA groups, individuals, and organizations should have a direct positive impact on near-term causes for signalling reasons. The post only gives the following recommendation (which I find fairly vague): "Instead, the thought is: when running your utility models, factor this in however you can. Consider that utility translated from EA resources to present life, when done effectively and messaged well, [4] redounds to the benefit of future life as well." However, rededicating resources from longtermism to neartermism has costs for the longtermist projects you're no longer supporting. How do we navigate these tradeoffs? It would have been great to see worked examples here.
The "Social Capital Concern" section writes:
This might be true for some people, but I think for most EAs, concrete or near-term ways of helping people have a stronger emotional appeal, all else equal. I would find the inverse of the sentence a lot more convincing, to be honest: "focusing on near-term problems is probably way more fun than ones in the distant future. Near-term projects seem inherently more appealing and helpful, grounded in present-day realities."
Longtermist projects may be cool, and their utility may be more theoretical than that of near-term projects, but I'm extremely confused about what you mean when you say they don't involve getting your hands dirty (as if near-termist work, such as GiveWell's charity effectiveness research, involves more hands-on work). Effective donations have historically been the main neartermist EA activity, and donating is quite hands-off.
This seems likely, and thanks for raising this critique (especially if it hasn't been highlighted before), but what should we do about it? The Red Teaming Contest is looking for constructive and action-relevant critiques, and I think it wouldn't be that hard to take some time to propose suggestions. The action implied by the post is that we should consider shifting more resources to near-termism, but I don't think that would necessarily be the right move, compared to, e.g., being more thoughtful about social dynamics and making an effort to welcome neartermist perspectives.
The section on Muscle Memory Concern writes: "Because longtermist efficacy is inherently more difficult to calculate with confidence, it can become quite easy to forget how to provide utility quickly and confidently."
I don't know; even the most meta of longtermist projects, such as longtermist community building (or, to go another meta level up, support for longtermist community building), is quite grounded in metrics and has short feedback loops, such that you can tell whether your activities are having an impact – if not on utility across all time, then at least on something tangible, such as high-impact career transitions. I think those skills would transfer fairly well to something more near-termist, such as community organizing for animal welfare, or running organizations in general. In contrast, if you're doing charity effectiveness research, whether near-termist or longtermist, it can be hard to tell if your work is any good. And now that we have more EAs getting their hands dirty with projects instead of just earning to give, we have more experience as a community executing projects, whether longtermist or near-termist.
As for the final section, the Discount Factor Concern ("each individual life should carry some discount based on this less-likely-to-exist factor"):
I think longtermists already account for the fact that we should discount future people by their likelihood of existing. That said, longtermist expected utility calculations are often more naive than they should be. For example, we often wrongly interpret reducing x-risk from one cause by 1% as reducing x-risk as a whole by 1%, or conflate a 1% x-risk reduction this century with a 1% x-risk reduction across all time.
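To illustrate both conflations with a toy calculation (every number below is made up):

```python
# Toy numbers, purely to illustrate the two conflations above.

ai, bio, other = 0.10, 0.05, 0.05  # assumed per-century x-risks by cause
total = 1 - (1 - ai) * (1 - bio) * (1 - other)

# Conflation 1: a 1% (relative) cut to one cause's risk is not a 1%
# cut to total x-risk.
new_total = 1 - (1 - ai * 0.99) * (1 - bio) * (1 - other)
print(f"total x-risk: {total:.4f} -> {new_total:.4f}")
# Here total risk falls by roughly 0.5% relative, not 1%.

# Conflation 2: a reduction this century is not a reduction across all
# time. If the same risk recurs each century, survival over 100
# centuries is (1 - risk)^100, and a one-century improvement barely
# moves that number.
p_survive = (1 - total) ** 100
p_survive_one_better = (1 - new_total) * (1 - total) ** 99
print(f"P(survive 100 centuries): {p_survive:.2e} -> {p_survive_one_better:.2e}")
```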
(I hope you found this comment informative, but I don't know if I'll respond to replies, as I've already spent an hour writing this and don't know if it was a good use of my time.)
I'm really sorry that my comment was harsher than I intended. I think you've written a witty and incisive critique which raises some important points, but I had raised my standards since this was submitted to the Red Teaming Contest.