Quantum physicist Michael Nielsen has published a powerful critical essay about EA. 

Summary:

Long and rough notes on Effective Altruism (EA). Written to help me get to the bottom of several questions: what do I like and think is important about EA? Why do I find the mindset so foreign? Why am I not an EA? And to start me thinking about: what do alternatives to EA look like? The notes are not aimed at effective altruists, though they may perhaps be of interest to EA-adjacent people. Thoughtful, informed comments and corrections welcome (especially detailed, specific corrections!) - see the comment area at the bottom.

Some passages I highlighted:

I have EA friends who donate a large fraction of their income to charitable causes. In some cases it's all their income above some fairly low (by rich developed world standards) threshold, say $30k. In some cases it seems plausible that their personal donations are responsible for saving dozens of lives, helping lift many people out of poverty, and preventing many debilitating diseases, often in some of the poorest and most underserved parts of the world. Some of those friends have directly helped save many lives. That's a simple sentence, but an extraordinary one, so I'll repeat it: they've directly helped save many lives.

 

As extraordinary as my friend's generosity was, there is something further still going on here. Kravinsky's act is one of moral imagination, to even consider donating a kidney, and then of moral conviction, to follow through. This is an astonishing act of moral invention: someone (presumably Kravinsky) was the first to both imagine doing this, and then to actually do it. That moral invention then inspired others to do the same. It actually expanded the range of human moral experience, which others can learn from and then emulate. In this sense a person like Kravinsky can be thought of as a moral pioneer or moral psychonaut, inventing new forms of moral experience.

 

Moral reasoning, if taken seriously and acted upon, is of the utmost concern, in part because there is a danger of terrible mistakes. The Nazi example is overly dramatic: for one thing, I find it hard to believe that the originators of Nazi ideas didn't realize that these were deeply evil acts. But a more everyday example, and one which should give any ideology pause, is overly self-righteous people, acting in what they "know" is a good cause, but in fact doing harm. I'm cautiously enthusiastic about EA's moral pioneering. But it is potentially a minefield, something to also be cautious about.

 

when EA judo is practiced too much, it's worth looking for more fundamental problems. The basic form of EA judo is: "Look, disagreement over what is good does nothing directly to touch EA. Indeed, such disagreement is the engine driving improvement in our notion of what is good." This is perhaps true in some God's-eye, omniscient, in-principle philosopher's sense. But EA community and organizations are subject to fashion and power games and shortcomings and biases, just like every other community and organization. Good intentions alone aren't enough to ensure effective decisions about effectiveness. And the reason many people are bothered by EA is not that they think it's a bad idea to "do good better". But rather that they doubt the ability of EA institutions and community to live up to the aspirations.

These critiques can come from many directions. From people interested in identity politics I've heard: "Look, many of these EA organizations are being run by powerful white men, reproducing existing power structures, biased toward technocratic capitalism and the status quo, and ignoring many of the things which really matter." From libertarians I've heard: "Look, EA is just leftist collective utilitarianism. It centralizes decision-making too much, and ignores both price signals and the immense power that comes from having lots of people working in their own self-interest, albeit inside a system designed so that self-interest (often) helps everyone collectively." From startup people and inventors I've heard: "Aren't EAs just working on public goods? If you want to do the most good, why not work on a startup instead? We can just invent and scale new technology (or new ideas) to improve the world!" From people familiar with the pathologies of aging organizations and communities, I've heard: "Look, any movement which grows rapidly will also start to decay. It will become dominated by ambitious careerists and principal-agent problems, and lose the sincerity and agility that characterized the pioneers and early adopters."

All these critiques have some truth; they also have significant issues. Without getting into those weeds, the immediate point is that they all look like "merely" practical problems, for which EA judo may be practiced: "If we're not doing that right, we shall improve, we simply need you to provide evidence and a better alternative". But the organizational patterns are so strong that these criticisms seem more in-principle to me. Again: if your social movement "works in principle" but practical implementation has too many problems, then it's not really working in principle, either. The quality "we are able to do this effectively in practice" is an important (implicit) in-principle quality.

 

I've heard several EAs say they know multiple EAs who get very down or even depressed because they feel they're not having enough impact on the world. As a purely intellectual project it's fascinating to start from a principle like "use reason and evidence to figure out how to do the most good in the world" and try to derive things like "care for children" or "enjoy eating ice cream" or "engage in or support the arts" as special cases of the overarching principle. But while that's intellectually interesting, as a direct guide to living it's a terrible mistake. The reason to care for children (etc) isn't because it helps you do the most good. It's because we're absolutely supposed to care for our children. The reason art and music and ice cream matter isn't because they help you do the most good. It's because we're human beings – not soulless automatons – who respond in ways we don't entirely understand to things whose impact on our selves we do not and cannot fully apprehend.

Now, the pattern that's been chosen by EA has been to insert escape clauses. Many talk about having a "warm fuzzies" budget for "ineffective" giving that simply makes them feel good. And they carve out ad hoc extension clauses like the one about having children or setting aside an ice cream budget or a dinner budget, and so on. It all seems to me like special pleading at a frequency which suggests something amiss. You've started from a single overarching principle that seems tremendously attractive. But now you've either got to accept all the consequences, and make yourself miserable. Or you have to start, as an individual, grafting on ad hoc extension clauses.

 

EA is an inspiring meaning-giving life philosophy. It invites people to strongly connect with some notion of a greater good, to contribute to that greater good, and to make it central in their life. EA-in-practice has done a remarkable amount of direct good in the world, making people's lives better. It's excellent to have the conversational frame of "how to do the most good" readily available and presumptively of value. EA-in-practice also provides a strong community and sense of belonging and shared values for many people. As moral pioneers EA is providing a remarkable set of new public goods.

All this makes EA attractive as a life philosophy, providing orientation and meaning and a clear and powerful core, with supporting institutions. Unfortunately, strong-EA is a poor life philosophy, with poor boundaries that may cause great distress to people, and underserves core needs. EA-in-practice is too centralized, too focused on absolute advantage; the market often does a far better job of providing certain kinds of private (or privatizable) good. However, EA-in-practice likely does a better job of providing certain kinds of public good than do many existing institutions. EA relies overmuch on online charisma: flashy but insubstantial discussion of topics like the simulation argument and x-risk and AI safety have a tendency to dominate conversation, rather than more substantial work. (This does not mean there aren't good discussions of such topics.) EA-in-practice is too allied with existing systems of power, and does little to question or change them. Appropriating the term "effective" is clever marketing and movement-building, but intellectually disingenuous. EA views illegibility as a problem to be solved, not as a fundamental condition. Because of this it does poorly on certain kinds of creative and aesthetic work. Moral utilitarianism is a useful but limited practical tool, mistaking quantification that is useful for making tradeoffs for a fundamental fact about the world.

I've strongly criticized EA in these notes. But I haven't provided a clearly and forcefully articulated alternative. It amounts to saying that someone's diet of ice cream and chocolate bars isn't ideal, without providing better food; it may be correct, but isn't immediately actionable. Given the tremendous emotional need people have for a powerful meaning-giving system, I don't expect it to have much impact on those people. It's too easy to arm-wave the issues away, or ignore them as things which can be resolved by grafting some exception clauses on. But writing the notes both helped me better understand why I'm not EA, and also why I think the EA principle would, with very considerable modification, make a valuable part of some larger life philosophy. But I don't yet understand what that life philosophy is.



The Michael Nielsen critique seems thoughtful, constructive, and well-balanced on first read, but I have some serious reservations about the underlying ethos and its implications.

Look, any compelling new world-view that is outside the mainstream culture's Overton window can be pathologized as an information hazard that makes its believers feel unhappy, inadequate, and even mentally ill by mainstream standards. Nielsen seems to view 'strong EA' as that kind of information hazard, and critiques it as such.

Trouble is, if you understand that most normies are delusional about some important issue, and you develop some genuinely deeper insights into that issue, the psychologically predictable result is some degree of alienation and frustration. This is true for everyone who has a religious conversion experience. It's true for everyone who really takes onboard the implications of any intellectually compelling science -- whether cosmology, evolutionary biology, neuroscience, signaling theory, game theory, behavior genetics, etc. It's true for everyone who learns about any branch of moral philosophy and takes it seriously as a guide to action.

I've seen this over, and over, and over in my own field of evolutionary psychology. The usual 'character arc' of ev psych insight is that (1) you read Dawkins or Pinker or Buss, you get filled with curiosity about the origins of human nature, (2) you learn some more and you feel overwhelming intellectual awe and excitement about the grandeur of evolutionary theory, (3) you gradually come to understand that every human perception, preference, value, desire, emotion, and motivation has deep evolutionary roots beyond your control, and you start to feel uneasy, (4) you ruminate about how you're nothing but an evolved robot chasing reproductive success through adaptively self-deceived channels, and you feel some personal despair, (5) you look around at a society full of other self-deceived humans unaware of their biological programming, and you feel black-pilled civilizational despair, (6) you live with the Darwinian nihilism for a few years, adapt to the new normal, and gradually find some way to live with the new insights, climbing your way back into some semblance of normie-adjacent happiness. I've seen these six phases many times in my own colleagues, grad students, and collaborators.

And that's just with a new descriptive world-view about how the human world works. EA's challenge can be even more profound, because it's not just descriptive, but normative, or at least prescriptive. So there's a painful gap between what we could be doing, and what we are doing. And so there should be, if you take the world in a morally serious way. 

I think the deeper problem is that given 20th century history, there's a general dubiousness about any group of people who do take the world in a morally serious way that deviates from the usual forms of mild political virtue signaling encouraged in our current system of credentialism, careerism, and consumerism. 

I think even on EA's own terms (apart from any effects from EA being fringe) there's a good reason for EAs to be OK with being more stressed and unhappy than people with other philosophies.

On the scale of human history we're likely in an emergency situation when we have an opportunity to trade off the happiness of EAs for enormous gains in total well-being. Similar to how during a bear attack you'd accept that you won't feel relaxed and happy while you try to mitigate the attack, but this period of stress is worth it overall. This is especially true if you believe we're in the hinge of history. 

In contrast to a bear attack, you don't expect to know that the "period of stress" has ended during your lifetime. Which raises a few questions, like "Is it worth it?" and "How sure can we be that this really is a stress period?". The thought that we especially are in a position to trade our happiness for enormous gains for society, while not impossible, is dangerous in that it's very appealing, regardless of whether it's true or not.

The thought that we especially are in a position to trade our happiness for enormous gains for society [...] is dangerous in that it's very appealing,

I'm not denying that what you say is true, but on the face of it, "the appeal of this ideology is that you have to sacrifice a lot for others' gain" is not an intuitively compelling message. 

In contrast to a bear attack, you don't expect to know that the "period of stress" has ended during your lifetime.

I expect to know this. Either AI will go well and we'll get the glorious transhuman future, or it'll go poorly and we'll have a brief moment of realization before we are killed etc. (or more realistically, a longer moment of awareness where we realize all is truly and thoroughly lost, before eventually the nanobots or whatever come for us).



 

By many estimates, solving AI risk would reduce the total probability of x-risk by only 1/3, or 2/3, or maybe 9/10 if you weight AI risk very heavily.

 

Personally I think humanity's "period of stress" will take at least thousands of years to resolve, but I might be being quite pessimistic. Of course situations will get better, but I think the world will still be "burning" for quite some time.

Here's a common belief in these circles (which I share):

If AI risk is solved through means other than "we collectively coordinate to not build TAI" (a solution which I think is unlikely, both because that level of global coordination is very hard and because the opportunity costs are massive), then soon after, whether human civilization flourishes or not is mostly a question that's out of human hands.

Eh this logic can be used to justify a lot of extreme action in the name of progress. Communists and Marxists have had a lot of thoughts about the "hinge of history" and used that to unleash terrible destruction on the rest of humanity.

This is completely unrelated to the great point you made with the comment, but I felt I had to share a classic(?) EA tip that worked well for me (uncertain how much this counts as a classic). I got to the nice nihilistic bottom of realising that my moral system is essentially based on evolution, but I reversed that within a year by reading a bunch of Buddhist philosophy and by meditating. Now it's all nirvana over here! (Try it out now...)

https://www.lesswrong.com/posts/Mf2MCkYgSZSJRz5nM/a-non-mystical-explanation-of-insight-meditation-and-the

https://www.lesswrong.com/posts/WYmmC3W6ZNhEgAmWG/a-mechanistic-model-of-meditation

https://www.lesswrong.com/s/ZbmRyDN8TCpBTZSip

I wrote a long twitter thread with some replies here FWIW: https://twitter.com/albrgr/status/1532726108130377729

I loved this bit: "comfortable modernity is consistent with levels of altruistic impact and moral seriousness that we might normally associate with moral heroism"

It's a good thread, and worth a look!

I think this maybe has the neoliberal flavor that Srinivasan objected to, but on my better days, it just seems good - better for those people who can wholeheartedly say "I can carry this load" to do so, and to be able to help others materially.

I find this tweet interesting - because rather than being neoliberal, the second half exemplifies the Marxist idea of "From each according to his ability, to each according to his needs."

I do think EA is too neoliberal, but IMO this isn't it :)

Correct me if I'm wrong--

I think part of what you're saying is that EA has innovated in bringing philanthropy down to a much 'lower' level. It's not just billionaires that can do it. If we look across at other societies with less developed economies, there are plenty of people with worse problems than our own. Even as a middle-class professional in the US, there is plenty of good you can do with a little bit of giving.

Maybe part of the innovation is bringing this to a (largely, I assume) secular community and taking an international perspective? I think of church communities and mutual aid organizations as having done this for many years on highly localized and personalized scales.

Also, re this: "yes, this set of self-proclaimed altruists isn’t having as much fun as they could be or other people are, that’s correct, and an intentional tradeoff they're making in pursuit of their moral goals."

Are we just talking about the survivors' guilt of being born in an advanced capitalist society, benefitting from hundreds of years of imperialist exploitation of other parts of the world?

 


The "misery trap" section feels like it is describing a problem that EA definitely had early on, but mostly doesn't now?

In early EA, people started thinking hard about this idea of doing the most good they could. Naively, this suggests doing things like giving up things that are seriously important to you (like having children), things that illegibly make you more productive (like a good work environment), or things that provide important flexibility (like having free time), and the author quotes some early EAs struggling with this conflict, like:

> my inner voice in early 2016 would automatically convert all money I spent (eg on dinner) to a fractional “death counter” of lives in expectation I could have saved if I’d donated it to good charities. Most EAs I mentioned that to at the time were like ah yeah seems reasonable

I really don't think many EAs would say "seems reasonable" now. If someone said this to me I'd give some of my personal history with this idea and talk about how it turns out that in practice this works terribly for people: it makes you miserable, very slightly increases how much you have available to donate, and massively decreases your likely long-term impact through burnout, depression, and short-term thinking.

One piece of writing that I think was helpful in turning this around was http://www.givinggladly.com/2013/06/cheerfully.html Another was https://www.benkuhn.net/box/

I think it's not a coincidence that the examples the author links are 5+ years old? If people are still getting caught in this trap, though, I'd be interested to see more? (And potentially write more on why it's not a good application of EA thinking.)

In case people are curious: Julia and I now have three kids and it's been 10+ years since stress and conflict about painful trade-offs between our own happiness and making the world better were a major issue for us.

I think this has gotten better, but not as much better as you would hope considering how long EAs have known this is a problem, how much they have discussed it being a problem, and how many resources have gone into trying to address it. I think there's actually a bit of an unfortunate fallacy here that it isn't really an issue anymore because EA has gone through the motions to address it and had at least some degree of success, see Sasha Chapin's relevant thoughts:

https://web.archive.org/web/20220405152524/https://sashachapin.substack.com/p/your-intelligent-conscientious-in?s=r

Some of the remaining problem might come down to EA filtering for people who already have demanding moral views and an excessively conscientious personality. Some of it is probably due to the "by-catch" phenomenon the anon below discusses that comes with applying expected value reasoning to having a positively impactful career (still something widely promoted, and probably for good reason overall). Some of it is this other, deeper tension that I think Nielsen is getting at:

Many people in Effective Altruism (I don't think most, but many, including some of the most influential) believe in a standard of morality that is too demanding for real people to realistically reach. Given the prevalence of actualist over possibilist reasoning in EA ethics, and given a not-totally-naive view of human psychology, pretty much everyone who believes this is on board with compartmentalizing do-gooding or do-besting from the rest of their life. The trouble runs deeper than this, unfortunately, because once you buy an argument that letting yourself have this is what will be best for doing good overall, you are already seriously risking undermining the psychological benefits.

Whenever you do something for yourself, there is a voice in the back of your head asking if you are really so morally weak that this particular thing is necessary. Even if you overcome this voice, there is a worse voice that instrumentalizes the things you do for yourself. Buying ice cream? This is now your "anti-burnout ice cream". Worse, have a kid (if you, like in Nielsen's example, think this isn't part of your best set of altruistic decisions): this is your "anti-burnout kid".

It's very hard to get around this one. Nielsen's preferred solution would clearly be that people just don't buy this very demanding theory of morality at all, because he thinks that it is wrong. That said, he doesn't really argue for this, and for those of us who actually do think that the demanding ideal of morality happens to be correct, it isn't an open avenue for us.

The best solution as far as I can tell is to distance your intuitive worldview from this standard of morality as much as possible. Make it a small part of your mind, one you internalize largely on an academic level, and maybe take out on rare occasions for inspiration, but insist on not viewing your day-to-day life through it. Again, though, the trickiness of this is, I think, a real part of the persistence of some of this problem, and I think Nielsen nails this part.

(edited on 10/24/22 to replace broken link)

Throwaway account to give a vague personal anecdote. I agree this has gotten better for some, but I think this is still a problem (a) that new people have to work out for themselves, going through the stages on their own, perhaps faster than happened 5 years ago; (b) that hits people differently if they are “converted” to EA but not as successful in their pursuit of impact. These people are left in a precarious psychological position.

I experienced both. I think of myself as “EA bycatch.” By the time I went through the phases of thinking through all of this for myself, I had already sacrificed a lot of things in the name of impact that I can’t get back (money, time, alternative professional opportunities, relationships, etc). Frankly some things got wrecked in my life that can’t be put back together. Being collateral damage for the cause feels terrible, but I really do hope the work brings results and is worth it.

Somehow, that givinggladly.com link is broken for me. Here is an archived version: https://web.archive.org/web/20220412232153/http://www.givinggladly.com/2013/06/cheerfully.html 

Passage 5 seems to prove too much, in the sense of "If you take X philosophy literally, it becomes bad for you" being applicable to most philosophies, but I very much like Passage 4, the EA judo one.

While it is very much true that disagreeing over the object-level causes shouldn't disqualify one from EA, I do agree that it is not completely separate from EA - that EA is not defined purely by its choice of causes, but neither does it stand fully apart from them. EA is, in a sense, both a question and an ideology, and trying to make sure the ideology part doesn't jump too far ahead of the question part is important. 

"Again: if your social movement "works in principle" but practical implementation has too many problems, then it's not really working in principle, either. The quality "we are able to do this effectively in practice" is an important (implicit) in-principle quality."

I think this is a very key thing that many movements, including EA, should keep in mind. I think that what EA should be aiming for is "EA has some very good answers to the question of how we can do the most good, and we think they're the best answers humanity has yet come up with to answer the question. That's different from thinking our answers are objectively true, or that we have all the best answers and there are none left to find." We can have the humility to question ourselves, but still have the confidence to suggest our answers are good ones.

I dream of a world where EA is to doing good as science is to human knowledge. Science isn't always right, and science has been proven wrong again and again in the past, but science is collectively humanity's best guess. I would like for EA to be humanity's best guess at how to do the most good. EA is very young compared to science, so I'm not surprised we don't have that same level of mastery over our field as science does, but I think that's the target.

Thank you Jay, this is such a great response, I especially liked this paragraph:

While it is very much true that disagreeing over the object-level causes shouldn't disqualify one from EA, I do agree that it is not completely separate from EA - that EA is not defined purely by its choice of causes, but neither does it stand fully apart from them. EA is, in a sense, both a question and an ideology, and trying to make sure the ideology part doesn't jump too far ahead of the question part is important. 

Also, to me I think EA is essentially about applying the scientific revolution to the realm of doing good (as a branch of science of sorts).

These are interesting critiques and I look forward to reading the whole thing, but I worry that the nicer tone of this one is going to lead people to give it more credit than critiques that were at least as substantially right, but much more harshly phrased.

The point about ideologies being a minefield, with Nazis as an example, particularly stands out to me. I pattern-match this to the parts of harsher critiques that go something like "look at where your precious ideology leads when taken to an extreme; this place is terrible!" Generally, the substantial mistake these make is casting EA as ideologically purist, ignoring the centrality of projects like moral uncertainty and worldview diversification, as well as EAs' limited willingness to bite bullets whose background logic they in principle largely endorse (see Pascal's Mugging and Ajeya Cotra's train to crazy town).

By not getting into telling us what terrible things we believe, but implying that we are at risk of believing terrible things, this piece is less unflattering, but is on shakier ground. It involves this same mistake about EA's ideological purism, but on top of this has to defend this other higher level claim rather than looking at concrete implications.

Was the problem with the Nazis really that they were too ideologically pure? I find it very doubtful. The philosophers of the time attracted to them generally were weird humanistic philosophers with little interest in the types of purism that come from analytic ethics, like Heidegger. Meanwhile most philosophers closer to this type of ideological purity (Russell, Carnap) despised the Nazis from the beginning. The background philosophy itself largely drew from misreadings of people like Nietzsche and Hegel, popular anti-semitic sentiment, and plain old historical conspiracy theories. Even at the time, intellectual critiques of Nazis often looked more like "they were mundane and looking for meaning from charismatic, powerful men" (Arendt) or "they aestheticized politics" (Benjamin) rather than "they took some particular coherent vision of doing good too far".

The truth is the lesson of history isn't really "moral atrocity is caused by ideological consistency". Occasionally atrocities are initiated by ideologically consistent people, but they have also been carried out casually by people who were quite normal for their time, or by crazy ideologues who didn't have a very clear, coherent vision at all. The problem with the Nazis, quite simply, is that they were very very badly wrong. We can't avoid making the mistakes they did from the inside by pattern matching aspects of our logic onto them that really aren't historically vindicated, we have to avoid moral atrocity by finding more reliable ways of not winding up being very wrong.

These are interesting critiques and I look forward to reading the whole thing, but I worry that the nicer tone of this one is going to lead people to give it more credit than critiques that were at least as substantially right, but much more harshly phrased.

I agree there's such a risk. But I also think that the tone actually matters a lot.

To be clear, I also agree with this.

I just wanted to say that, as of Sat afternoon PST, EA seems to be holding up very very well on HN.

The top comments are currently:

  1. JeffK: https://news.ycombinator.com/item?id=31619037
  2. Balanced piece about over demandingness and JSM: https://news.ycombinator.com/item?id=31618941
  3. Technical comment about kidneys, which is nice object level discussion: https://news.ycombinator.com/item?id=31619775
  4. Piece that says "This actually makes me more interested in EA, because the criticism only really chips away small caveats in the idea it presents" https://news.ycombinator.com/item?id=31619831

Hacker News has this nice mixture: high signal-to-noise, being ruthless to hacks, and senior anonymous people who can excavate very inconvenient content.

With this context, the current discussion and thoughts seems like a genuinely good signal. 

Like, it's more than just nice PR, but goes a bit farther and amounts to almost a good review.

Someone I know speaks to some pretty nice, reasonable people. Something that person hears, in private, high-trust situations, about EA is that "it's too demanding" and "I don't want to give up my life to charity, man". 

So very talented, virtuous people are saying they don't want to engage in EA, not because of bad values, bad epistemics, or culture, but because it just seems too much.

This is not a defect, I'm saying it's sort of the opposite. 

EA gets a lot of criticism like "the people fell in love with AI", "too much measurement", and "we just need to start the left/libertarian revolution and the markets/democracy will fix everything".

But what if these ideas are just the chaff that appears online? 

What if 50% of “criticism” (or maybe 95% of the potential population, weighted by potential contribution) amounts to mundane yet important things like, "Hey, this seems good, but I don't want to give up my series A to figure out how to contribute."[1]

That's not something a lot of people will write publicly, especially people who don't have a philosophical bent or a culture of writing online, and work 50+ hours in pretty demanding jobs.

  1. ^

    There are issues about dilution here, and I'm not saying EA should try to get most or even 10% of these people. Even a small fraction of these people represents a huge amount of talent.

    Many of these people aren't ideological, materialistic, or selfish, it's more like, "Wow, this seems like a lot, and I don't know how to engage."

Thanks for link-posting, I enjoyed this!

I didn't understand the section about EA being too centralized and focused on absolute advantage. Can anyone explain? 

EA-in-practice is too centralized, too focused on absolute advantage; the market often does a far better job of providing certain kinds of private (or privatizable) good. However, EA-in-practice likely does a better job of providing certain kinds of public good than do many existing institutions.

And footnote 11: 

It's interesting to conceive of EA principally as a means of providing public goods which are undersupplied by the market. A slightly deeper critique here is that the market provides a very powerful set of signals which aggregate decentralized knowledge, and help people act on their comparative advantage. EA, by comparison, is relatively centralized, and focused on absolute advantage. That tends to centralize people's actions, and compounds mistakes. It's also likely a far weaker resource allocation model, though it does have the advantage of focusing on public goods. I've sometimes wondered about a kind of "libertarian EA", more market-focused, but systematically correcting for well-known failures of the market.

Don't global health charities provide private goods (bed nets, medicine) that markets cannot? Markets only supply things people will pay for and poor people can't pay much. 

X-risk reduction seems like a public good, and animal welfare improvements are either a public good or a private good whose consumers definitely cannot pay.

I take it that centralization is in contrast to markets. But it seems like in a very real way EA is harnessing markets to provide these things. EA-aligned charities are competing in a market to provide QALYs as cheaply as possible, since EAs will pay for them. EAs also seem very fond of markets generally (ex: impact certificates, prediction markets). 

How is EA focused on absolute advantage? Isn't earning to give using one's relative advantage? 

I didn't understand the section about EA being too centralized and focused on absolute advantage. Can anyone explain? 

EA-in-practice is too centralized, too focused on absolute advantage; the market often does a far better job of providing certain kinds of private (or privatizable) good. However, EA-in-practice likely does a better job of providing certain kinds of public good than do many existing institutions.

It's interesting to conceive of EA principally as a means of providing public goods which are undersupplied by the market. A slightly deeper critique here is that the market provides a very powerful set of signals which aggregate decentralized knowledge, and help people act on their comparative advantage. EA, by comparison, is relatively centralized, and focused on absolute advantage. That tends to centralize people's actions, and compounds mistakes. It's also likely a far weaker resource allocation model, though it does have the advantage of focusing on public goods. I've sometimes wondered about a kind of "libertarian EA", more market-focused, but systematically correcting for well-known failures of the market.

I think he's saying something like:

Imagine the Soviet Union producing one model of dishwasher for all its people. We would expect them to do a terrible job. This is true even if the Soviet Union dedicates teams of specialists, literally a team of 200 engineers, designers, manufacturers, and marketers. It's obvious that isn't enough, and the final product would be stilted. In the US and other market economies, many companies rise and fall making dishwashers. That competition and innovation make for a much better product.

So, the argument goes, by having an intervention focused on provisioning malaria nets, GiveWell is making the same error. I think "absolute advantage" here refers to focusing on obtaining and distributing the most and cheapest nets to the most people, in place of a more flexible, less top-down solution that would somehow be more effective. Maybe even more, the writer is saying that GiveWell's entire team, by focusing on "maximizing good" in "one office", is committing the same sin as the Soviet planners. I think this is close to saying that having that one cost-effectiveness spreadsheet is bad.

I don't think this is a good take. 

It's just wrong. We can't actually provision hundreds of millions of dollars' worth of mosquito nets in a better, or even comparably cost-efficient, way by, say, creating a bounty in each malaria-plagued country or something similar.

There are many other sets of interventions which require a lot of coordination and where social, political, and other capital is essential to set up. Like, there's no way a market would produce a major AI safety org and handle all of the complexities.

Also, as you point out, almost all EA activity involving spending uses market methods.

The other content, "A slightly deeper critique here is that the market provides a very powerful set of signals which aggregate decentralized knowledge, and help people act on their comparative advantage...", is not good either.

To be specific, they recap arguments about markets (e.g. libertarian, "Road to Serfdom" style) in theoretical ways that pattern match to something that is ideological (and sort of basic, like dinner party talk). Mentioning externalities and public goods doesn't help. 

The issue is that this colors their entire essay: thinking the market can solve most of the world's problems, to the degree that a focused, talented team working on a global health problem is stilted or marginal, is a really strong claim.

It's hard to trust that this same model of the world has anything robust to say about much more complicated and esoteric things, like EA or AI safety.

I agree with your basic interpretation, but I think a different conclusion can be drawn. You're framing the argument as saying "A completely centralised government is bad, so we should use markets to do these things". Then you explain why markets alone can't really provide the things people need.

This is an argument on why not to jump from full centralisation to full libertarian decentralisation. But as in economic policy, there's vast middle ground. One can, as in social democracy, have that centralised guiding organisation but still rely on the power of markets as a tool.

In practice this means, for example, looking for opportunities like AMF, where collective organisation allows for much more efficient allocation of goods than would otherwise be possible; but also giving a substantial part of the resources through GiveDirectly, so that other goods can be allocated more dynamically by the market. Which is something "we" (i.e. GiveWell) don't currently do.

Thanks for posting this. I also appreciated this thoughtful essay.

There was also this passage (not in your excerpts):

An alternate solution, and the one that has, I believe, been adopted by many EAs, has been a form of weak-EA. Strong-EA takes "do the most good you can do" extremely seriously as a central aspect of a life philosophy. Weak-EA uses that principle more as guidance. Donate 1% of your income. Donate 10% of your income, provided that doesn't cause you hardship. Be thoughtful about the impact your work has on the world, and consult many different sources. These are all good things to do! The critique of this form is that it's fine and good, but also hard to distinguish from the common pre-existing notion many people have, "live well, and try to do some good in the world".

My emphasis. (I'm not quite sure whether Nielsen endorses this - see his comment further down.)

I wouldn't agree with that. I think that one can increase one's impact very substantially over the baseline referred to in the bold sentence by, e.g. working directly on a high-impact cause, even if one doesn't donate a large fraction of one's income. 

(Fwiw I also wouldn't call that weak-EA.)

Right. Donating 10-50% of time or resources as effectively as possible is still very distinctive, and not much less effective than donating 100%.

and not much less effective than donating 100%


Wouldn't it be roughly a tenth to half as effective?

Whereas choosing the wrong cause could cost orders of magnitude.

Fwiw, I think the logic is very different when it comes to direct work, and that phrasing it in terms of what fraction of one's time one donates isn't the most natural way of thinking about it.

You can usually relatively straightforwardly divide your monetary resources into a part that you spend on donations and a part that you spend for personal purposes.

By contrast, you don't usually spend some of your time at work for self-interested purposes and some for altruistic purposes. (That is in principle possible, but uncommon among effective altruists.) Instead you only have one job (which may serve your self-interested and altruistic motives to varying degrees). Therefore, I think that analogies with donations are often a stretch and sometimes misleading (depending on how they're used).

[Cross-posting from the comments]

I want to express two other intuitions that make me very skeptical of some alternatives to effective altruism.

As you write:

"By their fruits ye shall know them" holds for intellectual principles, not just people. If a set of principles throws off a lot of rotten fruit, it's a sign of something wrong with the principles, a reductio ad absurdum.

I really like this thought. We might ask: what are the fruits of ineffective altruism? Admittedly, much good. But also the continued existence, at scale, of extreme poverty, factory farming, low-probability high-impact risks, and threats to future generations, long after many of these problems could have been decimated if there had been widespread will to act.

That's a lot of rotten fruit.

In practice, by letting intuition lead, ineffective altruism systematically overlooks distant problems, leaving their silent victims to suffer and die en masse. So even if strong versions of EA are too much, EA in practice looks like a desperately needed corrective movement. Or at least, we need something different from whatever allows such issues to fester, often with little opposition.

Lastly, one thing that sometimes feels a bit lost in discussions of effectiveness: "effective" is just half of it. Much of what resonates is the emphasis on altruism (which is far from unique, but also far from the norm): on how, in a world with so much suffering, much of our lives should be oriented around helping others.

Nielsen's critique is humble, well-written, and actually quite compelling. I also encourage people not to skip the footnotes.

I'm still relatively new to the large body of historic EA discussion, so I apologize in advance for retreading any ground that the community has already covered.

Recently I've been thinking more and more about the idea that individual altruism is simply not a scalable and sustainable model to improve the world. We have to achieve systemic change that re-aligns incentives across society. I sense a little bit of this between the lines of this article.

There really is nothing an individual can do alone in terms of personal sacrifice of resources that will fix the world simply with that transfer of resources. What we need is systems of government that redistribute resources at scale, taking the burden of such choice away from individuals. Besides this being the only way to alleviate human suffering at scale, it's also the only way to reliably account for externalities.

Imagine an anarchic society that relied solely on individual altruism to help the needy. Would we sit here debating how much individuals should give, or would we be advocating for some sort of government to centralize and formalize the process of resource allocation? Similarly, 10% or 20% is not the issue; it's about fixing a society that has to rely on individual goodwill rather than a (better) built-in system of redistribution according to need.

Is the "final form" of EA simply radical, much-more-inclusive democracy?

I agree with this message, but I still think EA has something important to contribute to the mindset of systemic change advocates. E.g. scientific thinking, measuring outcomes, checking the accuracy of our beliefs, etc.

Recognizing the existence of systemic problems is far from enough to solve them. We have to carefully analyze how to apply our resources to that, and to make sure we're not letting the poor fall between the cracks in the meantime (or the entire world go extinct).

Is the "final form" of EA simply radical, much-more-inclusive democracy?

I don't think it's possible to reliably predict now the "final form of EA", if such a constant limit even exists. But IMO the inability of any currently existing ideology or social movement to solve the world's problems so far, probably precludes the definition of our aspirations entirely in terms of those.

A useful piece of context. When asked about recommendations on charitable giving Michael Nielsen writes:

Same answer as @TheZvi's earlier, I'm afraid. I'm pretty Hayekian; I wish there were good price signals here! In some ways I view that as what EA is doing: it is trying to use community argument and institutions to price public goods (& the like) appropriately.

(Crossposting)

This is a wonderful critique - I agreed with it much more than I thought I would.

Fundamentally, EA is about two things. The first is a belief in utilitarianism or a utilitarian-esque moral system: that there exists an optimal world we should aspire to. This is a belief I take to be pretty universal, whether people want to admit it or not.

The second part of EA is the belief that we should try to do as much good as possible. Emphasis on “try” - there is a subtle distinction between “hope to do the most amount of good”(the previous paragraph) and “actively try to do the most amount of good”. This piece points out many ways in which doing the latter does not actually lead to the former. The focus on quantifying impact leads to a male/white community, it leads to a reliance on nonprofits that tend to be less sustainable, it leads to outsourcing of intellectual work to individual decision-makers, etc.

But the question of “does trying to optimize impact actually lead to optimal outcomes?” is just an epistemic one. The critiques mentioned are simply counter-arguments, and there are numerous arguments in favor that many others have made. But this is a question on which we have some actual evidence, and I feel that this piece understates the substantial work that EA has already done. We have very good evidence that GiveWell charities have an order of magnitude higher impact than the average one. We are supporting animal welfare policy that has had some major victories in state referenda. We have good reason to believe AI safety is a horribly neglected issue that we need to work on.

This isn’t just a theoretical debate. We know we are doing better work than the average altruistic person outside the community. Effective Altruism is working.

EA is about two things.

[1] belief in utilitarianism or a utilitarian-esque moral system, that there exists an optimal world we should aspire to.

[2] belief that we should try to do as much good as possible

I would say that is a reasonable descriptive claim about the core beliefs of many in the community (especially more of the hardcore members), but IMHO neither are what "EA is about".

I don't see EA as making claims about what we should do or believe.

I see it as a key question of "how can we do the most good with any given unit of resource we devote to doing good" and then taking action upon what we find when we ask that.

The community and research field have certain tools they often use (e.g. use scientific evidence when it's available, using expected value reasoning) and many people who share certain philosophical beliefs (e.g. that outcomes are morally important) but IMHO these aren't what "EA is about".

I see [EA] as a key question of "how can we do the most good with any given unit of resource we devote to doing good" and then taking action upon what we find when we ask that.

I also consider this question to be the core of EA, and I have said things like the above to defend EA against the criticism that it's too demanding. However, I have since come to think that this characterization is importantly incomplete, for at least two reasons:

  1. It's probably inevitable, and certainly seems to be the case in practice, that people who are serious about answering this question overlap a lot with people who are serious about devoting maximal resources to doing good. Both in the sense that they're often the same people, and in the sense that even when they're different people, they'll share a lot of interests and it might make sense to share a movement.
  2. Finding serious answers to this question can cause you to devote more resources to doing good. I feel very confident that this happened to me, for one! I don't just donate to more effective charities than the version of me in a world with no EA analysis, I also donate a lot more money than that version does. I feel great about this, and I would usually frame it positively - I feel more confident and enthusiastic about the good my donations can do, which inspires me to donate more - but negative framings are available too.

So I think it can be a bit misleading to imply that EA is only about this key question of per-unit maximization, and contains no upwards pressures on the amount of resources we devote to doing good. But I do agree that this question is a great organizing principle.

Thanks Erin! I wouldn't say that EA is only about the key question, I just disagree that utilitarianism and an obligation to maximise are required or 'what EA is about'. I do agree that they are prevalent (and often good to have some upwards pressure on the amount we devote to doing good) 😀 

Is "better than average" that good? Most people and projects with a really high positive impact were not related to EA (even if simply because their impact happened before EA existed). It certainly doesn't seem like "our version of EA" is necessary to have the most impact. Whether it's sufficient, we don't know yet.

(I have crossposted the comments below here.)

Thanks for this article! Below are some comments.

Perhaps we need a Center for Effective Effective Altruism? Or Givewellwell, evaluating the effectiveness of effectiveness rating charities.

Note there are some external evaluations of EA-aligned organisations and recommendations made by them. Some examples:

Again: if your social movement "works in principle" but practical implementation has too many problems, then it's not really working in principle, either. The quality "we are able to do this effectively in practice" is an important (implicit) in-principle quality.

I think this is an important point.

This is a really big problem for EA. When you have people taking seriously such an overarching principle, you end up with stressed, nervous people, people anxious that they are living wrongly. The correct critique of this situation isn't the one Singer makes: that it prevents them from doing the most good. The critique is that it is the wrong way to live.

In practice, it is unclear to me how different the two critiques are. I would say doing the most good is most likely not compatible with "living in a wrong way", because too much stress etc. is not good (for yourself or others).

Furthermore, the notion of a single "the" good is also suspect. There are many plural goods, which are fundamentally immeasurable and incommensurate and cannot be combined.

"The" good is a very complex function of reality, but why would it be fundamentally immeasurable and incommensurate?

Indeed, the more illegibility you conquer, the more illegibility springs up, and the greater the need for such work.

I am not sure I fully understand the concept of illegibility, but it does not seem to be much different from knowledge about the unknown. As our knowledge about what was previously unknown increases, our knowledge about what is still unknown also increases. Why is this problematic?

Some of what Michael is talking about reminds me of the well-known "savior complex" that's fairly common among "change the world" types, a category which would include many EA members, methinks. How common is that complex in the EA community? Are there any data or surveys on EA mental health?