Devin Kalish

Hello, I'm Devin. I blog here along with Nicholas Kross, and I'm currently working on a bioethics MA at NYU.

Comments

Michael Nielsen's "Notes on effective altruism"

I think this has gotten better, but not as much better as you would hope, considering how long EAs have known this is a problem, how much they have discussed it being a problem, and how many resources have gone into trying to address it. I think there's actually a bit of an unfortunate fallacy here: the assumption that it isn't really an issue anymore because EA has gone through the motions to address it and had at least some degree of success. See Sasha Chapin's relevant thoughts:

https://sashachapin.substack.com/p/your-intelligent-conscientious-in?s=r

Some of the remaining problem might come down to EA filtering for people who already have demanding moral views and an excessively conscientious personality. Some of it is probably due to the "by-catch" phenomenon the anon below discusses, which comes with applying expected value reasoning to having a positively impactful career (still something widely promoted, and probably for good reason overall). Some of it is this other, deeper tension that I think Nielsen is getting at:

Many people in Effective Altruism (I don't think most, but many, including some of the most influential) believe in a standard of morality too demanding for it to be realistic for real people to reach it. Given the prevalence of actualist over possibilist reasoning in EA ethics, and just not being totally naive about human psychology, pretty much everyone who does believe this is on board with compartmentalizing do-gooding, or do-besting, from the rest of their life. Unfortunately, the trouble runs deeper than this, because once you buy an argument that letting yourself have this space is what will be best for doing good overall, you are already seriously risking undermining the psychological benefits.

Whenever you do something for yourself, there is a voice in the back of your head asking if you are really so morally weak that this particular thing is necessary. Even if you overcome this voice, there is a worse voice that instrumentalizes the things you do for yourself. Buying ice cream? This is now your "anti-burnout ice cream". Worse, have a kid (if, like in Nielsen's example, you think this isn't part of your best set of altruistic decisions), and this is now your "anti-burnout kid".

It's very hard to get around this one. Nielsen's preferred solution would clearly be that people just not buy this very demanding theory of morality at all, because he thinks it is wrong. That said, he doesn't really argue for this, and for those of us who actually do think the demanding ideal of morality happens to be correct, it isn't an open avenue.

The best solution, as far as I can tell, is to distance your intuitive worldview from this standard of morality as much as possible. Make it a small part of your mind, one you internalize largely on an academic level and maybe take out on rare occasions for inspiration, but insist on not viewing your day-to-day life through it. Again, though, the trickiness of this is, I think, a real part of why some of this problem persists, and I think Nielsen nails this part.

Michael Nielsen's "Notes on effective altruism"

These are interesting critiques and I look forward to reading the whole thing, but I worry that the nicer tone of this one will lead people to give it more credit than critiques that were at least as substantively right, but much more harshly phrased.

The point about ideologies being a minefield, with the Nazis as an example, particularly stands out to me. I pattern-match this to the parts of harsher critiques that go something like "look at where your precious ideology leads when taken to an extreme; this place is terrible!" Generally, the substantive mistake these make is casting EA as ideologically purist, ignoring the centrality of projects like moral uncertainty and worldview diversification, as well as EAs' limited willingness to bite bullets whose background logic they largely endorse in principle (see Pascal's Mugging and Ajeya Cotra's train to crazy town).

By not telling us outright what terrible things we believe, but instead implying that we are at risk of believing terrible things, this piece is less unflattering, but it is on shakier ground. It involves the same mistake about EA's ideological purism, but on top of this it has to defend a further, higher-level claim rather than pointing to concrete implications.

Was the problem with the Nazis really that they were too ideologically pure? I find it very doubtful. The philosophers of the time who were attracted to them, like Heidegger, were generally weird humanistic philosophers with little interest in the kinds of purism that come from analytic ethics. Meanwhile, most philosophers closer to this type of ideological purity (Russell, Carnap) despised the Nazis from the beginning. The background philosophy itself largely drew from misreadings of people like Nietzsche and Hegel, popular antisemitic sentiment, and plain old historical conspiracy theories. Even at the time, intellectual critiques of the Nazis often looked more like "they were mundane and looking for meaning from charismatic, powerful men" (Arendt) or "they aestheticized politics" (Benjamin) rather than "they took some particular coherent vision of doing good too far".

The truth is that the lesson of history isn't really "moral atrocity is caused by ideological consistency". Occasionally atrocities are initiated by ideologically consistent people, but they have also been carried out casually by people who were quite normal for their time, or by crazed ideologues who didn't have a very clear, coherent vision at all. The problem with the Nazis, quite simply, is that they were very, very badly wrong. We can't avoid making the mistakes they did from the inside by pattern-matching aspects of our logic onto them in ways that aren't historically vindicated; we have to avoid moral atrocity by finding more reliable ways of not winding up being very wrong.

Arguments for Why Preventing Human Extinction is Wrong

To be clear, I wasn't trying to imply that Tomasik supports extinction, just that, if I have to think about the strongest case against preventing it, it's the sort of Tomasik on my shoulder who speaks loudest.

Arguments for Why Preventing Human Extinction is Wrong

"Tomasikian" refers to the Effective Altruist blogger Brian Tomasik, who is known for pioneering an extremely bullet-biting version of "suffering-focused ethics" (roughly negative utilitarianism, though from my readings, he may also mix some preference satisfactionism and prioritarianism in as well). The suffering empathy exercises I'm referring to aren't really a specific thing, but more sort of the style he uses when writing about suffering to try to get people to understand his perspective on it. Usually this involves describing real world cases of extreme suffering, and trying to get people to see the desperation one would feel if they were actually experiencing it, and to take that seriously, and inadequacy of academic dismissals in the face of it. A sort of representative quote:

"Most people ignore worries about medical pain because it's far away. Several of my friends think I'm weird to be so parochial about reducing suffering and not take a more far-sighted view of my idealized moral values. They tend to shrug off pain, saying it's not so bad. They think it's extremely peculiar that I don't want to be open to changing my moral perspective and coming to realize that suffering isn't so important and that other things matter comparably. Perhaps others don't understand what it's like to be me. Morality is not an abstract, intellectual game, where I pick a viewpoint that seems comely and elegant to my sensibilities. Morality for me is about crying out at the horrors of the universe and pleading for them to stop. Sure, I enjoy intellectual debates, interesting ideas, and harmonious resolutions of conflicting intuitions, and I realize that if you're serious about reducing suffering, you do need to get into a lot of deep, recondite topics. But fundamentally it has to come back to suffering or else it's just brain masturbation while others are being tortured."

The relevant post:

https://reducing-suffering.org/the-horror-of-suffering/

Arguments for Why Preventing Human Extinction is Wrong

The big one I can think of, which is related to some of the ones you mention, is leximin or a strong enough prioritarianism. The worst-off beings human persistence would cause to exist are likely to live net-negative lives, possibly very strongly net-negative lives if we persist long enough, and on theories like this, benefits to these beings (like preventing their lives) count for vastly more than benefits to better-off beings (like giving those beings good lives rather than no lives). I don't endorse this view myself, but I think it is the argument that most appeals to me in the moods when I am most sympathetic to extinction. When I inhabit a sort of Tomasikian suffering empathy exercise, and imagine the desperate cries of the very worst-off being from the future calling back to me, I can be tempted to decide that rescuing this being in some way is most of what should matter to me.

The Many Faces of Effective Altruism

Thanks for the comment. I agree with most of this, and think this is one of the major possible costs of labels like these, but I worry that some of these costs get more attention than the subtler costs of failing to label groups like this. Take the label of "Effective Altruism" itself, for example: the label does mean that people in the movement might have a tendency to rest easy, knowing that their conformity to certain dogmas is shared by "their people", but not using the label would mean sort of willfully ignoring something big that was actually true about one's social identity/biases/insularity to begin with, and would hamper certain types of introspection and social criticism.

Even today there are pretty common write-ups by people looking to dissolve some aspect of "Effective Altruism" as a group identifier, as opposed to a research project or something similar. This is well-meaning, but in my opinion it has led to a pretty counterproductive movement-wide motte-and-bailey that often influences discussions. When selling the movement to others, or defending it from criticism, Effective Altruism is presented as a set of uncontroversial axioms pretty much everyone should agree with; but in practice, the way Effective Altruism is discussed and works internally does involve implicit or explicit recognition that the group is centered around a particular network of people and organizations, with their own internal norms, references, and Overton window.

I think a similar cost, if to a lesser extent, comes from failing to label the real cliques and the distinct styles of reasoning and approaches to doing good that to some extent polarize the movement. This is particularly the case for some of the factors I discuss in the post, like the fact that different parts of the movement feel vastly more or less welcoming to some people than others, or that large swaths of the movement may feel like a version of "Effective Altruism" you can identify with while others don't, which makes using the label of Effective Altruism itself less useful. For people who have been involved in different parts of the movement and are comfortable moving between the different subcultures (I would count myself here, for instance), this tension may be harder to relate to, but it is a story I often hear, especially about people's first exposure to the movement. I think this is enough to make using these labels useful, at least within certain contexts.

The Many Faces of Effective Altruism

Agreed; in retrospect it is pretty obvious that there is no good way to attach the prefix "a" to a word that starts with an "a" and have anyone intuitively get what you mean.
