For me, basically every other question around effective altruism is less interesting than this basic one of moral obligation. It’s fun to debate whether some people/institutions should gain or lose status, and I participate in those debates myself, but they seem less important than these basic questions of how we should live and what our ethics should be.

Prompted by this quote from Scott Alexander's recent Effective Altruism As A Tower Of Assumptions, I'm linking a couple of my old LessWrong posts that speak to "these basic questions". They were written and posted before or shortly after EA became a movement, so perhaps many in EA have never read them or heard of these arguments. (I have not seen these arguments reinvented/rediscovered by others, or effectively countered/refuted by anyone, but I'm largely ignorant of the vast academic philosophy literature, in which the same issues may have been discussed.)

The first post, Shut Up and Divide?, was written in response to Eliezer Yudkowsky's slogan of "shut up and multiply", but I think it also works as a counter to Peter Singer's Drowning Child argument, which many may see as foundational to EA. (For example, Scott wrote in the linked post, "To me, the core of effective altruism is the Drowning Child scenario.")
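
Roughly, and using symbols of my own that appear in neither post: let c be how much you care about one identifiable stranger, C how much you care about humanity in aggregate, and N the number of strangers. Emotionally, C is far smaller than N times c, and the two slogans resolve that inconsistency in opposite directions. A minimal sketch of this reading, nothing more:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Illustrative symbols only (not taken from either post):
%   c = felt concern for one identifiable stranger
%   C = felt concern for humanity in aggregate
%   N = number of strangers (emotionally, C << N*c)
\[
\underbrace{C \;\to\; N\,c}_{\text{``shut up and multiply'': scale aggregate concern up}}
\qquad \text{vs.} \qquad
\underbrace{c \;\to\; C/N}_{\text{``shut up and divide'': scale per-person concern down}}
\]
% Either adjustment removes the inconsistency; the question the post raises
% is which direction (if either) deserves our trust.
\end{document}
```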

The second post, Is the potential astronomical waste in our universe too small to care about?, describes a consideration through which someone who starts out with relatively high credence in utilitarianism (or utilitarian-ish values) may nevertheless find it unwise to devote many resources to utilitarian(-like) pursuits in the universe we find ourselves in.

To be clear, I continue to have a lot of moral uncertainty and do not consider these to be knockdown arguments against EA or against caring about astronomical waste. There are probably counterarguments to them I'm not aware of (either in the existing literature or in platonic argument space), and we are probably still ignorant of many other relevant considerations. (For one such consideration, see my Beyond Astronomical Waste.) I'm drawing attention to them because many EAs may have too much trust in the foundations of EA in part because they're not aware of these arguments.

Comments

In response to “Shut Up and Divide”:

I think you should be in favor of caring more (shut up and multiply) over caring less (shut up and divide) because your intuitive sense of caring evolved when your sphere of influence was small. A tribe might have at most a few hundred people, which happens to be ~where your naive intuition stops scaling linearly.

So it seems like your default behavior should be extended to your new circumstances, rather than your new circumstances being reduced to your default state.

(Although I think SUAD might be useful for not getting trapped in caring too much about unimportant news, for example.)

(I’m writing this on my phone, so please forgive typos more than you otherwise would. For the same reason, this is fairly short; please steelman it with additional details as necessary to convince yourself.)

I think you should be in favor of caring more (shut up and multiply) over caring less (shut up and divide) because your intuitive sense of caring evolved when your sphere of influence was small.

Your argument proves too much:

  • My sex drive evolved before condoms existed. I should extend it to my new circumstances by reproducing as much as possible.
  • My subconscious bias against those who don't look like me evolved before there was a globalized economy with opportunities for positive-sum trade. Therefore, I should generalize to my new circumstances by becoming a neo-Nazi.
  • My love of sweet foods evolved before mechanized agriculture. Therefore, I should extend my default behavior to my modern circumstances by drinking as much high-fructose corn syrup as I can.

I’m advocating for updating in the general direction of trusting your small-scale intuition when you notice a conflict between your large scale intuition and your small scale intuition.

Specifically:

  • Have as much sex as you want (with a consenting adult, etc). Have as many children as you can reasonably care for. But even if you disagree with that, I don’t think this is a good counterexample. It’s not a conflict between small scale beliefs and large scale beliefs. 
  • This is new information, not a small-large conflict. 
  • Same as above. 

As Wei Dai mentioned, tribes in the EEA weren't particularly fond of other tribes. Why should people's ingroup-compassion scale up, but their outgroup-contempt shouldn't? Your argument supports both conclusions.

This is a good point, I guess.

I don't think I understand what your argument is.

your intuitive sense of caring evolved when your sphere of influence was small

Even in our EEA we had influence beyond the immediate tribe, e.g., into neighboring tribes, which we evolved to care much less about, hence inter-tribal raids, warfare, etc.

So it seems like your default behavior should be extended to your new circumstances, rather than your new circumstances being reduced to your default state.

I'm just not sure what you mean here. Can you explain with some other examples? (Are Daniel Kirmani's extrapolations of your argument correct?)

From my (new since you asked this) reply to Kirmani’s comment:

I’m advocating for updating in the general direction of trusting your small-scale intuition when you notice a conflict between your large scale intuition and your small scale intuition.

Honestly, it’s a pretty specific argument/recommendation, so I’m having trouble thinking of another example that adds something. Maybe the difference between how I feel about my dog vs farmed animals, or near vs far people. If you’d like, or if it would help you or someone else, I can spend some more time thinking of one.

(cross-posted)

Re: Shut Up and Divide. I haven't read the other comments here but…

For me, effective-altruism-like values are mostly second-order, in the sense that much of my revealed behavior shows that, a lot of the time, I don't want to help strangers, animals, future people, etc. But I think I "want to want to" help strangers, and sometimes the more goal-directed, rational side of my brain wins out and I do the thing consistent with my second-order desires: something to help strangers at personal sacrifice to myself (though I do this less than e.g. Will MacAskill). But I don't really detect in myself a symmetrical second-order want to NOT want to help strangers. So that's one thing that "shut up and multiply" has over "shut up and divide," at least for me.

That said, I realize now that I'm often guilty of ignoring this second-orderness when e.g. making the case for effective altruism. I will often appeal to my interlocutor's occasional desire to help strangers and suggest they generalize it, but I don't symmetrically appeal to their clearer and more common lack of interest in helping strangers and suggest they generalize THAT. To be more honest and accurate while still making the case for EA, I should be appealing to their second-order desires, though of course that's a more complicated conversation.

Re: 'Shut Up and Divide', you might be interested in my post on 'leveling up' vs 'leveling down' versions of impartiality, which includes some principled reasons to think the leveling-up approach is better justified:

The better you get to know someone, the more you tend to (i) care about them, and (ii) appreciate the reasons to wish them well. Moreover, the reasons to wish them well don’t seem contingent on you or your relationship to them—what you discover is instead that there are intrinsic features of the other person that make them awesome and worth caring about. Those reasons predate your awareness of them. So the best explanation of our initial indifference to strangers is not that there’s truly no (or little) reason to care about them (until, perhaps, we finally get to know them). Rather, the better explanation is simply that we don’t see the reasons (sufficiently clearly), and so can’t be emotionally gripped or moved by them, until we get to know the person better. But the reasons truly were there all along.

It seems empirically false and theoretically unlikely (cf. kin selection) that our emotions work this way. I mean, if it were true, how would you explain things like dads who care more about their own kids that they've never seen than about strangers' kids, (many) married couples falling out of love and caring less about each other over time, or the Cinderella effect?

So I find it very unlikely that we can "level up" all the way to impartiality this way, but maybe there are other versions of your argument that could work (implying not utilitarianism/impartiality, but just that we should care a lot more about humanity in aggregate than many of us currently do). Before going down that route though, I'd like to better understand what you're saying. What do you mean by the "intrinsic features" of the other person that make them awesome and worth caring about? What kind of features are you talking about?

One tendency can always be counterbalanced by another in particular cases; I'm not trying to give the full story of "how emotions work".  I'm just talking about the undeniable datum that we do, as a general rule, care more about those we know than we do about total strangers.

(And I should stress that I don't think we can necessarily 'level-up' our emotional responses; they may be biased and limited in all kinds of ways.  I'm rather appealing to a reasoned generalization from our normative appreciation of those we know best. Much as Nagel argues that we recognize agent-neutral reasons to relieve our own pain--reasons that ideally ought to speak to anyone, even those who aren't themselves feeling the pain--so I think we implicitly recognize agent-neutral reasons to care about our loved ones. And so we can generalize to appreciate that like reasons are likely to be found in others' pains, and others' loved ones, too.) 

I don't have a strong view on which intrinsic features do the work.  Many philosophers (see, e.g., David Velleman in 'Love as a Moral Emotion') argue that bare personhood suffices for this role. But if you give a more specific answer to the question of "What makes this person awesome and worth caring about?" (when considering one of your best friends, say), that's fine too, so long as the answer isn't explicitly relational (e.g. "because they're nice to me!"). I'm open to the idea that lots of people might be awesome and worth caring about for extremely varied reasons--for possessing any of the varied traits you regard as virtues, perhaps (e.g. one may be funny, irreverent, determined, altruistic, caring, thought-provoking, brave, or...).

I’m just talking about the undeniable datum that we do, as a general rule, care more about those we know than we do about total strangers.

There are lots of X and Y such that, as a general rule, we care more about someone in X than we do someone in Y. Why focus on X="those we know" and Y="total strangers" when this is actually very weak compared to other Xs and Ys, and explains only a tiny fraction of the variation in how much we care about different members of humanity?

(By "very weak" I mean suppose someone you know was drowning in a pond, and a total stranger was drowning in another pond that's slightly closer to you, for what fraction of the people you know, including e.g. people you know from work, would you instinctively run to save them over the total stranger? (And assume you won't see either of them again afterwards, so you don't run to save the person you know just to avoid potential subsequent social awkwardness.) Compare this with other X and Y.)

If I think about the broader variation in "how much I care", it seems it's almost all relational (e.g., relatives, people who were helpful to me in the past, strangers I happen to come across vs distant strangers). And if I ask "why?", the answers I get are like "my emotions were genetically programmed to work that way", "because of kin selection", and "it was a good way to gain friends/allies in the EEA". Intrinsic / non-relational features (either the features themselves, or how much I know or appreciate the features) just don't seem to enter that much into the equation.

(Maybe you could argue that upon reflection I'd want to self-modify away all that relational stuff and just value people based on their intrinsic features. Is that what you'd argue, and if so what's the actual argument? It seems like you sort of hint in this direction in your middle parenthetical paragraph, but I'm not sure.)

for what fraction of the people you know, including e.g. people you know from work, would you instinctively run to save them over the total stranger?

Uh, maybe 90 - 99%?  (With more on the higher end for people I actually know in some meaningful way, as opposed to merely recognizing their face or having chatted once or twice, which is not at all the same as knowing them as a person.)  Maybe we're just psychologically very different!  I'm totally baffled by your response here.

Yeah, seems like we've surfaced some psychological difference here. Interesting.

“Shut Up and Divide” boils down to “actually, you maybe shouldn’t care about individual strangers, because that’s more logically consistent (unless you multiply, in which case it’s equally consistent)”. But caring is a higher and more human virtue than being consistent, especially since there are two options here: be consistent and care about individual strangers, or just be consistent. You only get symmetry if adopting ‘you can now ethically ignore the suffering of strangers’ as a moral principle is considered a win for the divide side. That’s the argument that would really shake the foundations of EA.

Why should we derive our values from our native emotional responses to seeing individual suffering, and not from the equally human paucity of response at seeing large portions of humanity suffer in aggregate? Or should we just keep our scope insensitivity, like our boredom?

So actually we have three choices: divide, multiply, or be scope insensitive. In an ideal world populated by good and rational people, they’d probably still care relatively more about their families, but no one would be indifferent to the suffering of the far away. Loving and empathizing with strangers is widely agreed to be a vital and beautiful part of what makes us human, despite our imperfections. The fact that we have this particular cognitive bias of scope insensitivity may be fundamentally human in some sense, but it’s not really part of what makes us human. Nobody’s calling scope-sensitive people sociopaths. Nobody’s personal idea of utopia elevates this principle of scope insensitivity to the level of ‘love others’.

Likewise, very few would prefer/imagine this idealized world as filled with ‘divide’ people rather than ‘multiply’ people. Because:

The weird thing is that both of these emotional self-modification strategies seem to have worked, at least to a great extent. Eliezer has devoted his life to improving the lot of humanity, and I've managed to pass up news and discussions about Amanda Knox without a second thought.

Most people’s imagined inhabitants of utopia fit the former profile much more closely. So I think that “Shut Up and Divide” only challenges the Drowning Child argument insofar as you have very strange ethical intuitions, not shared by many. To really attack this foundation you’d have to argue for why these common intuitions about good and bad are wrong, not just that they’re prone to inconsistencies when held by normal humans (which is true of every set of ethical principles).

So I think that “Shut Up and Divide” only challenges the Drowning Child argument insofar as you have very strange ethical intuitions, not shared by many.

Suppose I invented a brain modification machine and asked 100 random people to choose between:

  • M(ultiply): change your emotions so that you care much more in aggregate about humanity than your friends, family, and self
  • D(ivide): change your emotions so that you care much less about random strangers that you happen to come across than you currently do
  • S(cope insensitive): don't change anything.

Would most of them "intuitively" really choose M?

Most people’s imagined inhabitants of utopia fit the former profile much more closely.

From this, it seems that you're approaching the question differently, analogous to asking someone if they would modify everyone's brain so that everyone cares much more in aggregate about humanity (thereby establishing this utopia). But this is like the difference between unilaterally playing Cooperate in the Prisoner's Dilemma, versus somehow forcing both players to play Cooperate. Asking EAs or potential EAs to care much more about humanity than they used to, not conditional on everyone else doing the same, on the basis of your argument, is like asking someone to unilaterally play Cooperate while arguing, "Wouldn't you like to live in a utopia where everyone plays Cooperate?"

I think most people would choose S because brain modification is weird and scary. This is an intuition that's irrelevant to the purpose of the hypothetical, but it's strong enough to make the whole scenario less helpful. I'm very confident that ~0/100 people would choose D, which is what you're arguing for! Furthermore, if you added a weaker M that changed your emotions so that you simply care much more about random strangers than you currently do, I think many (if not most) people, especially among EAs, would choose that. Doubly so for idealized versions of themselves, the people they want to be making the choice. So again, you are arguing for quite strange intuitions, and I think the brain modification scenario reinforces rather than undermines that claim.

To your second point, we're lucky that EA cause areas are not prisoner's dilemmas! Everyday acts of altruism aren't prisoner's dilemmas either. By arguing that most people's imagined inhabitants of utopia 'shut up and multiply' rather than divide, I'm just saying that these utopians care *a lot* about strangers, and therefore that caring about strangers is something that regular people hold dear as an important human value, even though they often fail at it. Introducing the dynamics of an adversarial game to this broad truth is a disanalogy.

I’m very confident that ~0/100 people would choose D, which is what you’re arguing for!

In my post I said there's an apparent symmetry between M and D, so I'm not arguing for choosing D but instead that we are confused and should be uncertain.

By arguing that most people’s imagined inhabitants of utopia ‘shut up and multiply’ rather than divide, I’m just saying that these utopians care a lot about strangers, and therefore that caring about strangers is something that regular people hold dear as an important human value, even though they often fail at it.

Ok, I was confused because I wasn't expecting the way you're using ‘shut up and multiply’. At this point I think you have an argument for caring a lot about strangers that is different from Peter Singer's. Considering your own argument, I don't see a reason to care how altruistic other people are (including people in imagined utopias), except as a means to an end. That is, if being more altruistic helps people avoid prisoners' dilemmas and tragedies of the commons, or increases overall welfare in other ways, then I'm all for that, but ultimately my own altruism values people's welfare, not their values. So if they were not very altruistic, but, say, there was a superintelligent AI in the utopia that made it so that they had the same quality of life, then why should I care either way? Why should or do others care, if they do? (If it's just raw unexplained intuitions, then I'm not sure we should put much stock in them.)

Also, historically, people imagined all kinds of different utopias, based on their religions or ideologies. So I'm not sure we can derive strong conclusions about human values based on these imaginations anyway.

In my post I said there's an apparent symmetry between M and D, so I'm not arguing for choosing D but instead that we are confused and should be uncertain.

You're right, I misrepresented your point here. This doesn't affect the broader idea that the apparent symmetry only exists if you have strange ethical intuitions, which are left undefended.

Also, historically, people imagined all kinds of different utopias, based on their religions or ideologies. So I'm not sure we can derive strong conclusions about human values based on these imaginations anyway.

I stand by my claim that 'loving non-kin' is a stable and fundamental human value, that over history almost all humans would include it (at least directionally) in their personal utopias, and that it only grows stronger upon reflection. Of course there's variation, but when ~all of religion and literature has been saying one thing, you can look past the outliers.

Considering your own argument, I don't see a reason to care how altruistic other people are (including people in imagined utopias), except as a means to an end. That is, if being more altruistic helps people avoid prisoners' dilemmas and tragedies of the commons, or increases overall welfare in other ways, then I'm all for that, but ultimately my own altruism values people's welfare, not their values. So if they were not very altruistic, but, say, there was a superintelligent AI in the utopia that made it so that they had the same quality of life, then why should I care either way? Why should or do others care, if they do? (If it's just raw unexplained intuitions, then I'm not sure we should put much stock in them.)

I'm not explaining myself well. What I'm trying to say is that the symmetry between dividing and multiplying is superficial: both are consistent, but one also fulfills a deep human value (which I'm trying to argue for with the utopia example), whereas the other ethically 'allows' the circumvention of this value. I'm not saying that this value of loving strangers, or being altruistic in and of itself, is fundamental to the project of doing good; on that we agree.

“Shut Up and Divide” boils down to “actually, you maybe shouldn’t care about individual strangers, because that’s more logically consistent (unless you multiply, in which case it’s equally consistent)”. But caring is a higher and more human virtue than being consistent, especially since there are two options here: be consistent and care about individual strangers, or just be consistent.

This reasoning seems confused. Caring more about certain individuals than others is a totally valid utility function that you can have. You can't especially care about individual people while simultaneously caring about everyone equally. You just can't. "Logically consistent" means that you don't claim to do both of these mutually exclusive things at once.

When I say “be consistent and care about individual strangers”, I mean shut up and multiply. There’s no contradiction. It’s caring about individual strangers taken to the extreme where you care about everyone equally. If you care about logical consistency, that works as well as shut up and divide.
