> In my post I said there's an apparent symmetry between M and D, so I'm not arguing for choosing D but instead that we are confused and should be uncertain.
You're right, I misrepresented your point here. This doesn't affect the broader idea that the apparent symmetry only exists if you have strange ethical intuitions, which are left undefended.
> Also, historically, people imagined all kinds of different utopias, based on their religions or ideologies. So I'm not sure we can derive strong conclusions about human values based on these imaginations anyway.
I stand by my claim that 'loving non-kin' is a stable and fundamental human value, that over history almost all humans would include it (at least directionally) in their personal utopias, and that it only grows stronger upon reflection. Of course there's variation, but when ~all of religion and literature has been saying one thing, you can look past the outliers.
> Considering your own argument, I don't see a reason to care how altruistic other people are (including people in imagined utopias), except as a means to an end. That is, if being more altruistic helps people avoid prisoner's dilemmas and tragedies of the commons, or increases overall welfare in other ways, then I'm all for that. But ultimately my own altruism values people's welfare, not their values. So if they were not very altruistic, but say there was a superintelligent AI in the utopia that made it so that they had the same quality of life, then why should I care either way? Why should or do others care, if they do? (If it's just raw unexplained intuitions, then I'm not sure we should put much stock in them.)
I'm not explaining myself well. What I'm trying to say is that the symmetry between dividing and multiplying is superficial - both are consistent, but one also fulfills a deep human value (which I'm trying to argue for with the utopia example), whereas the other ethically 'allows' the circumvention of this value. I'm not saying that this value of loving strangers, or being altruistic in and of itself, is fundamental to the project of doing good - on that we agree.
I think most people would choose S because brain modification is weird and scary. This is an intuition that's irrelevant to the purpose of the hypothetical but is strong enough to make the whole scenario less helpful. I'm very confident that ~0/100 people would choose D, which is what you're arguing for! Furthermore, if you added a weaker M that changed your emotions so that you simply care much more about random strangers than you currently do, I think many (if not most) people - especially among EAs - would choose that. Doubly so for idealized versions of themselves, the people they want to be making the choice. So again, you are arguing for quite strange intuitions, and I think the brain modification scenario reinforces rather than undermines that claim.
To your second point, we're lucky that EA cause areas are not prisoner's dilemmas! Everyday acts of altruism aren't prisoner's dilemmas either. By arguing that most people's imagined inhabitants of utopia 'shut up and multiply' rather than divide, I'm just saying that these utopians care *a lot* about strangers, and therefore that caring about strangers is something that regular people hold dear as an important human value, even though they often fail at it. Introducing the dynamics of an adversarial game to this broad truth is a disanalogy.
When I say “be consistent and care about individual strangers”, I mean shut up and multiply. There’s no contradiction. It’s caring about individual strangers taken to the extreme, where you care about everyone equally. If you care about logical consistency, that works just as well as shut up and divide.
“Shut Up and Divide” boils down to “actually, you maybe shouldn’t care about individual strangers, because that’s more logically consistent (unless you multiply, in which case it’s equally consistent)”. But caring is a higher and more human virtue than being consistent, especially since there are two options here: be consistent and care about individual strangers, or just be consistent. You only get symmetry if the adoption of ‘can now ethically ignore suffering of strangers’ as a moral principle is considered a win for the divide side. That’s the argument that would really shake the foundations of EA.
> Why should we derive our values from our native emotional responses to seeing individual suffering, and not from the equally human paucity of response at seeing large portions of humanity suffer in aggregate? Or should we just keep our scope insensitivity, like our boredom?
So actually we have three choices: divide, multiply, or be scope insensitive. In an ideal world populated by good and rational people, they’d probably still care relatively more about their families, but no one would be indifferent to the suffering of the far away. Loving and empathizing with strangers is widely agreed to be a vital and beautiful part of what makes us human, despite our imperfections. The fact that we have this particular cognitive bias of scope insensitivity may be fundamentally human in some sense, but it’s not really part of what makes us human. Nobody’s calling scope-sensitive people sociopaths. Nobody’s personal idea of utopia elevates this principle of scope insensitivity to the level of ‘love others’.
Likewise, very few would prefer/imagine this idealized world as filled with ‘divide’ people rather than ‘multiply’ people. Because:
> The weird thing is that both of these emotional self-modification strategies seem to have worked, at least to a great extent. Eliezer has devoted his life to improving the lot of humanity, and I've managed to pass up news and discussions about Amanda Knox without a second thought.
Most people’s imagined inhabitants of utopia fit the former profile much more closely. So I think that “Shut Up and Divide” only challenges the Drowning Child argument insofar as you have very strange ethical intuitions, not shared by many. To really attack this foundation you’d have to argue for why these common intuitions about good and bad are wrong, not just that they lead to inconsistencies when held by normal humans (as every set of ethical principles does).
I’m using ‘friend group’ to mean something like a relatively small community with tight social ties and a large and diverse set of semi-reliable identifiers.
EA attracts people who want to do large amounts of good. Weighted by engagement, the EA community is made up of people for whom this initial interest in EA was reinforced socially or financially, often both. Many EAs believe that AI alignment is an extremely difficult technical problem, on the scale of questions motivating major research programs in math and physics. My claim is that such a problem won’t be directly solved by this relatively tiny subset of technically-inclined do-gooders, nice people who like meet-ups and have suspiciously convergent interests outside of AI stuff.
EA is a friend group, algebraic geometers are not. Importantly, even if you don’t believe alignment is that difficult, we’d still solve it more quickly without tacking on this whole social framework. It worries me that alignment research isn’t catching on in mainstream academia (like climate change did); this seems to indicate that some factor in the post above (like groupthink) is preventing EAs from either constructing a widely compelling argument for AI safety, or making it compelling for outsiders who aren’t into the whole EA thing.
Basically, we shouldn’t tie causes to the EA community - which is a great community - unless we have a really good reason.
This type of piece is what the Criticism contest was designed for, and I hope it gets a lot of attention and discussion. EA should have the courage of its convictions; global poverty and AI alignment aren't going to be solved by a friend group, let alone the same friend group.
I think the wording of your options is a bit misleading. It's valuable to publish your criticism of any topic that's taking up non-trivial EA resources, regardless of its true worth as a topic - otherwise we might be wasting bednet money. The important question is whether or not infinite ethics fits this category (I'm unsure, but my best guess is no right now and maybe yes in a few years). Whether or not something is a "serious problem" or "deserves criticism", at least for me, seems to point to a substantively different claim. More like, "I agree/disagree with the people who think infinite ethics is a valuable research field". That's not the relevant question.
That makes sense! I was interpreting your post and comment as a bit more categorical than was probably intended. Looking forward to your post.
I agree that your (excellent) analysis shows that the welfare increase is dominated by lifting the bottom half of the income distribution. I agree that this welfare effect is what we want. Pritchett's argument is linked to yours because he claims the only (and therefore best) way to cause this effect is national development. He writes: "all plausible, general, measures of the basics of human material wellbeing [including headcount poverty] will have a strong, non-linear, empirically sufficient and empirically necessary relationship to GDPPC." (Here non-linear refers to a stronger elasticity of these wellbeing metrics at lower levels of GDPPC than at higher ones.)
Of course, as you point out, national development can't really be the only thing that decreases poverty - redistribution would too. But every data point we have shows that rich countries got rich through development, not redistribution. And every data point we have on rich countries shows that the bottom half of their income distributions is doing very well relative to LMICs. So yes, redistribution would produce great welfare gains for a while, but it's not going to turn a $5,000 GDPPC nation into a $50,000 one. And the welfare gains from that nation's decreased poverty headcount are going to dwarf the redistribution-caused welfare gains, even given your adjustments. (This isn't an argument against redistribution as an EA cause area, which could still be great; it's an argument that redistribution's efficacy isn't really a point against the greater importance of the search for growth.)
Regarding correlation versus causation, I'd be more sympathetic to your point if this were an ordinary, middling correlation. Pritchett: "The simple correlation between the actual $3.20/day or $5.50/day headcount poverty rate and headcount poverty as predicted using only the median of the country distribution is .994 and for $1.90 it is .991. These are about as high a correlation as real world data can produce." It's very implausible that this incredibly strong relationship would break with some new intervention that increases median consumption. Not a single policy in the history of the world that changed a country's median consumption has broken it.
To your final point that the cost of increasing median consumption might be way too high (relative to redistribution) - first of all, as Hillebrandt/Halstead pointed out, evaluating that claim should be a much larger priority in EA than it is right now. But development economics seems to have worked in the past, with just the expenses associated with a normal academic field! I'm sorry but I'm going to quote Pritchett again:
> There are a number of countries (e.g. China, India, Vietnam, Indonesia) that said (1) “Based on our reading of the existing evidence (including from economists) we are going to shift from policy stance X to policy stance Y in order to accelerate growth”, (2) these countries did in fact shift from policy stance X to Y and (3) the countries did in fact have a large (to massive) accelerations of growth relative to [business as usual] as measured by standard methods (Pritchett et al 2016).
>
> One had to be particularly stubborn and clever to make the argument: “Politicians changed policies to promote growth based on evidence and then there was growth but (a) this was just dumb luck, the policy shift did not actually cause the shift in growth something else did or (b) (more subtly) the adopted policies did work but that was just dumb luck as there was not enough evidence the policies would work for this to count as a win for ‘evidence’ changing policy.”
TL;DR: Increasing productivity still beats redistribution in the long term, given reasonable assumptions about costs.
Great post! Quick note: clicking on the carets takes me to that same section rather than the longer intervention descriptions under 'List of prioritized interventions'.