Meta:
- I'm re-posting this from my Shortform (with minor edits) because someone indicated it might be useful to apply tags to this post.
- This was originally written as a quick summary of my current (potentially flawed) understanding in an email conversation.
- I'm not that familiar with the human progress/progress studies communities and would be grateful if people pointed out where my impression of them seems off, as well as for takes on whether I seem correct about what the key points of agreement and disagreement are.
- I think some important omissions from my summary might include:
  - Potential differences in underlying ethical views
  - More detail on why at least some 'progress studies' proponents have significantly lower estimates for existential risk this century, and potential empirical differences regarding how best to mitigate existential risk.
- Another caveat is that both the progress studies and the longtermist EA communities are sufficiently large that there will be significant diversity of views within these communities - which my summary sweeps under the rug.
[See also this reply from Tony from the 'progress studies' community.]
Here's a quick summary of my understanding of the 'longtermist EA' and 'progress studies' perspectives, in a somewhat cartoonish way to gesture at points of agreement and disagreement.
EA and progress studies mostly agree about the past. In particular, they agree that the Industrial Revolution was a really big deal for human well-being, and that this is often overlooked/undervalued. E.g., here's a blog post by someone somewhat influential in EA:
https://lukemuehlhauser.com/industrial-revolution/
Looking to the future, the progress studies community is most worried about the Great Stagnation. They are nervous that science seems to be slowing down, that ideas are getting harder to find, and that economic growth may soon be over. Industrial-Revolution-level progress was by far the best thing that ever happened to humanity, but we're at risk of losing it. That seems really bad. We need a new science of progress to understand how to keep it going. This will probably require a number of technological and institutional innovations eventually, since our current academic and economic systems are what led us into the slowdown.
If we were making a list of the most globally consequential developments of the past, EAs would, in addition to the Industrial Revolution, point to the Manhattan Project and the hydrogen bomb: the point in time when humanity first developed the means to destroy itself. (They might also think of factory farming as an example of how progress might be great for some but horrible for others, at least on some moral views.) So while they agree that the world has been getting a lot better thanks to progress, they're also concerned that progress exposes us to new nuclear-bomb-style risks. Regarding the future, they're most worried about existential risk - the prospect of permanently forfeiting our potential for a future that's much better than the status quo. Permanent stagnation would be an existential risk, but EAs tend to be even more worried about catastrophes from emerging technologies such as misaligned artificial intelligence or engineered pandemics. They might also be worried about a potential war between the US and China, or about extreme climate change. So in a sense they aren't as worried about progress stopping as they are about progress being mismanaged and having catastrophic unintended consequences. They therefore aim for 'differential progress' - accelerating those kinds of technological or societal change that would safeguard us against these catastrophic risks, and slowing down whatever would expose us to greater risk. Concretely, they are into things like "AI safety" or "biosecurity" - e.g., making machine learning systems more transparent so we could tell if they were trying to deceive their users, or implementing better norms around the publication of dual-use bio research.
The single best book on this EA perspective is probably The Precipice by my FHI colleague Toby Ord.
Overall, EA and the progress studies perspective agree on a lot - they're probably closer to each other than either would be to any other popular 'worldview'. Still, EAs probably tend to think that human progress proponents are too indiscriminately optimistic about further progress, and too generically focused on keeping progress going. (Both because it might be risky and because EAs probably tend to be more "optimistic" that progress will accelerate anyway, most notably due to advances in AI.) Conversely, human progress proponents tend to think that EA is insufficiently focused on ensuring a future of significant economic growth, and that the risks imagined by EAs either aren't real or can't be much mitigated except by encouraging innovation in general.
I think I have a candidate for a "worldview B" that some EAs may find compelling. (Edit: Actually, the thing I'm proposing also allocates some weight to trillions of years, but it differs from your "worldview A" in that nearer-term considerations don't get swamped!) It requires a fair bit of explaining, but IMO that's because it's generally hard to explain how one framework differs from another when people are used to thinking only within a single framework. I strongly believe that if moral philosophy had always operated within my framework, the following points would be way easier to explain.
Anyway, I think standard moral-philosophical discourse is a bit dumb in that it includes categories without clear meaning. For instance, the standard discourse talks about notions like, "What's good from a universal point of view," axiology/theory of value, irreducibly normative facts, etc.
The above notions fail at reference – they don't pick out any unambiguously specified features of reality or unambiguously specified sets from the option space of norms for people/agents to adopt.
You seem to be unexcited about approaches to moral reasoning that are "more 'egoistic', agent-relative, or otherwise nonconsequentialist" than the way you think moral reasoning should be done. Probably, "the way you think moral reasoning should be done" depends on placeholder concepts like "axiology" or "what's impartially good" that would have to be defined crisply if we wanted to completely solve morality according to your preferred evaluation criteria. Consider the possibility that, if we were to dig into things and formalize your desired criteria, you'd realize that there's a sense in which any answer to population ethics has to be at least a little bit 'egoistic' or agent-relative. Would this weaken your intuition that person-affecting views are unattractive?
I'll try to elaborate now why I believe "There's a sense in which any answer to population ethics has to be at least a little bit 'egoistic' or agent-relative."
Basically, I see a tension between "there's an objective axiology" and "people have the freedom to choose life goals that reflect their idiosyncrasies and personal experiences." If someone claims there's an objective axiology, they're implicitly saying that anyone who doesn't adopt an optimizing mindset around successfully scoring "utility points" according to that axiology is making some kind of mistake / isn't being optimally rational. They're implicitly saying it wouldn't make sense for people (at least for people who are competent/organized enough to reliably pursue long-term goals) to live their lives in pursuit of anything other than "scoring points according to the one true axiology." Note that this is a strange position to adopt! Especially when we look at the diversity between people in what sorts of lives they find most satisfying (e.g., differences between investment bankers, MMA fighters, novelists, people who open up vegan bakeries, people for whom family and children mean everything, those EA weirdos, etc.), it seems strange to say that all these people should conclude that they ought to prioritize surviving until the Singularity so as to get the most utility points overall. To say that everything before that point doesn't really matter by comparison. To say that any romantic relationships people enter are only placeholders until something better comes along with experience-machine technology.
Once you give up on the view that there's an objectively correct axiology (as well as the view that you ought to follow a wager on the possibility that there is one), all of the above considerations ("people differ in how they'd ideally want to score their own lives") will jump out at you, no longer suppressed by this really narrow and fairly weird framework of "How can we subsume all of human existence into utility points, and should we adopt 'totalism' toward the utility points or come up with a way to justify taking a person-affecting stance?"
There's a common tendency in EA to dismiss the strong initial appeal of person-affecting views because there's no elegant way to incorporate them into the moral realist "utility points" framework. But one person's modus ponens is another's modus tollens: Maybe if your framework can't incorporate person-affecting intuitions, that means there's something wrong with the framework.
I suspect that what's counterintuitive about totalism in population ethics is less about the "total"/"everything" part of it, and more related to what's counterintuitive about "utility points" (i.e., the postulate that there's an objective, all-encompassing axiology). I'm pretty convinced that something like person-affecting views, though obviously conceptualized somewhat differently (since we'd no longer be assuming moral realism), intuitively makes a lot of sense.
Here's how that would work (now I'll describe the new proposal for how to do ethical reasoning):
Utility is subjective. What's good for someone is what they deem good for themselves by their lights: the life goals for which they get up in the morning and try to do their best.
A beneficial outcome for all of humanity could be defined by giving individual humans the opportunity to reflect on their goals in life under ideal conditions, and then implementing some compromise (e.g., preference utilitarianism, or – probably better – a moral parliament framework) that makes everyone as happy as possible with the outcome.
Preference utilitarianism or the moral parliament framework would concern people who already exist – these frameworks' population-ethical implications are indirectly specified, in the sense that they depend on what the people on earth actually want. Still, people individually have views about how they want the future to go. Parents may care about having more children, many people may care about intelligent earth-originating life not going extinct, some people may care about creating as much hedonium as possible in the future, etc.
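To make the contrast between these two compromise mechanisms a bit more concrete, here's a minimal toy sketch in Python. All the groups, options, weights, and satisfaction scores are made up for illustration, and maximizing the worst-off group's satisfaction is only a crude stand-in for the compromise-seeking spirit of a moral parliament, not a canonical formulation of either framework.

```python
# Toy illustration only: made-up groups, options, and numbers.

# How satisfied each (stylized) group would be with each policy option, on a 0-1 scale.
satisfaction = {
    "fund_schools":    {"family-focused": 0.9, "explorers": 0.3, "risk-reducers": 0.4},
    "fund_space":      {"family-focused": 0.2, "explorers": 0.9, "risk-reducers": 0.3},
    "mixed_portfolio": {"family-focused": 0.6, "explorers": 0.5, "risk-reducers": 0.6},
}

# Population shares, treated as each group's weight (or its share of seats in a parliament).
weights = {"family-focused": 0.5, "explorers": 0.3, "risk-reducers": 0.2}

def total_weighted_satisfaction(option):
    """Preference-utilitarian flavour: sum of weighted satisfaction across groups."""
    return sum(weights[group] * score for group, score in satisfaction[option].items())

def worst_off_satisfaction(option):
    """Compromise/parliament flavour: how well the least satisfied group does."""
    return min(satisfaction[option].values())

utilitarian_pick = max(satisfaction, key=total_weighted_satisfaction)
compromise_pick = max(satisfaction, key=worst_off_satisfaction)

print("Preference-utilitarian pick:", utilitarian_pick)  # fund_schools (weighted total 0.62)
print("Compromise-oriented pick:   ", compromise_pick)   # mixed_portfolio (worst-off group at 0.5)
```

The point of the toy numbers is just that the two aggregation rules can come apart: adding up weighted preferences favours the option the largest group loves, while the compromise-oriented rule favours the option every group can live with.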
In my worldview, I conceptualize the role of ethics as two-fold:
(1) Inform people about the options for wisely chosen subjective life goals
--> This can include life goals inspired by a desire to do what's "most moral" / "impartial" / "altruistic," but it can also include more self-oriented life goals
(2) Provide guidance for how people should deal with the issue that not everyone shares the same life goals
Population ethics, then, is a subcategory of (1). Assuming you're looking for an altruistic life goal rather than a self-oriented one, you're faced with the question of whether your notion of "altruism" includes bringing happy people into existence. No matter what you say, your answer to population ethics will be, in a weak sense, 'egoistic' or agent-relative, simply because you're not answering "What's the right population ethics for everyone?" You're just answering, "What's my vote for how to allocate future resources?" (And you'd be trying to make your vote count in an altruistic/impartial way – but you don't have sole authority over that.)
If moral realism is false, notions like "optimal altruism" or "What's impartially best" are under-defined. Note that under-definedness doesn't mean "anything goes" – clearly, altruism has little to do with sorting pebbles or stacking cheese on the moon. "Altruism is under-defined" just means that there are multiple 'good' answers.
Finally, here's the "worldview B" I promised to introduce:
Within the anti-realist framework I just outlined, altruistically motivated people have to think about their preferences for what to do with future resources. And they can – perfectly coherently – adopt the view: "Because I have person-affecting intuitions, I don't care about creating new people; instead, I want to focus my 'altruistic' caring energy on helping people/beings that exist regardless of my choices. I want to help them by fulfilling their life goals, and by reducing the suffering of sentient beings that don't form world-models sophisticated enough to qualify for 'having life goals'."
Note that a person who thinks this may end up caring a great deal about humans not going extinct. However, unlike in the standard framework for population ethics, she'd care about this not because she thinks it's impartially good for the future to contain lots of happy people, but because she thinks it's good, from the perspective of the life goals of specific, existing others, for the future to go on and contain good things.
Is that really such a weird view? I really don't think so, myself. Isn't it rather standard population-ethical discourse that's a bit weird?
Edit: (Perhaps somewhat related: my thoughts on the semantics of what it could mean that 'pleasure is good'. My impression is that some people think there's an objectively correct axiology because they find experiential hedonism compelling in a sort of 'conceptual' way, which I find very dubious.)