For Existential Choices Debate Week, we’re trying out a new type of event: the Existential Choices Symposium. It'll be a written discussion between invited guests and any Forum user who'd like to join in.
How it works:
- Any forum user can write a top-level comment that asks a question or introduces a consideration whose answer might affect people’s position on the debate statement[1]. For example: “Are there any interventions aimed at increasing the value of the future that are as widely morally supported as extinction-risk reduction?” You can start writing these comments now.
- The symposium’s signed-up participants, Will MacAskill, Tyler John, Michael St Jules, Andreas Mogensen and Greg Colbourn, will respond to questions, and discuss them with each other and other forum users, in the comments.
- To be 100% clear - you, the reader, are very welcome to join in any conversation on this post. You don't have to be a listed participant to take part.
This is an experiment. We’ll see how it goes and maybe run something similar next time. Feedback is welcome (message me here).
The symposium participants will be online between 3 and 5 pm GMT on Monday the 17th.
Brief bios for participants (mistakes mine):
- Will MacAskill is an Associate Professor of moral philosophy at the University of Oxford and a Senior Research Fellow at Forethought. He wrote the books Doing Good Better, Moral Uncertainty, and What We Owe The Future. He is a cofounder of Giving What We Can, 80,000 Hours, the Centre for Effective Altruism, and the Global Priorities Institute.
- Tyler John is an AI researcher, grantmaker, and philanthropic advisor. He is an incoming Visiting Scholar at the Cambridge Leverhulme Centre for the Future of Intelligence and an advisor to multiple philanthropists. He was previously the Programme Officer for emerging technology governance and Head of Research at Longview Philanthropy. Tyler holds a PhD in philosophy from Rutgers University—New Brunswick, where his dissertation focused on longtermist political philosophy and mechanism design, and the case for moral trajectory change.
- Michael St Jules is an independent researcher, who has written on “philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals”.
- Andreas Mogensen is a Senior Research Fellow in Philosophy at the Global Priorities Institute, part of the University of Oxford’s Faculty of Philosophy. His current research interests are primarily in normative and applied ethics. His previous publications have addressed topics in meta-ethics and moral epistemology, especially those associated with evolutionary debunking arguments.
- Greg Colbourn is the founder of CEEALAR and is currently a donor and advocate for Pause AI, which promotes a global AI moratorium. He has also supported various other projects in the space over the last 2 years.
Thanks for reading! If you'd like to contribute to this discussion, write some questions below which could be discussed in the symposium.
NB: To help conversations happen smoothly, I'd recommend sticking to one idea per top-level comment (even if that means posting multiple comments at once).
If we imagine that world C already exists, then yeah, we should try to change C into D. (Similarly, if world D already exists, we'd want to prevent changes from D to C.)
So, if either of the two worlds already exists, D>C.
Where your setup of this argument becomes controversial, though, is when you suggest that "D>C" holds in some absolute sense, as opposed to holding only (in virtue of how D better fulfills the preferences of existing people) under the stipulation that we start out in one of the two worlds (which already contains all the relevant people).
Let's think about the case where no one exists so far, where we're the population planners for a new planet that we can shape into either C or D. (In that scenario, there's no relevant difference between B and C, btw.) I'd argue that both options are now equally defensible, because the interests of possible people are underdefined* and there are defensible personal stances on population ethics that justify either.**
*The interests of possible people are underdefined not just because it's open how many people we might create. In addition, it's also open who we might create: some human psychological profiles are such that when someone is born into a happy/privileged life, they adopt a Buddhist stance towards existence and think of themselves as not having benefited from being born. Other psychological profiles are such that people do think of themselves as grateful and lucky for having been born. (In fact, yet others even claim that they'd consider themselves lucky/grateful even if their lives consisted of nothing but torture.) These varying intuitions towards existence can inspire people's population-ethical leanings. But there's no fact of the matter about "which intuitions are more true." These are just different interpretations of the same sets of facts. There's no uniquely correct way to approach population ethics.
**Namely, C is better on anti-natalist harm reduction grounds (at least depending on how we interpret the scale/negative numbers on the scale), whereas D is better on totalist grounds.
All of that assumed that C and D are the only options. If we add a third alternative, say "create no one," the ranking between C and D (which were previously equally defensible) can change.
At this point, moral realist proponents of an objective "theory of the good" might shriek in agony and think I've gone mad. But hear me out: it's not crazy at all to think that choices depend on the alternatives we have available. If we also get the option "create no one," then I'd say C becomes worse than the two other options, because there's no approach to population ethics according to which C is optimal among the three. My person-affecting stance on population ethics says that we're free to do a bunch of things, but the one thing we cannot do is act with negligent disregard for the interests of potential people/beings.
Why? Essentially for reasons similar to why common-sense morality says that struggling lower-class families are permitted to have children whom they raise under hardship with little means (assuming their lives are still worth living in expectation), but if a millionaire were to do the same to their child, they'd be an asshole. The fact that the millionaire has the option "give my child enough resources to have a high chance at happiness" makes it worse if they then proceed to give their child hardly any resources at all. Bringing people into existence makes you responsible for them. If you have the option to make your children really well off but decide not to, you're not taking your child's interests into consideration, which is bad. (Of course, if the millionaire donates all their money to effective causes and then raises a child in relative poverty, that's acceptable again.)
I think where the proponents of an objective theory of the good go wrong is the idea that we keep score on the same objective scoreboard regardless of whether it concerns existing people or merely potential people. But those are not commensurate perspectives. This whole idea of an "objective axiology/theory of the good" is dubious to me, and trying to squeeze these perspectives under one umbrella has pretty counterintuitive implications. As I wrote elsewhere:
Here's a framework for doing population ethics without an objective axiology. In this framework, person-affecting views seem quite intuitive because we can motivate them as follows: