For Existential Choices Debate Week, we’re trying out a new type of event: the Existential Choices Symposium. It'll be a written discussion between invited guests and any Forum user who'd like to join in.
How it works:
- Any forum user can write a top-level comment that asks a question or introduces a consideration that might bear on people’s answers to the debate statement[1]. For example: “Are there any interventions aimed at increasing the value of the future that are as widely morally supported as extinction-risk reduction?” You can start writing these comments now.
- The symposium’s signed-up participants (Will MacAskill, Tyler John, Michael St Jules, Andreas Mogensen, and Greg Colbourn) will respond to questions and discuss them with each other and with other forum users in the comments.
- To be 100% clear - you, the reader, are very welcome to join in any conversation on this post. You don't have to be a listed participant to take part.
This is an experiment. We’ll see how it goes and maybe run something similar next time. Feedback is welcome (you can message me here).
The symposium participants will be online between 3 and 5 pm GMT on Monday the 17th.
Brief bios for participants (mistakes mine):
- Will MacAskill is an Associate Professor of moral philosophy at the University of Oxford and a Senior Research Fellow at Forethought. He wrote the books Doing Good Better, Moral Uncertainty, and What We Owe The Future. He is a cofounder of Giving What We Can, 80,000 Hours, the Centre for Effective Altruism, and the Global Priorities Institute.
- Tyler John is an AI researcher, grantmaker, and philanthropic advisor. He is an incoming Visiting Scholar at the Cambridge Leverhulme Centre for the Future of Intelligence and an advisor to multiple philanthropists. He was previously the Programme Officer for emerging technology governance and Head of Research at Longview Philanthropy. Tyler holds a PhD in philosophy from Rutgers University—New Brunswick, where his dissertation focused on longtermist political philosophy and mechanism design, and the case for moral trajectory change.
- Michael St Jules is an independent researcher who has written on “philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals”.
- Andreas Mogensen is a Senior Research Fellow in Philosophy at the Global Priorities Institute, part of the University of Oxford’s Faculty of Philosophy. His current research interests are primarily in normative and applied ethics. His previous publications have addressed topics in meta-ethics and moral epistemology, especially those associated with evolutionary debunking arguments.
- Greg Colbourn is the founder of CEEALAR and is currently a donor and advocate for Pause AI, which promotes a global AI moratorium. He has also supported various other projects in the space over the last 2 years.
Thanks for reading! If you'd like to contribute to this discussion, write some questions below which could be discussed in the symposium.
NB: To help conversations happen smoothly, I'd recommend sticking to one idea per top-level comment (even if that means posting multiple comments at once).
Thank you, Will, excellent questions. And thanks for drawing out all of the implications here. Yeah I'm a super duper bullet biter. Age hasn't dulled my moral senses like it has yours! xP
Yes, I take (2) on the 1 vs 2 horn. I think I'm the only person who has my exact values. Maybe there's someone else in the world, but not more than a handful at most. This is because I think our descendants will have to make razor-thin choices in computational space about what matters and by how much, and these choices will amount to Power Laws of Value.
I generally like your values quite a bit, but you've just admitted that you're highly scope insensitive. So even if we valued the same matter equally, depending on the empirical facts it looks like I should value my own judgment potentially nonillions of times as much as yours, on scope sensitivity grounds alone!
Yup, I am worried about this and I am not doing much about it. I'm worried that the best thing that I could do would simply be to go into cryopreservation right now and hope that my brain is uploaded as a logically omniscient emulation with its values fully locked in and extrapolated. But I'm not super excited about making that sacrifice. Any tips on ways to tie myself to the mast?
It would be something like: P(people converge on my exact tastes without me forcing them to) + [P(kind of moral or theistic realism I don't understand)*P(the initial conditions are such that this convergence happens)*P(it happens quickly enough before other values are locked in)*P(people are very motivated by these values)]. To hazard an off-the-cuff guess, maybe 10^-8 + 10^-4*0.2*0.3*0.4, or about 2.4*10^-6.
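Spelling that guess out as a quick sketch (the variable names are just labels I'm inventing here, and the numbers are the same off-the-cuff guesses as above, nothing more principled):

```python
# Back-of-the-envelope version of the estimate above; every number is a guess.
p_unforced_convergence = 1e-8  # P(people converge on my exact tastes without me forcing them to)
p_realism = 1e-4               # P(a kind of moral or theistic realism I don't understand)
p_initial_conditions = 0.2     # P(the initial conditions are such that this convergence happens)
p_fast_enough = 0.3            # P(it happens quickly enough, before other values are locked in)
p_motivating = 0.4             # P(people are very motivated by these values)

p_total = p_unforced_convergence + p_realism * p_initial_conditions * p_fast_enough * p_motivating
print(p_total)  # roughly 2.4e-06
```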
I should be more humble about this. Maybe it turns out there just aren't that many free parameters on moral value once you're a certain kind of hedonistic consequentialist who knows the empirical facts, and those people kind of converge on the same things. Suppose that's 1/30 odds vs my "it could be anything" modal view. Then suppose 1/20 elites become that kind of hedonistic consequentialist upon deliberation. Then it looks like we control 1/600th of the resources. I'm just making these numbers up, but hopefully they illustrate that this is a useful push that makes me a bit less pessimistic.
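Again, just to show the arithmetic with the same made-up numbers (a sketch, not a real model):

```python
# Made-up numbers from the paragraph above, just to show the arithmetic.
p_few_free_parameters = 1 / 30  # odds that moral value converges for this kind of hedonistic consequentialist
share_of_elites = 1 / 20        # fraction of elites who become that kind of consequentialist on deliberation
share_of_resources = p_few_free_parameters * share_of_elites
print(share_of_resources)  # 1/600, about 0.0017
```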
Maybe 1/20 that we do get to a suitably ideal kind of trade. I believe what I want is a pretty rivalrous good, i.e. stars, so at the advent of ideal trade I still won't get very much of what I want. But it's worth thinking about whether I could get most of what I want in other ways, such as by trading with digital slave-owners to make their slaves extremely happy, in a relatively non-rivalrous way.
I don't have a clear view on this and think further reflection on this could change my views a lot.