For this post, I'm going to use the scenario outlined in the science fiction book Seveneves by Neal Stephenson. It's a far-fetched scenario (and I leave out a lot of detail), but it sets up my point nicely, so bear with me. Full credit for the intro, of course, to Stephenson.
This is cross-posted from my blog.
Introduction
The story is set in the near future. Technology is slightly more advanced than it is today, and the International Space Station (ISS) is somewhat larger and more sophisticated. Long story short, the Moon blows up, and scientists determine that humanity has two years before the surface of the Earth becomes uninhabitable for 5,000 years due to rubble bombardment.
Immediately, humanity works together to increase the size and sustainability of the ISS to ensure that humanity and its heritage (e.g. history, culture, animals and plants stored in a genetic format) can survive for 5,000 years to eventually repopulate the Earth. That this is a good thing to do is not once questioned. Humanity simply accepts as its duty that the diversity of life that exists today will continue at some point in the future. This is done with the acceptance that the inhabitants and descendants of the ISS will not have an easy life by any stretch of the imagination. But it is apparently their 'duty' to persevere.
The problem
It is taken as a given that stopping humanity from going extinct is a good thing, and I tend to agree, though not as strongly as some (I hold uncertainty about the expected value of the future assuming humanity/life in general survives). However, if we consider different ethical theories, we find that many give different answers to the question of what we ought to do in this case. Below I outline some of these possible differences. I say 'might' instead of 'will' because I've oversimplified things, and if you tweak the specifics you might come up with a different answer. Take this as illustrative only.
Classical hedonistic utilitarian
If you think the chances of there being more wellbeing in the future are greater than there being more suffering (or put another way, you think the expected value of the future is positive), you might want to support the ISS.
If you think all life on Earth and therefore suffering will cease to exist if the ISS plan fails, you might want to actively disrupt the project to increase the probability that happens. At the very least, you probably won't want to support it.
I'm not really sure what a deontologist would think of this, but I suspect that they would at least be motivated to a different extent than a classical utilitarian.
Depending on how you see the specifics of the scenario, the 'ISS survives' case is roughly as good as the 'ISS fails' case.
Each of these ethical frameworks has a significantly different answer to the question of 'what ought we do in this one specific case?' They also have very different answers to many current and future ethical dilemmas that are much more likely. This is worrying.
And yet, to my knowledge, there does not seem to be a concerted push towards convergence on a single ethical theory (and I'm not just talking about compromise). Perhaps if you're not a moral realist, this isn't so important to you. But I would argue that getting society at large to converge on a single ethical theory is very important, and not just for thinking about the great questions, like what to do about existential risk and the far future. A failure to converge also plausibly results in a lot of zero-sum games and a lot of wasted effort. Even Effective Altruists disagree on certain aspects of ethics, or hold entirely different ethical codes. At some point, this is going to result in a major misalignment of objectives, if it hasn't already.
I'd like to propose that simply seeking convergence on ethics is a highly neglected and important cause. To date, most work in this direction seems to involve advocates for each ethical theory promoting their own view, which results in yet another zero-sum game. Perhaps we need to agree on another way to do this.
If ethics were a game of soccer, we'd all be kicking the ball in different directions. Sometimes we happen to kick in the same direction, sometimes in opposite directions. What could be more important than agreeing on which direction to kick the ball, and then kicking it towards the best possible world?
Thanks for sharing the moral parliament set-up, Rick. It looks good, but it's strikingly similar to MacAskill's Expected Moral Value methodology!
I disagree a little with you though. I think that some moral frameworks are actually quite good at adapting to new and strange situations. Take, for example, a classical hedonistic utilitarian framework, which accounts for consciousness in any form (human, non-human, digital, etc.). If you come up with a new situation, you should still be able to work out which action is most ethical (in this case, which action maximises pleasure and minimises pain). The answer may not be immediately clear, especially in tricky scenarios, and perhaps we can't be 100% certain about which action is best, but that doesn't mean there isn't an answer.
Regarding your last point about the downsides of taking utilitarianism to its conclusion, I think that (in theory at least) utilitarianism should take these into account. If applying utilitarianism harms your personal relationships and mental growth and ultimately produces a bad outcome, you're just not applying utilitarianism correctly.
Sometimes the best way to be a utilitarian is to pretend not to be a utilitarian, and there are heaps of examples of this in everyday life (e.g. not donating 100% of your income because you may burn out, or because you may set an example that no one feels they can follow... etc.).
Thank you Mike, all very good points. I agree that some frameworks, especially versions of utilitarianism, are quite good at adapting to new situations, but to be a little more formal about my original point, I worry that the resources and skills required to adapt these frameworks in order to make them 'work' make them poor frameworks to rely on day-to-day. Expecting human beings to apply these frameworks 'correctly' probably gives the forecasting and estimation abilities of humans a little too much credit. For a reductive example, 'do the m...