A few years ago, I made an outline of Evan G. Williams' excellent philosophy paper for a local discussion group. It slowly circulated on the EA internet. Somebody recently recommended that I make the summary more widely known, so here it is.
The paper is readable and not behind a paywall, so I'd highly recommend reading the original paper if you have the time.
Summary
I. Core claim
- Assuming moral objectivism (or a close approximation), we are probably unknowingly guilty of serious, large-scale wrongdoing (“ongoing moral catastrophe”).
II. Definition: What is a moral catastrophe? Three criteria:
- Must be a serious wrongdoing (closer to wrongful death or slavery than to mild insults or inconveniences).
- Must be large-scale (not a single wrongful execution or a single man tortured).
- Broad swathes of society are responsible through action or inaction (can’t be unilateral unavoidable actions by a single dictator).
III. Why we probably have unknown moral catastrophes. Two core arguments:
- The Inductive Argument
- Assumption: It’s possible to engage in great moral wrongdoing even while acting in accordance with your own morals and those of your society.
- Basic motivation: an honest, sincere Nazi still seems to be acting wrongly in important ways.
- It’s not relevant whether this wrongdoing is due to mistaken empirical beliefs (All Jews are part of a major worldwide conspiracy) or wrong values (Jews are subhuman and have no moral value).
- With that assumption in mind, pretty much every major society in history has acted catastrophically wrongly.
- Consider conquistadores, crusaders, caliphates, Aztecs etc. who conquered in the name of God(s), who they called good and just.
- It’s unlikely that all of these people in history merely professed such beliefs, i.e. that all of them were liars rather than true believers.
- Existence proof: people can (and in fact do) commit great evil without being aware of it.
- Our committing ongoing moral catastrophes isn’t just possible, but probable.
- We are not that different from past generations: Literally hundreds of generations have thought that they actually were right and had figured out the One True Morality
- As recently as our parents’ generation, it was a common belief that some people have more rights than others because of race, sexuality, etc.
- We live in a time of moral upheaval, where our morality is very different from our grandparents’.
- Even if some generation eventually figures out All of Morality, the generation that gets everything right is probably one whose parents got almost everything right.
- The Disjunctive Argument
- Activists are not exempt. Even if all your pet causes come to fruition, this doesn’t mean our society is good, because there are still unknown moral catastrophes.
- There are so many different ways that a society could get things very wrong, that it’s almost impossible to get literally everything right.
- This isn’t just a minor concern: we could be wrong in ways that are a sizable fraction as bad as the Holocaust.
- There are many different kinds of ways that society could be wrong.
- We could be wrong about who has moral standing (eg. fetuses, animals)
- We could be empirically wrong about what harms or hurts people who morally matter (eg. religious indoctrination of children)
- We could be right about some obligations but not others.
- We can act immorally by paying too much attention to, and spending resources on, false moral obligations (a la crusaders)
- We could be right about what’s wrong and should be fixed, but wrong about how to prioritize different fixes.
- We could be right about what’s wrong, but wrong about what is and is not our responsibility to fix. (eg. poverty, borders)
- We could be wrong about the far future (natalism, existential risk)
- Within each category, there are multiple ways to go wrong.
- Further, some are mutually exclusive. Eg. Pro-lifers could be right and abortion is a great sin, or fetuses don’t matter and it’s greatly immoral to deprive women of their freedom in eg. third trimester abortions.
- Unlikely that we’re currently at the golden mean for all of these trade-offs.
- Disjunction comes into play.
- Even if you believe that we’re 95% right on each major issue, and there are maybe 15 of them, the total probability that we are right about everything is about 0.95^15 ≈ 46% (LZ: assumes independence)
- In practice, 95% confidence that we’re right on each major issue seems way too high, and 15 items too few.
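The disjunctive arithmetic above can be checked directly. A minimal sketch (the 95% per-issue confidence and the 15-issue count are the summary's own illustrative numbers, and the issues are assumed independent, as noted):

```python
# Probability that society is right on ALL major moral issues at once,
# assuming independence between issues (an illustrative simplification).
p_right_each = 0.95  # assumed chance we're right on any single major issue
n_issues = 15        # assumed number of major issues

p_all_right = p_right_each ** n_issues
print(f"P(right on all {n_issues} issues) = {p_all_right:.0%}")  # ~46%
```

Lowering the per-issue confidence or raising the issue count makes the product shrink quickly, which is the point of the disjunctive argument.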
IV. What should we do about it?
- Discarded possibility: hedging. If you’re not sure, play it “safe”, morally speaking.
- Eg. even if you think farmed animals probably aren’t sentient, or sentience doesn’t morally matter, you can go vegetarian “just in case”
- This does NOT generally work well enough because it’s not robust: as noted, too many things can go wrong, some in contradictory directions.
- Recognition of Wrongdoing
- Actively try to figure out which catastrophic wrongs we’re committing
- Research more into practical fields (eg. animal consciousness) where we can be critically wrong
- Research more into moral philosophy
- Critical: bad to have increased technological knowledge w/o increased moral wisdom
- imagine Genghis Khan w/nuclear weapons
- These fields must interact
- It’s not enough for philosophers to say that animals matter if they are conscious, and for scientists to say that dolphins are conscious without knowing whether that matters; our society must be able to integrate the two.
- Need marketplace of ideas where true ideas win out
- Rapid intellectual progress is critical.
- If it’s worth fighting literal wars to defeat the Nazis or end slavery, it’s worth substantial material investment and societal loss to figure out what we’re currently doing wrong.
- Implementation of improved values
- Once we figure out what great moral wrongs we’ve committed, we want to be able to make moral reparations for past harms, or at least stop doing future harms in that direction as quickly as possible.
- To do this, we want to maximize flexibility in material conditions
- Extremely poor/war-torn societies would be unable to make rapid moral changes as needed
- LZ example: Complex systems built along specific designs are less resilient to shocks, and also harder to change, cf. Antifragile.
- In the same way we stock up resources for war preparation, we might want to save up resources for future moral emergencies, so we can eg. pay reparations, or at least quickly make the relevant changes.
- LZ: Unsure how this is actually possible in practice. Eg, individuals usually save by investing, and governments save by buying other government’s debt or by investing in the private sector, but it’s unclear how the world “saves” as a whole.
- We want to maximize flexibility in social conditions
- Even if it’s materially possible to make large changes, society might make such changes very difficult, because of inertia and conservatism bias.
- Constitutional amendments, for example, are suspect.
V. Conclusion/Other remarks
- Counterconsideration One: Building a society that can correct moral catastrophes isn’t the same as actually correcting moral catastrophes.
- Counterconsideration Two: Many of the measures suggested above to prepare for correcting moral catastrophes may themselves be evil
- e.g. money spent on moral research could have instead been spent on global poverty; building a maximally flexible society might involve draconian restrictions on current people’s rights
- However, this is still worth doing in the short term.
This work is licensed under a Creative Commons Attribution 4.0 International License.
I don't think this is particularly true. Government debt is not solely owned by other governments - otherwise it would be strange that all governments (as far as I know) have positive debt. Generally, I believe government debt is owned by the private sector (individual people, businesses, etc.). If we're talking about all governments having money saved up (because we would trust a government to pay large amounts of money to avert a crisis, whereas we would not normally trust the public to do so voluntarily), then that is possible. A good way of achieving that is having governments ensure their debt doesn't get too high (though austerity measures can have negative consequences too) - in fact this is advice that some give to governments: "don't have too much debt - otherwise if there's a crisis, you won't be able to afford to spend as much extra money". If you believe that it is morally crucial that governments are able to spend large amounts of money to avert a newfound moral crisis, then you might propose that they keep debt low, so they could spend extra large amounts if necessary. If you believe this to a very strong extent (which I do not), then you might even advocate that they have "negative" debt (which could be investing in the private sector).
Even though I hadn't considered this point before, and even though I consider it valid, I don't intuitively think it would have a large effect on the optimal level of debt - my instinctive guess is that this consideration might be worth ~5pp less debt. I think it is sufficient for governments, when an extremely urgent cause is discovered, to increase taxes, decrease domestic spending, and send the extra money towards the new cause. I believe a more pressing imperative is that (rich) governments significantly increase their international aid payments now. The most effective charities are pretty good, and there's not too much risk of accidentally doing harm (I often argue that political causes are risky, though, because people are notoriously overconfident about their political views). I think it's hard to justify that e.g. US healthcare expenditure is not partially diverted toward poorer countries, where it could save many more lives.
About the main topic of the article, I do mostly agree, and especially so in cases involving non-human sentient beings. Given how much more time is devoted to human well-being, I think it is more difficult (but still, of course, very possible) for a particular human-centered catastrophe to slip under the radar. I nonetheless think advocating for an increase in "crisis awareness research" is very sensible in either case.
The idea of reparations doesn't have intrinsic value in my utilitarian morality, though I suppose it can often be incidentally relevant (in the same vein as money being more effective if spent on people who are more in need). The main priorities, in my opinion, should be reducing the chance of an ongoing catastrophe, and quickly (but also cautiously) stopping one when it is discovered.
Side Note: this is a linkpost, so maybe I shouldn't have commented here? This is my first non-trivial comment, and I got a bit carried away - apologies.