
A few years ago, I made an outline of Evan G. Williams' excellent philosophy paper for a local discussion group. It slowly got circulated on the EA internet. Somebody recently recommended that I make the summary more widely known, so here it is.

The paper is readable and not behind a paywall, so I'd highly recommend reading the original paper if you have the time.

Summary

I. Core claim

  1. Assuming moral objectivism (or a close approximation), we are probably unknowingly guilty of serious, large-scale wrong-doing (“ongoing moral catastrophe”).

II. Definition: What is a moral catastrophe? Three criteria:

  1. Must be a serious wrong-doing (closer to wrongful death or slavery than mild insults or inconveniences).
  2. Must be large-scale (not a single wrongful execution, or a single man tortured).
  3. Broad swathes of society are responsible through action or inaction (can’t be unilateral unavoidable actions by a single dictator).

III. Why we probably have unknown moral catastrophes. Two core arguments:

  1. The Inductive Argument
    1. Assumption: It’s possible to engage in great moral wrongdoing even while acting in accordance with your own morals, and those of your society.
      1. Basic motivation: an honest, sincere Nazi still seems to be acting wrongly in important ways.
      2. It’s not relevant whether this wrongdoing is due to mistaken empirical beliefs (All Jews are part of a major worldwide conspiracy) or wrong values (Jews are subhuman and have no moral value).
    2. With that assumption in mind, pretty much every major society in history has acted catastrophically wrongly.
      1. Consider the conquistadors, crusaders, caliphates, Aztecs, etc., who conquered in the name of God(s), whom they called good and just.
      2. It’s unlikely that all of these people in history only professed such a belief, and that all of them were liars instead of true believers.
      3. Existence proof: People can (and in fact do) do great evil without being aware of this.
    3. Our committing ongoing moral catastrophes isn’t just possible, but probable.
      1. We are not that different from past generations: literally hundreds of generations have thought that they were right and had figured out the One True Morality.
      2. As recently as our parents’ generation, it was a common belief that some people have more rights than others because of race, sexuality, etc.
      3. We live in a time of moral upheaval, where our morality is very different from our grandparents’.
      4. Even if some generation eventually figures out All of Morality, the generation that gets everything right is probably a generation whose parents got almost everything right.
  2. The Disjunctive Argument
    1. Activists are not exempt. Even if all your pet causes come to fruition, this doesn’t mean our society is good, because there are still unknown moral catastrophes.
    2. There are so many different ways that a society could get things very wrong, that it’s almost impossible to get literally everything right.
      1. This isn’t just a minor concern; we could be wrong in ways that are a sizable proportion of how bad the Holocaust was.
    3. There are many different kinds of ways that society could be wrong.
      1. We could be wrong about who has moral standing (e.g. fetuses, animals).
      2. We could be empirically wrong about what harms or hurts people who morally matter (eg. religious indoctrination of children)
      3. We could be right about some obligations but not others.
        1. We can act immorally by paying too much attention to, and spending resources on, false moral obligations (à la the crusaders).
      4. We could be right about what’s wrong and should be fixed, but wrong about how to prioritize different fixes.
      5. We could be right about what’s wrong, but wrong about what is and is not our responsibility to fix. (eg. poverty, borders)
      6. We could be wrong about the far future (natalism, existential risk)
    4. Within each category, there are multiple ways to go wrong.
      1. Further, some are mutually exclusive. Eg. Pro-lifers could be right and abortion is a great sin, or fetuses don’t matter and it’s greatly immoral to deprive women of their freedom in eg. third trimester abortions.
      2. Unlikely that we’re currently at the golden mean for all of these trade-offs.
    5. Disjunction comes into play.
      1. Even if you believe that we’re 95% right on each major issue, and there are maybe 15 of them, the total probability that we are right about all of them is ~0.95^15 ≈ 46% (LZ: assumes independence).
      2. In practice, 95% confidence that we’re right on each major issue seems way too high, and 15 items too few.
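The disjunction arithmetic above can be sketched in a few lines. This is a toy calculation that, as the LZ note flags, assumes the issues are independent; the 95% and 15-issue figures come from the outline, and the second set of numbers is a hypothetical, less optimistic variant.

```python
# Toy disjunction calculation (assumes independence across issues).
# If society has a 95% chance of being right on each of 15 major
# moral issues, the chance it is right on ALL of them is:
p_right_each = 0.95
n_issues = 15
p_all_right = p_right_each ** n_issues
print(f"{p_all_right:.0%}")  # roughly 46%

# Hypothetical less-optimistic numbers (90% per issue, 30 issues):
print(f"{0.90 ** 30:.0%}")  # roughly 4%
```

Even modest per-issue doubt compounds quickly, which is the point of the disjunctive argument.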

IV. What should we do about it? 

  1. Discarded possibility: hedging. If you’re not sure, play it “safe”, morally speaking.
  2. Eg. even if you think farmed animals probably aren’t sentient, or sentience doesn’t morally matter, you can go vegetarian “just in case”
  3. This does NOT generally work well enough because it’s not robust: as noted, too many things can go wrong, some in contradictory directions.
  4. Recognition of Wrongdoing
    1. Actively try to figure out which catastrophic wrongs we’re committing
      1. Research more into practical fields (eg. animal consciousness) where we can be critically wrong
      2. Research more into moral philosophy
        1. Critical: bad to have increased technological knowledge w/o increased moral wisdom
        2. imagine Genghis Khan w/nuclear weapons
      3. These fields must interact
        1. Not enough for philosophers to say that animals are important if they are conscious, and for scientists to say that dolphins are conscious but not know whether that matters; our society must be able to integrate the two.
    2. Need marketplace of ideas where true ideas win out
    3. Rapid intellectual progress is critical.
      1. If it’s worth fighting literal wars to defeat the Nazis or end slavery, it’s worth substantial material investment and societal loss to figure out what we’re currently doing wrong.
  5. Implementation of improved values
    1. Once we figure out what great moral wrongs we’ve committed, we want to be able to make moral reparations for past harms, or at least stop doing future harms in that direction as quickly as possible.
    2. To do this, we want to maximize flexibility in material conditions
      1. Extremely poor/war-torn societies would be unable to make rapid moral changes as needed
      2. LZ example: Complex systems built along specific designs are less resilient to shocks, and also harder to change, cf. Antifragile.
      3. In the same way we stock up resources for war preparation, we might want to save up resources for future moral emergencies, so we can eg. pay reparations, or at least quickly make the relevant changes.
        1. LZ: Unsure how this is actually possible in practice. E.g., individuals usually save by investing, and governments save by buying other governments’ debt or by investing in the private sector, but it’s unclear how the world “saves” as a whole.
    3. We want to maximize flexibility in social conditions
      1. Even if it’s materially possible to make large changes, society might make such changes very difficult, because of inertia and conservatism.
      2. Constitutional amendments, for example, are suspect.

V. Conclusion/Other remarks

  1. Counterconsideration One: Building a society that can correct moral catastrophes isn’t the same as actually correcting moral catastrophes.
  2. Counterconsideration Two : Many of the measures suggested above to prepare for correcting moral catastrophes may themselves be evil
    1. e.g. money spent on moral research could instead have been spent on global poverty; building a maximally flexible society might involve draconian restrictions on current people’s rights
  3. However, this is still worth doing in the short term.


This work is licensed under a Creative Commons Attribution 4.0 International License.

Comments



I considered Evan Williams' paper one of the most important papers in cause prioritization at the time, and I think I still broadly buy this. As I mention in this answer, there are at least 4 points his paper brought up that are nontrivial, interesting, and hard to refute.

If I were to write this summary again, I think I'd be noticeably more opinionated. In particular, a key disagreement I have with him (which I remember having at the time I was making the summary, but which never made it into my notes) is on the importance of the speed of moral progress vs the sustainability of continued moral progress. In "implementation of improved values", the paper focuses a lot on the flexibility of setting up society to be able to make moral progress quickly, but naively I feel about as worried or more worried that society can make anti-progress and do horrifyingly dumb and new things in the name of good. So I'd be really worried about trajectory changes for the worse, especially longer-lasting ones ("lock-in" is a phrase that's in vogue these days).

I've also updated significantly on both the moral cost and the empirical probability of near-term extinction risks, and of course extinction is the archetypal existential risk that will dramatically curtail the value of the far future.

It feels weird getting my outline into the EA decade review, instead of the original paper, though I wouldn't be very surprised if at this point more EAs have read my outline than the paper itself.

I vaguely feel like Williams should get a lot more credit than he has received for this paper. Like EA should give him a prize or something, maybe help him identify more impactful research areas, etc.

Strong upvoting because I want to incentivize people to write and share more summaries.

Summaries are awesome and allow me to understand the high-level points of papers that I would not have read otherwise. This summary in particular is well written and well-formatted.

Thanks for writing it and sharing it!

Agreed! 5 years on and now we have LLM summarizers. Perhaps with LLM summarizer + human proof-reader there could be something interesting here.

I can highly recommend this mockumentary about how future people might judge our treatment of non-human animals: Carnage - Swallowing the Past

https://vimeo.com/798189243 

We absolutely welcome summaries! People getting more ways to access good material is one of the reasons the Forum exists.

That said, did you consider copying the summary into a Forum post, rather than linking it? That's definitely more work, but my impression is that it usually leads to more discussion when people don't have to click away into another page. I don't have strong evidence to back that up, though.

Also: because the title is long and long titles are cut short in some views of the Forum, I'd recommend that summaries of pieces be something like "The Possibility of an Ongoing Moral Catastrophe (Summary)".

We absolutely welcome summaries! People getting more ways to access good material is one of the reasons the Forum exists.

Yay!

did you consider copying the summary into a Forum post, rather than linking it?

Yes. I did a lot of non-standard formatting tricks in Google Docs when I first wrote it (because I wasn't expecting to ever need to port it over to a different format). So when I first tried to copy it over, the whole thing looked disastrously unreadable.

Changed the title. :)


[anonymous]

The statement, "Need marketplace of ideas where true ideas win out", indicates the inevitable failure of this approach. There will never be agreement about the identity, or even the existence, of "true ideas".

"individuals usually save by investing, and governments save by buying other governments’ debt or by investing in the private sector, but it’s unclear how the world “saves” as a whole"

I don't think this is particularly true. Government debt is not solely owned by other governments - otherwise it would be strange that all governments (as far as I know) have positive debt. Generally, I believe government debt is owned by the private sector (individual people, businesses, etc.). If we're talking about all governments having money saved up (because we would trust a government to pay large amounts of money to avert a crisis, whereas we would not normally trust the public to do so voluntarily), then that is possible. A good way of achieving that is having governments ensure their debt doesn't get too high (though austerity measures can have negative consequences too) - in fact this is advice that some give to governments: "don't have too much debt - otherwise if there's a crisis, you won't be able to afford to spend as much extra money". If you believe that it is morally crucial that governments are able to spend large amounts of money to avert a newfound moral crisis, then you might propose that they keep debt low, so they could spend extra large amounts if necessary. If you believe this to a very strong extent (which I do not), then you might even advocate that they have "negative" debt (which could be investing in the private sector).

Even though I hadn't considered this point before, and even though I consider it valid, I don't intuitively think it would have a large effect on the optimal level for debt - my instinctive guess is that this consideration might be worth ~5pp less of debt. I think it is sufficient for governments, when an extremely urgent cause is discovered, to increase taxes, decrease domestic spending, and send this extra money towards this new cause. I believe a more pressing imperative is that (rich) governments significantly increase their international aid payments now. The most effective charities are pretty good, and there's not too much risk of accidentally doing harm (I often argue that political causes are risky, though, because people are notoriously overconfident about their political views). I think it's unjustifiable that e.g. US healthcare expenditure could save many, many more lives if partially diverted toward a poorer country.

About the main topic of the article, I do mostly agree, and especially so in cases involving non-human sentient beings. Given how much more time is devoted to human well-being, I think it is more difficult (but still, of course, very possible) for a particular human-centered catastrophe to slip under the radar. I nonetheless think advocating for an increase in "crisis awareness research" is very sensible in either case.

The idea of reparations doesn't have intrinsic value in my utilitarian morality, though I suppose it can often be incidentally relevant (in the same vein as money being more effective if spent on people who are more in need). The main priorities, in my opinion, should be reducing the chance of an ongoing catastrophe, and quickly (but also cautiously) stopping one when it is discovered.


Side Note: this is a linkpost, so maybe I shouldn't have commented here? This is my first non-trivial comment, and I got a bit carried away - apologies.

I'm entering philosophy grad school now, but in a few years I'm going to have to start thinking about designing courses, and I'm thinking of designing an intro course around this paper. Would it be alright if I used your summary as course material?

Sure! In general you can assume that anything I write publicly is freely available for academic purposes. I'd also be interested in seeing the syllabus if/when you end up designing it.

Definitely, I'll send it along when I design it. Since intro ethics at my institution is usually taught as applied ethics, the basic concept would be to start by introducing the students to the moral catastrophes paper/concept, then go through at least some of the moral issues Williams brings up in the disjunctive portion of the argument to examine how likely they are to be moral catastrophes. I haven't picked particular readings yet though as I don't know the literatures yet. Other possible topics: a unit on historical moral catastrophes (e.g. slavery in the South, the Holocaust); a unit on biases related to moral catastrophes; a unit on the psychology of evil (e.g. Baumeister's work on the subject, which I haven't read yet); a unit on moral uncertainty; a unit on whether antirealism can escape or accommodate the possibility of moral catastrophes.

Assignment ideas:

  1. Pick one of the potential moral catastrophes Williams mentions, which you think is least likely to actually be a moral catastrophe. Now, imagine that you are yourself five years from now and you’ve been completely convinced that it is in fact a moral catastrophe. What convinced you? Write a paper trying to convince your current self that it is a moral catastrophe after all.
  2. Come up with a potential moral catastrophe that Williams didn’t mention, and write a brief (maybe 1-2 pages?) argument for why it is or isn’t one (whatever you actually believe). Further possibility: Once these are collected, I observe how many people argued that the one they picked was not a moral catastrophe, and if it’s far over 50%, discuss with the class where that bias might come from (e.g. status quo bias, etc.).

This is all still in the brainstorming stage at the moment, but feel free to use any of this if you're ever designing a course/discussion group for this paper.

For #2, Ideological Turing Tests could be cool too.

Thanks for the summary. I have two takeaways:

1. EA is (in part) claiming that there are several ongoing moral catastrophes caused by inaction against global poverty, animal suffering, x-risk,... (some of them are definitely caused by action, but that does not matter as much on consequentialist grounds). Unknown ongoing moral catastrophes are cause-X.

2. Working to increase our capability to handle undiscovered ongoing moral catastrophes could be a major goal. The idea I saw here was to reserve resources, which is a very interesting argument for investing in economic growth.

You should sit in with some friends of Bill W. Tell them you're a visitor, then just sit there and listen. They'll teach you about pure altruism. It's pretty damn awesome.

In the same way we stock up resources for war preparation, we might want to save up resources for future moral emergencies, so we can eg. pay reparations, or at least quickly make the relevant changes.

  1. LZ: Unsure how this is actually possible in practice. Eg, individuals usually save by investing, and governments save by buying other government’s debt or by investing in the private sector, but it’s unclear how the world “saves” as a whole.

We could build up large stockpiles of resources, or capacity for producing resources in the future, that could be flexibly used for a wide variety of purposes depending on where moral philosophy leads us. Essentially, we are trying to predict what resources moral philosophy might demand in the future and get them ready. For example, we might spend less on consumption now and invest more effort in getting prepared to produce huge quantities of semiconductors and spaceships.

In short: the world saves by consuming less and investing more.
