
I have previously written about the importance of making global priorities research accessible to a wider range of people. Many people don’t have the time or desire to read academic papers, but the findings of the research are still hugely important and action-relevant.

The Global Priorities Institute (GPI) has started producing paper summaries, but even these might have somewhat limited readership given their length. They are also time-consuming for GPI to develop and aren’t all in one place.

With this in mind, and given my personal interest in global priorities research, I have written a few mini-summaries of GPI papers. The extra lazy / time-poor can read just “The bottom lines”. I would welcome feedback on whether these samples are useful and whether I should continue making them, working towards a post with all papers summarised. It is impossible to cover everything in just a few bullet points, but I hope my summaries successfully convey the main arguments and key takeaways. Please note that for the final two summaries I made use of the existing GPI paper summaries.

On the desire to make a difference (Hilary Greaves, William MacAskill, Andreas Mogensen and Teruji Thomas)

The bottom line: Preferring to make a difference yourself is in deep tension with the ideals of benevolence. If we are to be benevolent, we should solely care about how much total good is done. In practice, this means avoiding tendencies to diversify individual philanthropic portfolios or to neglect mitigation of extinction risks in favour of neartermist options that seem “safer”.

My brief summary:

  • One can consider various types of “difference-making preferences” (DMPs), where one wants to do good oneself. One example is thinking of the difference one makes in terms of one’s own causal impact. This can make the world worse, e.g. going to great lengths to be the one to save a drowning person even if other people are better placed to do so. This way of thinking is therefore in tension with benevolence.
  • One can instead hope to have higher outcome-comparison impact, where one compares how much better an outcome is if one acts, compared to if one does nothing. This would recommend leaving the rescue to the better-placed rescuers, which seems the correct conclusion. However, the authors note that thinking of doing good in this way can still be in tension with benevolence. For example, one might prefer that a recent disaster were severe rather than mild so that they can do more good by helping affected people.
  • Under uncertainty, DMPs are also in tension with benevolence, in an action-relevant way. For example, being risk averse about the difference one individually makes sometimes means choosing an action that is (stochastically) dominated by another action - essentially choosing an action that is ‘objectively’ worse under uncertainty, with respect to doing good (a simple illustration of dominance follows this list).
  • This can also be the case when people interact - the authors show that the presence of DMPs in collective action problems with uncertainty can lead to sub-optimal outcomes. Importantly they show that the preferences themselves are the culprits. This is also the case with DMPs under ambiguity aversion (ambiguity aversion means preferring known risks over unknown risks).
  • One could try to rationalise DMPs by saying people are trying to achieve ‘meaning’ in their life. But people who exhibit DMPs are generally motivated by the ideal of benevolence. It seems therefore that such people, if they really do want to be benevolent, should give up their DMPs.
  • See paper here.
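To make the dominance point in the third bullet concrete, here is a minimal illustration of what stochastic dominance means. This is my own toy example, not one from the paper:

$$
\begin{array}{l|cc}
 & \text{Heads } (p=0.5) & \text{Tails } (p=0.5) \\
\hline
\text{Act } A & 1 \text{ unit of good} & 2 \text{ units of good} \\
\text{Act } B & 2 \text{ units of good} & 3 \text{ units of good}
\end{array}
$$

For any amount of total good, B gives at least as high a probability of achieving it as A, and a strictly higher probability for some amounts, so B stochastically dominates A. The paper constructs cases in which an agent who is risk averse about their own difference-making nevertheless prefers an act that is dominated in this sense.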

The unexpected value of the future (Hayden Wilkinson)

The bottom line: An undefined expected value of the future doesn’t invalidate longtermism. The author develops a theory to deal with undefined expected values, and this theory leads to an even stronger longtermist conclusion than the one we started with.

My brief summary:

  • Standard arguments for longtermism rely on a large expected value of the future. But there are pretty credible arguments that the expected value of the future is undefined! (A toy example of how an expectation can be undefined follows this list.) In this case, expected value theory is rendered useless and we need to find an alternative theory if we are to choose between different actions.
  • One theory that works in important scenarios is expected utility theory with sensitivity to risk, because it reduces the importance of extreme outcomes in decision-making. But there are compelling arguments for risk neutrality - so can we find a theory that retains risk neutrality?
  • The author builds on previous work to develop an adequate theory of value that does so - one that considers value differences between different actions, and essentially ignores outcomes that are sufficiently unlikely to occur.
  • This theory strongly supports a longtermist conclusion - in fact it says it is infinitely better to improve the far future than the present. The case for longtermism becomes even stronger than what we started with!
  • See paper here.
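To see how an expected value can be undefined at all, here is a toy prospect of my own (the paper’s actual argument concerns our credences about the real future, not this gamble): its possible payoffs grow fast enough, in both the positive and the negative direction, that neither tail can be summed.

$$
X = \begin{cases} +2^{n} & \text{with probability } 2^{-(n+1)} \\ -2^{n} & \text{with probability } 2^{-(n+1)} \end{cases} \qquad n = 1, 2, 3, \dots
$$

$$
\mathbb{E}[X^{+}] = \sum_{n=1}^{\infty} 2^{n} \cdot 2^{-(n+1)} = \infty, \qquad \mathbb{E}[X^{-}] = \infty
$$

Because the positive and negative parts both diverge, the expectation is undefined (an $\infty - \infty$ situation). The worry is that reasonable credences about the long-run future may have this kind of heavy-tailed structure in both directions.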

Longtermism, aggregation, and catastrophic risk (Emma J. Curran)

The bottom line: If one is sceptical about aggregative views, where one can be driven by sufficiently many small harms outweighing a smaller number of large harms, one should also be sceptical about longtermism.

My brief summary:

  • Longtermists generally prefer reducing catastrophic risk to saving the lives of people today. This is because, even though focusing on catastrophic risk only reduces the probability of harm by a small amount, the expected vastness of the future means more good is done in expectation (a rough illustrative calculation follows this list).
  • This argument relies on an aggregative view where we should be driven by sufficiently many small harms outweighing a smaller number of large harms. However, there are some cases where we might say such decision-making is impermissible, e.g. letting a man get run over by a train rather than pulling a lever that would save him but make lots of people late for work. One argument for why it’s better to save the man from death is the separateness of persons - there is no actual person who experiences the sum of the individual harms of being late - so there can be no aggregate complaint.
  • The author shows that a range of non-aggregative views (where we are not driven by sufficiently many small harms outweighing fewer large ones), under different treatments of risk, undermine the case for longtermism. On these views, future people typically have only extremely weak claims to our assistance.
  • See paper here.
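As a rough illustration of the expected value reasoning in the first bullet, with purely hypothetical numbers chosen only to show the structure of the argument: suppose an intervention reduces extinction risk by one in a million, and extinction would forfeit an expected $10^{16}$ future lives.

$$
10^{-6} \times 10^{16} \text{ lives} = 10^{10} \text{ expected lives saved}
$$

Even a tiny probability reduction, multiplied by a vast enough future, dwarfs the (say) $10^{3}$ lives a comparably costed neartermist intervention might save outright. Curran’s point is that this comparison works by aggregating very small probabilistic benefits to very many future people, which is exactly what non-aggregative views refuse to do.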

The case for strong longtermism (Hilary Greaves and William MacAskill)

The bottom line: Humanity’s future could be vast, and we can influence its course. That suggests the truth of strong longtermism: impact on the far future is the most important feature of our actions today.

My brief summary:

  • The expected number of future lives is vast: you only need non-negligible probabilities of humanity surviving until the Earth becomes uninhabitable, spreading into space, or creating digital sentience (an illustrative back-of-the-envelope calculation follows this list).
  • We can predictably improve the far future by steering between persistent states that differ in long-term value. A persistent state is one which – upon coming about – tends to persist for a long time. One way to steer between persistent states is to reduce the risk of premature human extinction - which would therefore be a pressing goal given the vastness of the future.
  • Under a person-affecting view of population ethics, where we care about making lives good but not about making good lives, reducing risks of extinction isn’t important. But there are alternative interventions that would still be good for the long-term future - such as guiding the development of artificial superintelligence (ASI). ASI is likely to be influential and long-lasting, so ensuring it has the right values would be good for the long-term future under all plausible moral views.
  • Uncertainty does not undermine the case for strong longtermism because we also have ‘meta’ options for improving the far future such as conducting further research and investing resources for use at some later time.
  • The authors don’t think that cluelessness about far-future effects of our actions or the fact that strong longtermism might hinge on tiny probabilities of enormous values (fanaticism) undermines the case for strong longtermism. Fanaticism is one of the most pressing objections, but denying fanaticism has implausible consequences and the probabilities might not be so small that fanaticism becomes an issue.
  • As well as strong longtermism being justified on an axiological basis (making a claim about the value of our actions) we can also justify it on deontic grounds (in terms of what we should do). The authors argue for a deontic justification, as improving the far future is far more valuable than focusing on the short-term, can be done at comparatively small cost, and does not violate any serious moral constraints. These conditions mean we should be driven to act by strong longtermism.
  • See longer summary here and paper here.
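To illustrate why the expected number of future lives can be vast even on seemingly modest assumptions, here is a back-of-the-envelope calculation with my own illustrative numbers, not the authors’: suppose there is a 1% chance humanity survives until the Earth becomes uninhabitable in roughly a billion years ($10^{7}$ centuries), with around $10^{10}$ people alive per century.

$$
0.01 \times 10^{7} \times 10^{10} = 10^{15} \text{ expected future lives}
$$

Allowing non-negligible probabilities of space settlement or digital sentience pushes the figure far higher still, which is why even modest influence over the far future can carry enormous expected value.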

The Epistemic Challenge to Longtermism (Christian Tarsney)

The bottom line: If we are happy with expected value theory and don’t mind being driven by very small probabilities, longtermism holds up well. However, if we don’t like being fanatical, the epistemic challenge against longtermism seems fairly serious.

My brief summary:

  • One broad class of strategies for improving the long-term future is “persistent-difference strategies” (PDSs), where one tries to put the world into a better state than it would otherwise have been in, and hopes that this state persists for a long time.
  • But one might think it is too difficult to identify ways to do this. For example, such strategies might be threatened by “exogenous nullifying events” (ENEs), which nullify the effect of our PDSs. Negative ENEs, such as existential catastrophes, put the world in a less good persistent state.
  • If we assume that we will settle star systems one day (cubic growth), then, provided that the (constant) probability of ENEs in the far future is low enough, a typical longtermist intervention should be better than a neartermist one, because the potential value would be huge. The author thinks the probability of ENEs is likely to be low enough for the longtermist intervention to win (a sketch of this kind of calculation follows this list).
  • However, a model in which we don’t spread to the stars and we eventually reach zero growth (steady state model) is more pessimistic as we would need an unrealistically low probability of negative ENEs occurring in the far future for a longtermist intervention to beat a neartermist one. This rests on conservative assumptions though, and if we relax these the case for longtermism becomes more credible again.
  • The case for longtermism is also strengthened once we account for uncertainty. For example, we might consider that cubic growth is very unlikely, and also that it results in only a very small probability of very high value (like a Dyson sphere). Even in this case, despite arguably very small probabilities, the expected value of longtermist interventions still easily beats neartermist ones, because the potential value is huge.
  • So if we are happy with expected value theory and don’t mind being driven by very small probabilities, longtermism seems to hold up well. However, if we don’t like being fanatical, the epistemic challenge against longtermism seems fairly serious.
  • See longer summary here and paper here.
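As a sketch of the kind of calculation involved, under my own simplified assumptions rather than the paper’s exact model: the expected value of a persistent-difference intervention is roughly the probability that it makes the difference, times the value of being in the better state at each future time, discounted by the chance that an ENE has occurred by then.

$$
\mathbb{E}[\text{value}] \approx p \int_{0}^{\infty} v(t)\, e^{-rt}\, dt
$$

Here $p$ is the probability that the intervention puts the world into the better persistent state, $v(t)$ is how much better that state is at time $t$, and $r$ is the (constant) annual rate of ENEs. If $v(t)$ grows cubically (space settlement), the integral is enormous unless $r$ is large, so the longtermist intervention wins easily. If $v(t)=v$ is constant (steady state), the integral collapses to $v/r$, and the longtermist intervention beats a neartermist one only when $r$ is very small, which is why the steady-state model is more pessimistic.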
Comments

Nice post! Consider this a vote for more summaries.

Thanks Elliott! I wasn’t sure how you’d react to these summaries. I’m very happy to continue to make them. It’s also for my benefit so I can easily remind myself what a paper said.

I think I’ll get back in touch with you or Rossa in the near future to see if I can do anything else to help GPI research get heard.

+1 as a vote for more summaries and thanks a lot for doing these! I'll check in with Sven (who's been organising our paper summaries) and we'll get in touch soon.

Thanks Rossa, very happy to keep doing these if you think they’re useful!

I’m conscious of maximising impact and not inadvertently doing harm, so would be happy to speak to anyone at GPI about how to use my time as effectively as possible, even if that means not doing much!

Sounds good!

Thanks for writing this Jack! This is a really helpful collection of summarized papers, and I wish there was more work like it.

Thanks! I am likely to continue to make these summaries and would be happy to share them.

Yeah, this is cool! I recently taught a longtermism MA course, am currently doing an online fellowship version of the course, and have been reading a good amount of GPI's philosophy stuff, so I might be interested in helping out if you'd find that useful.

Hey William. I would welcome some help and you seem highly qualified! I'll message you and perhaps we can work together on this. Thanks for getting in touch!

I find these summaries quite valuable. Thanks for doing them, and hopefully there will be more!

Glad to hear it. I do plan on doing more!

Thanks for this. I really think we should have more paper summaries like this, on a regular basis.

There’s a point that caught my attention:

Longtermism, aggregation, and catastrophic risk (Emma J. Curran)

[…]

This argument relies on an aggregative view where we should be driven by sufficiently many small harms outweighing a smaller number of large harms. However, there are some cases where we might say such decision-making is impermissible, e.g. letting a man get run over by a train rather than pulling a lever that would save him but make lots of people late for work. One argument for why it’s better to save the man from death is the separateness of persons - there is no actual person who experiences the sum of the individual harms of being late - so there can be no aggregate complaint.

I really liked this paper and its whole argument. On the other hand, and here I’m probably even going against the usual deontologist literature, I’m not sure that the problem with these counter-intuitive examples of aggregating small harms / pleasures is aggregation per se, but that in such cases hedonist aggregation tends to conflict with other types of aggregation – such as through a preference-based ordinal social welfare function (for instance, if every individual prefers a slight delay to having someone killed, then nobody should be killed) – or that they might violate something like a Golden Rule (if I wouldn’t want to die to avoid millions of minor delays, then I must not want to let someone die to avoid small delays). I suspect that just saying, like Rawls and Scanlon etc., that aggregation violates “separateness of persons” turns an interesting discussion into a “fight between strawmen”.[1]

  1. ^

    EAs sometimes ridicule people for siding with deontologists in such dilemmas. Rob Wiblin once said to A. Mogensen (during an 80kh podcast interview) that:
    “[...] at least for myself, as I mentioned, I actually don’t share this intuition at all, that there’s no number of people who could watch the World Cup where it would be justified to allow someone to die by electrocution. And in fact, I think that intuition that there’s no number is actually crazy and ridiculous and completely inconsistent with other actions that we take all the time.”
    If you agree with Rob’s statement, ask yourself questions like:
    a)    Would you die to allow millions to watch the World Cup?
    b)    Would you want someone to die to allow you to watch the World Cup - if that’s the only way?
    c)    Would you support a norm (or vote for a law) stating that it is OK to let people die so we can watch the World Cup?
    d) If we were to vote to let Bernard die for us to watch the World Cup, would you vote yes?
    e)    Do you think others would (usually) answer “yes” to these previous questions?
    Nothing here contradicts that we do let people die (though in situations where they voluntarily choose to take some risk in exchange for fair compensation agreed beforehand) for us to watch the World Cup; not even that the world is a “better place” (in the sense that, e.g., there’s more welfare) if people die for our watching the World Cup. It might be the optimal policy, indeed.
    But I think that, if you answered “no” to some of the questions above, you are not entitled to say that this intuition is “crazy and ridiculous”. After all, if you prefer to save a life to watching the World Cup, and if you think others would reason similarly, why do you think that it is “crazy” to state that we should interrupt the show to save one person?
    It’s true that I might be conflating individual preferences and moral preferences / judgment here, but I am not sure about how easy it is to separate them; I’d probably lose any pleasure in watching a match if I knew someone unwillingly died for it – and I would certainly not say “Well, too bad; but by the Sure Thing Principle, it should not affect my preferences – may they have not died in vain”. Just like in the literature about the connection between perception and judgment, particularly when it comes to providing context, I think our individual preferences and mental states are deeply connected to more abstract judgments regarding norms.
    Sorry for this long footnote; since it's not exactly related to the core of the post, I felt it'd be inappropriate to insert it in the main comment.

+1 to a desire to read GPI papers but never having actually read any because I perceive them to be big and academic at first glance. 

I have engaged with them in podcasts that felt more accessible, so maybe  there's something there.

Thanks. Did you find these summaries to be more accessible?

Is there any scope for people to do this on an ad-hoc/crowdsourced basis? I used to do a similar thing for medical AI papers (https://explainthispaper.com), where volunteers would summarise them and then the coordinators would vet, publish and distribute the summaries - is there a similar process that happens here?

I like this idea. One example of it within the EA sphere was the AI Safety Distillation Contest.

I would be interested in a Minimum Viable Product version of what you describe above. Perhaps a group of individuals could each attempt to make a mini summary of a paper/post of interest, holding each other accountable. If it gets sufficient traction, a more robust system as you describe above could be put in place. Would you be interested?

For motivation: Lizka writes a good breakdown of why things like this might be useful in Distillation and research debt.

To my knowledge this doesn’t happen, but it’s not a bad idea. There are quite a few research organisations and it would be great to have easily-digestible summaries all saved in one place.
