Earlier today I posted a link to Andreas Mogensen's paper on "Maximal Cluelessness". I later realized that it was just one of several important papers published yesterday on the Global Priorities Institute website. Rather than posting separate links to each, I'm linking to all of them below (abstracts included when available).
Cotton-Barratt & Greaves, A bargaining-theoretic approach to moral uncertainty
This paper explores a new approach to the problem of decision under relevant moral uncertainty. We treat the case of an agent making decisions in the face of moral uncertainty on the model of bargaining theory, as if the decision-making process were one of bargaining among different internal parts of the agent, with different parts committed to different moral theories. The resulting approach contrasts interestingly with the extant “maximise expected choiceworthiness” and “my favourite theory” approaches, in several key respects. In particular, it seems somewhat less prone than the MEC approach to ‘fanaticism’: allowing decisions to be dictated by a theory in which the agent has extremely low credence, if the relative stakes are high enough. Overall, however, we tentatively conclude that the MEC approach is superior to a bargaining-theoretic approach.
Greaves & MacAskill, The case for strong longtermism
We believe that neglect of the very long-term future is a grave moral error. An alternative perspective is given by a burgeoning view called longtermism, on which we should be particularly concerned with ensuring that the long-run future goes well. In this article we accept this view but go further, arguing that impacts on the long run are the most important feature of our actions. More precisely, we argue for two claims.
Axiological strong longtermism (AL): In a wide class of decision situations, the option that is ex ante best is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.
Deontic strong longtermism (DL): In a wide class of decision situations, the option one ought, ex ante, to choose is contained in a fairly small subset of options whose ex ante effects on the very long-run future are best.
MacAskill & Mogensen, The paralysis argument
Given plausible assumptions about the long-run impact of our everyday actions, we show that standard non-consequentialist constraints on doing harm entail that we should try to do as little as possible in our lives. We call this the Paralysis Argument. After laying out the argument, we consider and respond to a number of objections. We then suggest what we believe is the most promising response: to accept, in practice, a highly demanding morality of beneficence with a long-term focus.
Mogensen, Meaning, medicine and merit
Given the inevitability of scarcity, should public institutions ration healthcare resources so as to prioritize those who contribute more to society? Intuitively, we may feel that this would be somehow inegalitarian. I argue that the egalitarian objection to prioritizing treatment on the basis of patients’ usefulness to others is best thought of as semiotic: i.e. as having to do with what this practice would mean, convey, or express about a person’s standing. I explore the implications of this conclusion when taken in conjunction with the observation that semiotic objections are generally flimsy, failing to identify anything wrong with a practice as such and having limited capacity to generalize beyond particular contexts.
Mogensen, 'The only ethical argument for positive delta'?
I consider whether a positive rate of pure intergenerational time preference is justifiable in terms of agent-relative moral reasons relating to partiality between generations, an idea I call discounting for kinship. I respond to Parfit's objections to discounting for kinship, but then highlight a number of apparent limitations of this approach. I show that these limitations largely fall away when we reflect on social discounting in the context of decisions that concern the global community as a whole.
MacAskill, Effective altruism
MacAskill, When should an effective altruist donate?