Andreas Mogensen and Will MacAskill’s “The Paralysis Argument” (Mogensen & MacAskill 2021) argues that deontologists, by their own lights, ought to be altruistic longtermists.

This sort of work is important for altruists to undertake. The standard arguments for EA and longtermist ideas are most easily made within a consequentialist framework, but most people aren’t consequentialists.[1] A successful appeal to deontological morality could greatly increase EA’s reach and impact.

In my view, Mogensen and MacAskill’s Paralysis Argument is unlikely to do that job. Before I say why, let me explain how the argument works. The thumbnail-sketch version goes as follows: 

  1. Almost all of our ordinary actions have unfathomably many long-term effects.
  2. A substantial share of these effects involve serious harm to innocent people.
  3. Deontologists believe we have powerful moral reasons against seriously harming innocent people, which can be outweighed (if at all) only by achieving comparatively enormous benefits.
  4. Committing oneself to altruistic longtermism offers a way—maybe the only way—for one’s actions to have the enormously beneficial effects required to outweigh the many harms one inevitably causes by doing anything at all.
  5. So deontologists are left with a choice between paralysis (i.e., doing as little as possible in order to avoid causing impermissible harms) and longtermist altruism (i.e., working to realize benefits large enough to outweigh the harms one causes).

My criticisms of the argument will take two main forms. 

First, I’ll argue that deontologists are committed to the conclusion of the Paralysis Argument just in case they’re committed to anti-natalism. Since most deontologists reject anti-natalism, they can and should also reject the Paralysis Argument, and indeed some similar grounds for dissent are available in both cases. 

Second, even if the argument were otherwise correct, commitment to longtermist EA wouldn’t offer the deontologist an escape from paralysis. On the contrary, by Mogensen and MacAskill’s own reasoning, any act that raises the probability of a long human future is almost certainly impermissible. So the Paralysis Argument isn’t just unconvincing; it’s the worst possible recruitment tool for its target audience.

The deontological constraints mentioned in premise 3 form the heart of the Paralysis Argument. Before getting to my criticisms, I’ll explain those constraints in more detail.

Deontology and constraints

Broadly speaking, strict consequentialists think we ought to take whichever action has the best (expected) outcome. Strict deontologists, by contrast, think some acts are morally obligatory or prohibited regardless of their outcomes. Prohibited acts are often taken to include things like harming or killing innocent people.

Thought experiments seem to provide some intuitive evidence for deontology. Should you harvest one innocent person’s organs in order to save three lives? This seems morally unacceptable, as deontology predicts.

But consequentialists can respond with an intuition pump of their own. What if killing an innocent person is the only way to avert a global catastrophe and save billions of lives? It’s much less clear that doing so would be morally wrong. Indeed, it would plausibly be morally required. If so, strict deontology is mistaken.

One popular response to this sort of problem is to abandon the claim that harming innocents is absolutely prohibited, and to suggest instead that there are simply strong moral reasons against harming innocents. In particular, the moral reasons against causing harms are much stronger than the moral reasons in favor of causing benefits of a similar magnitude. This principle is the Harm-Benefit Asymmetry (HBA). 

By embracing the HBA, deontologists can get intuitively satisfactory results for the cases mentioned above. Killing one to save three is impermissible because the benefit is too small in relation to the harm. By contrast, killing one to save billions is permissible because the benefit-harm ratio is sufficiently large.
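
To make the structure of this weighing concrete, here is a minimal toy sketch. Everything in it is my own illustration: the asymmetry factor `K` and the unit "moral weights" are invented placeholders, not quantities that Mogensen and MacAskill or any deontologist actually endorses.

```python
# Toy model of the Harm-Benefit Asymmetry (HBA). Purely illustrative:
# the asymmetry factor K and the unit "moral weights" are invented
# placeholders, not quantities any deontologist actually endorses.

K = 100  # hypothetical asymmetry factor: harms count K times as much as benefits


def hba_permissible(harm: float, benefit: float, k: float = K) -> bool:
    """An act is permissible only if its benefits outweigh its harms
    once the harms are multiplied by the asymmetry factor."""
    return benefit >= k * harm


# Killing one innocent to save three: the benefit is too small relative to the harm.
print(hba_permissible(harm=1, benefit=3))               # False

# Killing one innocent to save billions: the benefit-harm ratio is large enough.
print(hba_permissible(harm=1, benefit=3_000_000_000))   # True
```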

Another standard deontological principle is the Doctrine of Doing and Allowing (DDA). The DDA says that moral reasons against harming are stronger than moral reasons against merely allowing similar harms to occur. This explains why, for instance, it seems impermissible to harvest one person’s organs to save three others, but permissible to forgo saving one drowning person in order to save three. 

Identity-affecting consequences and the unavoidability of harm

I don’t intend to dispute the first two premises of the Paralysis Argument. Mogensen and MacAskill’s defense of these claims introduces some important machinery, though, so I’ll briefly explain this reasoning.

The general idea is this. Many of our everyday actions turn out to have identity-affecting consequences. For instance, driving to the grocery store slightly delays a number of others on the road. Some of those people will go on to conceive children later that day. Thanks to your influence, these conceptions happen at a slightly different time than they otherwise would have—at 8:41 instead of 8:38, say. A delay of a few minutes (or even seconds) virtually ensures that the egg will be fertilized by a different sperm. And so, according to standard views about identity, you cause a different person to be conceived than the one who would have existed otherwise.

Since you (partly) caused this person’s existence, you ipso facto (partly) caused them to have all the experiences they’ll ever have, and all the effects they’ll have on others. The same goes for the experiences of their descendants and their effects on others, and so on.

It's likely that you inadvertently commit a great many identity-affecting acts in your lifetime. And each such act has enormously many consequences. It’s practically certain that a substantial number of these myriad effects will involve harms. As Mogensen and MacAskill write, “Somewhere down the line… we can expect that some people will die young who would otherwise have lived long and healthy lives if only you had not driven to the supermarket to buy milk on that fateful day” (3).

All this, I think, is hard to deny. Most people probably do perform many identity-affecting acts in their lifetimes, and in general these acts can be expected to lead to harms, some percentage of them serious. 

Benefits and the HBA

Of course, in addition to harms, our identity-affecting acts bring about many unforeseen benefits. If your act partly causes Claire to exist, it also partly causes all the joys, triumphs and everyday pleasures she’ll experience, as well as all the good she’ll do for others. Some of this good might involve saving innocent lives or dramatically improving them. 

If identity-affecting acts reliably bring about many benefits as well as many harms, why should they trouble deontologists so much? Because of the Harm-Benefit Asymmetry principle. According to the HBA, our reasons against harming are much stronger than our reasons in favor of benefitting. So an act that causes roughly equal harms and benefits might still be morally impermissible.

This, Mogensen and MacAskill think, is the situation with identity-affecting acts. “Given the usual non-consequentialist asymmetries, the lives saved [by identity-affecting acts, in expectation] do not straightforwardly compensate for the deaths caused, and neither do the benefits of buying groceries. So it seems the non-consequentialist should regard driving to the supermarket, and almost any other identity-affecting action, as immoral” (3).

Of course, many people will still suffer many harms even if you do nothing at all. But in this case you’ll merely be allowing the harms rather than causing them. By the DDA, your reasons against the former are much weaker than your reasons against the latter, so inaction is strongly preferable to action. Hence paralysis—or, more specifically, doing one’s best not to perform potentially identity-affecting acts—seems to be morally required.

Against Asymmetry Strength

The Paralysis Argument relies on an important assumption about the HBA and the consequences of identity-affecting acts. In order for the argument to work, one has to assume that, given the actual ratio of expected benefits to expected harms, the HBA posits a strong enough harm-benefit asymmetry that identity-affecting acts come out impermissible. I’ll call this claim Asymmetry Strength.
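
In the terms of the toy sketch above (and with the same caveat that every number is an invented placeholder), Asymmetry Strength amounts to the claim that the real, unknown ratio of expected benefits to expected harms for an identity-affecting act falls below whatever asymmetry factor the correct HBA posits.

```python
# Asymmetry Strength, in the terms of the toy HBA model above.
# The expected-value figures are pure placeholders: nobody knows the
# actual expected harms and benefits of an identity-affecting act.

K = 100  # same hypothetical asymmetry factor as before


def asymmetry_strength_holds(expected_harm: float, expected_benefit: float, k: float = K) -> bool:
    """Asymmetry Strength says the benefit-to-harm ratio of a typical
    identity-affecting act falls below the asymmetry factor, so such
    acts come out impermissible under the HBA."""
    return expected_benefit < k * expected_harm


# If expected benefits only modestly exceed expected harms, the claim holds
# (identity-affecting acts come out impermissible)...
print(asymmetry_strength_holds(expected_harm=1.0, expected_benefit=5.0))      # True

# ...but if expected benefits vastly exceed expected harms (compare Option 1
# later in the post), Asymmetry Strength fails.
print(asymmetry_strength_holds(expected_harm=1.0, expected_benefit=1000.0))   # False
```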

Mogensen and MacAskill don’t defend Asymmetry Strength, but the principle is far from obviously true. In fact, I think it’s quite dubious. Here, in brief, is an argument against it:

  1. The expected harms and benefits one causes by having a child of one’s own are comparable in size and kind to the expected harms and benefits one causes by performing an identity-affecting act.
  2. Therefore, identity-affecting acts are permitted by the HBA just in case having children of one’s own is permitted.
  3. Thoughtful, scrupulous and well-informed deontologists who have given the issue serious thought overwhelmingly believe that having children of one’s own is morally permissible.
  4. If thoughtful, scrupulous and well-informed deontologists have overwhelmingly come to the same conclusion about some question of deontological morality to which they’ve given serious thought, this conclusion is probably correct.
  5. So having children of one’s own is probably permitted by the HBA.
  6. Hence identity-affecting acts are probably also permitted by the HBA.

In the above, “the HBA” means “the objectively correct version of the HBA”. If no version of the HBA is correct, then “the HBA” means “the principle that best represents deontologists’ views about harm-benefit asymmetries”. (I take it that Mogensen and MacAskill are skeptical of the HBA in any form, so their argument is addressed at deontologists’ commitments and what follows from them.)

Let me briefly defend each of the above premises.

In defense of premise 1. This is true because having a child is essentially an identity-affecting act. By modifying your behavior in small or large ways, you might have caused a different child to exist than the one you in fact create. You have no advance knowledge of which person you’d create under which circumstances, and minimal advance knowledge of what the lives of any of these potential people will be like. But whatever choice you make, you’ll have partly caused all your child’s experiences and actions, and all the experiences and actions of their descendants. 

One might object to this premise by pointing out that one can usually predict or control certain aspects of one’s own child’s future, at least to a greater degree than one can predict or control the fate of a stranger’s child. Assuming you’re healthy, stable and well-off, for example, you can be reasonably confident that your child will inherit some of your good genes and benefit from the privileged upbringing you plan to provide. Under these circumstances, your child might be expected to enjoy a relatively high benefit-to-harm ratio. 

You can’t say the same for the stranger en route to the grocery store. For all you know, he might turn out to be a struggling single parent or an abusive alcoholic. So perhaps conceiving a child isn’t morally equivalent to a generic identity-affecting act.

I have two replies to this objection, one weaker and one stronger. 

First, the weaker reply. It’s doubtful that one can be all that confident about the character of one’s future child’s life. You might be justifiably optimistic about a few things, like your ability to meet basic childhood needs. But most people you encounter in the world—that is, most possible targets of your identity-affecting acts—will probably be able to meet their children’s basic needs too. So there’s no major asymmetry here.

Most of the things your child does, and has done to them, will happen later in life. The biggest harms and benefits will accrue in adulthood, resulting from circumstances around work, close relationships, parenting, health and so on. You can’t meaningfully predict or control much of this. While your epistemic position with respect to your future child might be somewhat better than your position with respect to a stranger’s future child, then, this doesn’t appear to be the sort of enormous qualitative difference one would need to refute premise 1.

In any case, the stronger reply is that none of this really matters. In ten thousand, a million or a billion years, whatever small head start you managed to give your child will be washed out by countless downstream consequences. You have effectively zero information about these consequences and their moral weights. In the long run, then, your epistemic situation with respect to having a child is the same as your epistemic situation with respect to affecting the identity of someone else’s child: in both cases, you can reasonably expect to cause both many benefits and many harms, but you’re completely ignorant about further specifics.

In defense of premise 2. I assume here that there’s no special permission for having children of one’s own regardless of harm-benefit considerations. Perhaps one could argue that there is such a permission, although it’s unclear why this would be true; if you knew in advance that your future child would be Hitler, or that they’d live a life of constant and uncompensated pain, it seems highly plausible that you’d be morally obligated not to have the child on account of the harms you’d thereby cause. 

In defense of premise 3. Anti-natalist deontologists exist, but I take them to be a small minority (of deontologists in general, and deontologists who have thought hard about the ethics of procreation in particular). 

My evidence for this claim is partly anecdotal: I’ve met many people sympathetic to deontology, but none who were avowed anti-natalists, as far as I know. Looking at the philosophical literature suggests the same conclusion. There are relatively few published works defending anti-natalism, a large share of these by David Benatar (e.g. Benatar 2006, 2011, 2022), but many critical responses by a variety of authors. A number of these responses accept the HBA-like asymmetry principle at the heart of Benatar’s anti-natalist argument. So there’s certainly logical space for versions of deontology which accept the HBA but reject anti-natalism, and these versions of deontology seem to be the most popular by a large margin, even among experts on procreation ethics.

In defense of premise 4. The idea is straightforward. Suppose that Q is some question of deontological morality. Since lots of deontologists have thought about Q over the course of many generations, it’s likely that the most plausible positions and arguments have been staked out.[2] Since these deontologists are thoughtful, they’ve carefully considered the merits of the various available options. Since they’re scrupulous, they care a great deal about identifying and acting on the demands of morality. Since they’re well-informed, they possess most of the facts relevant to Q. Under these conditions, it would be surprising if deontologists collectively arrived at the wrong answer. I claim that all this is the case with respect to the ethics of procreation.

Objection I: There’s a strong cultural bias in favor of having children. Isn’t it likely that deontologists have internalized this bias and are just looking for ways to rationalize it?

Reply: There’s also a strong cultural bias in favor of eating meat, but this hasn’t stopped many deontologists from concluding that vegetarianism or veganism is morally required. More generally, ever since Kant argued that lying is always wrong, deontologists have been quite willing to defend the unpopular or counterintuitive conclusions that their principles seem to lead to.

Objection II: If the premise were true, wouldn’t it immediately refute the Paralysis Argument, since most deontologists also don’t believe that everyday actions are impermissible?

Reply: No, because deontologists haven’t been thinking carefully about the Paralysis Argument and its constituent concepts for a long time. Most surely haven’t considered the issue at all; Mogensen and MacAskill’s notion of an identity-affecting act, and their argument that we commit many such acts all the time, aren’t familiar or obvious. So premise 4 doesn’t apply.

Objection III: Sometimes groups of smart people collectively arrive at wrong answers when the relevant facts are particularly complex or hard to ascertain. Perhaps that’s the case here.

Reply: There are admittedly some sources of complexity around these issues. Creating a life means bringing about many harms and benefits, some obvious and some very subtle. Holding the whole picture in mind at one time is hopeless. Even getting a view clear enough to gauge the general tilt of the moral balance seems quite difficult. It certainly seems possible for intelligent and well-meaning people to go wrong here.

Notably, though, this isn’t at all the standard anti-natalist critique of pro-natalism. Nobody is suggesting that mainstream deontology has gone wrong by incorrectly performing a delicate moral calculation. Rather, anti-natalists generally hold that creating a life is impermissible in principle for simple and obvious reasons.[3] If pro-natalists are making a mistake, then, the story would seem not to be one of error in the face of overwhelming complexity.

The last two steps of the argument are simple consequences of the preceding premises. So this concludes my defense. Asymmetry Strength is probably not true.

Why not paralysis?

The above argument, even if sound, is at best a “non-constructive” refutation of the Paralysis Argument. It suggests that the argument’s conclusion is false without explaining where the reasoning goes wrong. From a practical viewpoint, it’s useful for altruists to learn that this appeal to deontologists isn't likely to work. But intellectually speaking, it would be nice to say more if we can.

I won't try to definitively pinpoint the problem with the Paralysis Argument. I’m not a deontologist myself, so I won’t presume to say what the optimal response is for those who have thought harder about the correct formulation of deontological views. Instead I’ll just point out that deontologists have several plausible options for resisting paralysis.

In light of the discussion above, it’s not surprising that some of these options have parallels in deontological responses to anti-natalism. Indeed, the Paralysis Argument shares a great deal of DNA with Benatarian anti-natalism: both rely on the HBA to argue that seemingly innocuous actions lead to impermissible harms. (The difference, of course, is that the former is used as a reductio of neartermist deontology, while the latter is defended in earnest. But it’s not hard to imagine critics presenting anti-natalism as an absurd implication of the HBA if hard-nosed, bullet-biting deontologists hadn’t gotten there first.)

Option 1: Maintain that the benefits caused by identity-affecting acts (usually or often) compensate for the harms, so identity-affecting acts are (usually or often) permissible.

The typical life involves lots of good and lots of bad. What the expected harm-benefit ratio looks like, though, is far from obvious. I don't think it's out of the question that the benefits vastly outmatch the harms in number and size. If they do, then identity-affecting acts may be permissible even if the HBA posits a very large moral asymmetry between harming and benefiting.

How to think about the harm-benefit ratio depends not only on the facts of life, but also on one’s theory of benefits and harms. Some accounts seem relatively friendly to the view under consideration. For instance, causal theories of harm say that harming someone is bringing about things like “pain, early death, bodily damage, and deformation” (Harman 2004, 92). A plausible version of this view might then identify benefitting someone with bringing about freedom from pain, early death, bodily damage and so on. Since many people live long and mostly pain-free lives (which also contain many pleasures), it follows that the benefits caused to such people far outnumber the harms. 

If something like this package of views is correct, identity-affecting acts may be less worrying than Mogensen and MacAskill think. While it’s true that performing such an act involves a small chance of causing uncompensated harms, the probability that the harms one causes will be morally outweighed by the benefits might be much larger. In this case, identity-affecting acts would be permissible even under a demanding version of the HBA.

In the anti-natalism literature, Smuts (2013) defends this sort of approach in reply to Benatar.

Option 2: Replace agent-neutral assessments of harm with first-person assessments.

This is an independently attractive view, since it’s plausible that you yourself are in the best position to assess the goodness or badness of your experiences and circumstances, and by extension to determine whether your life was worth bringing about. By contrast, agent-neutral assessment puts us in the embarrassing position of telling some people that it was impermissible to create them because of the harms they’ve been forced to endure, even if they don’t think much of these harms themselves and are quite happy to exist.

If this view is correct, then identity-affecting acts may cause considerably fewer, less severe or less strongly disfavored harms than the agent-neutral version of the HBA predicts. 

In the anti-natalism literature, defenses of this approach include (Hauskeller 2022) and (Overall 2022). The debate in the theory of well-being between “objective list” views and various kinds of subjectivism is related (Crisp 2021).

Option 3: Adopt different views about events, causation and reasons against harm.

Eric is driving to the bank, which he intends to rob. Unaware of this, you cut Eric off in traffic and delay his arrival by a few minutes. He robs the bank later than he otherwise would have and the robbery unfolds somewhat differently.

There’s a sense in which you caused the robbery that occurred: if you hadn’t cut Eric off, a different sequence of events would have unfolded at a different time. But it’s clear that you didn’t harm the victims of the robbery, and that the delay you caused was permissible. (Even if you’d known in advance what the outcome would be, you’d have had no moral reason to stay out of Eric’s way.) The general upshot seems to be that, if your act merely makes a difference to the manner of occurrence of an event-type that would have occurred anyway, you need not count as having harmed the people affected.

Perhaps deontologists can take a similar tack with respect to identity-affecting acts. The stranger on the road would have conceived a child whether you’d been on the road or not. And that child would have experienced a series of harms and benefits probably not too unlike those experienced by the child whose conception you caused. The fact that the two children are numerically distinct need not imply anything in particular about your moral reasons or responsibility for harm. If causing a numerically different robbery to occur doesn’t entail that you harmed the victims, it’s unclear why causing a numerically different person to exist should entail that you harmed that person.

Coda: the longtermist gambit

As I mentioned at the outset, Mogensen and MacAskill acknowledge one possible way for deontologists to escape paralysis: by dedicating themselves to the good of the long-term future, which may be the only way to realize benefits large enough to compensate for the harms caused by identity-affecting acts.

If the moral framework of the Paralysis Argument is right, though, longtermist strivings aren’t a viable alternative to inaction. By Mogensen and MacAskill’s own lights, whatever longtermist interventions one pursues, the chances of bringing about a very bad long future will be unacceptably high. Paralysis remains the only permissible option.

The basic point here is a familiar one. Any act that increases the probability of long-term human survival also increases the probability of “s-risk” scenarios featuring astronomical amounts of suffering. Given a strong HBA, our moral reasons against taking any such act are incredibly powerful. 

As Mogensen and MacAskill write, “Even if there is some [probability] threshold below which you can disregard the risk of killing one person, it seems absurd to suppose that the same threshold should be used for actions that have the potential to kill one million people” (8). All the more so for actions with the potential to cause galactic-scale suffering over billions of years. If causing any amount of increased s-risk is permitted, the additional risk would have to be infinitesimal compared to the increased chance of an extremely good long-term future.
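
Here is a rough back-of-the-envelope sketch of why that requirement is so demanding. It models the point with straight asymmetry-weighted expected values rather than Mogensen and MacAskill’s own threshold framework, and every number in it is invented purely for illustration.

```python
# Back-of-the-envelope illustration of the coda's point. Every number here
# is invented for illustration; none comes from Mogensen and MacAskill.

K = 100                    # hypothetical harm-benefit asymmetry factor
GOOD_FUTURE_VALUE = 1e15   # stipulated "size" of an astronomically good future
S_RISK_DISVALUE = 1e15     # stipulated "size" of an astronomically bad (s-risk) future


def hba_weighted_score(delta_p_good: float, delta_p_bad: float, k: float = K) -> float:
    """Expected benefit minus asymmetry-weighted expected harm of an act
    that shifts the probabilities of very good and very bad futures."""
    return delta_p_good * GOOD_FUTURE_VALUE - k * delta_p_bad * S_RISK_DISVALUE


# A hypothetical intervention that raises P(very good future) by 1 in a million
# while raising P(s-risk future) by only 1 in 10 million still comes out negative
# once the harm side is weighted by the asymmetry factor: the bad shift would
# need to be smaller than the good shift by more than a factor of K before the
# act could come out acceptable on this toy accounting.
print(hba_weighted_score(delta_p_good=1e-6, delta_p_bad=1e-7))   # -9.0e+09 (negative)
```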

It’s beyond belief that any current or foreseeable longtermist intervention has such a favorable risk profile. Indeed, if the Paralysis Argument is right, then the potential for s-risks combined with the HBA presumably imply that we should avoid future-prolonging acts at all costs. 

As I've argued, deontologists needn't accept all the assumptions the Paralysis Argument requires. But for those who do accept them, Mogensen and MacAskill haven’t devised a promising longtermist recruitment tool: they’ve erected a KEEP AWAY sign in red neon, visible from miles out. 

If EA wants to reach deontologists, it needs to do better. I'm not sure it can do worse.

References

Benatar, David. 2006. Better Never to Have Been: The Harm of Coming Into Existence. New York: Oxford University Press.

Benatar, David. 2011. “No life is good.” The Philosophers’ Magazine 53, 62-66.

Benatar, David. 2022. “Misconceived: Why these further criticisms of anti-natalism fail.” Journal of Value Inquiry 56, 119-151.

Crisp, Roger. 2021. “Well-being.” In The Stanford Encyclopedia of Philosophy (Winter 2021 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2021/entries/well-being/>.

Harman, Elizabeth. 2004. “Can we harm and benefit in creating?” Philosophical Perspectives 18, 89-113.

Hauskeller, Michael. 2022. “Anti-natalism, pollyannaism and asymmetry: A defence of cheery optimism.” Journal of Value Inquiry 56, 21-35.

Miller, Lantz Fleming. 2021. “Kantian approaches to human reproduction: Both favorable and unfavorable.” Kantian Journal 40, 51-96.

Mogensen, Andreas and William MacAskill. 2021. “The paralysis argument.” Philosophers’ Imprint 21, 1-17.

Overall, Christine. 2022. “My children, their children, and Benatar’s anti-natalism.” Journal of Value Inquiry 56, 51-66.

Smuts, Aaron. 2013. “To be or never to have been: Anti-natalism and a life worth living.” Ethical Theory and Moral Practice 17, 711-729.


[1] According to the 2020 PhilPapers survey, for instance, about 30.5% of philosophy faculty at leading institutions accept or lean toward consequentialism, while 32% accept or lean toward deontology. It’s likely that deontology is even more prevalent in the general population, since most people are religious, and most mainstream religions prescribe deontological ethical systems.

[2] Discussion of the permissibility of procreation in a deontological framework goes back at least to Kant; cf. (Miller 2021).

[3] Namely, for Benatar, because an absence of benefits is morally indifferent while an absence of harms is morally good, so a life involving any harms at all is worse than nonexistence.

Comments

xuan:

As someone who leans deontological these days (and contractualist in particular), I really appreciated this post! 

Honestly quite baffled by the original argument, and it definitely makes me less inclined towards longtermist philosophy and the thinking associated with it. To me it's clear that identity-affecting acts do not cause harm in a way that one is responsible for, in the same way that unintentionally delaying a robbery does not cause harm in a way that one is responsible for, so the paralysis argument feels extremely weird to me.

I think there are good arguments for doing a lot more than we currently do to prevent the foreseeable suffering of future people, but this is not one of those arguments, much less an argument for something like strong longtermism.

Do you think working to reduce s-risks instead of extinction risks is compatible with the arguments they make? That would still count as longtermist.

Another neglected way out is to precisify the notion of causality used in the DDA (and in ordinary language) so as to include conceptions of explanation and credit attribution, thus exempting us from liability for random effects. MacAskill and Mogensen come close to contemplating this point in section 3.3, but then they focus on the Arms Trader example, which is close to a strawman here, and conclude:

We grant that it sometimes sounds wrong to say that you do harm to another when you initiate a causal sequence that ends with that person being harmed through the voluntary behavior of some other agent. But so far as we can see, this is entirely explained in terms of pragmatic factors like those discussed earlier: that is, in terms of conversational implicatures that typically attach to locutions associated with the ‘doing’ side of the doing/allowing distinction.

The problem with the voluntary behavior of others is not that it would necessarily exempt you from responsibility, but that it would often make your action causally irrelevant. The claim “Agent X’s action a caused event e” is ambiguous between two readings:

  (i) X’s action belongs to the causal chain that led to e, and
  (ii) X’s action, in addition to (i), increased the probability of e happening.

(i) is not a very useful notion of causality – basically every state of the world causes the next states (in the corresponding lightcone), because every event has repercussions.

Thus when we say that carbon emissions (and the climate change they caused) caused the floods in Lisbon in the last few days, we are not stating the obvious fact that, because of the chaotic nature of long-term climate trends, any different world history would have implied distinct rain patterns. We are rather saying that carbon emissions (and global warming) made such extreme events more likely. Also, this is not straightforwardly connected to predictability: something might be hard to predict but easy to explain in hindsight.

It’s kind of intuitive that we normally use a more refined notion of causality in practical reasoning; so, though we might blame an arms trader, we don’t even consider blaming all the supply chains that made some murders possible. Thus, when we say that all of my actions will cause the identity of some future people, we are talking about (i). But the relevant notion of causality for the DDA is (ii); in this sense, I may cause the identity of some future people by making some genetic pools more likely than their alternatives (for instance, by having kids, by working with fertilization, etc.). So my mother’s school teacher didn’t cause my birth in any way; my mother’s marrying my father did, though.

Thanks for writing this!

Another possible response is that, ahead of time, each possible contingent individual may have an extraordinarily weak claim against you for possible harms to them, because they almost certainly won't exist. But I'd guess this isn't enough to capture the ex ante badness of bringing into existence an unknown individual who will probably have a bad life (e.g. factory farmed animals), so one of your other options or something else seems necessary anyway. Also, it may lead to some pretty odd dynamic inconsistency or other seemingly irrational behaviour, like trying to avoid finding out who will be harmed in cases of many individuals at small individual risk of harm but large collective risk of at least one individual being harmed.
