The Total Collapse of the Moral Project (and What That Means for Effective Altruism)

Introduction:

Has the “moral project” – the attempt to find objectively right actions and build a better world – collapsed under scrutiny? Recent critiques suggest that our moral decision-making is fundamentally undermined by deep uncertainty about consequences, paradoxes of infinite outcomes, and the realization that morality itself might be an evolved strategy rather than a metaphysical truth. This article explores these challenges and their implications for Effective Altruism (EA). We draw on philosophical arguments and empirical research to examine: (1) the epistemic objection (or “cluelessness”) – the idea that we’re effectively flying blind when predicting the results of our actions; (2) the infinite universe problem for consequentialist reasoning; (3) morality’s origins as an evolutionary strategy rather than divine or intrinsic truth; (4) prospects for engineering morality in the future via technology; and (5) the collapse of moral bindingness – why moral realism struggles to find a foundation in a post-theistic world. We’ll then consider what all this means for the Effective Altruism movement.

1. Epistemic Objection and “Cluelessness” about Consequences

One major challenge to moral decision-making is our severe uncertainty about long-term consequences. Philosophers like James Lenman and Hilary Greaves argue that even if we try to do good, we remain clueless about an action’s ultimate impact on the far future. Small causes can have butterfly-effect ramifications that swamp the immediate effects. For example, Lenman offers a striking thought experiment: imagine a doctor in the distant past saved the life of a pregnant woman, who then went on to be an ancestor of Adolf Hitler. An act that seemed clearly positive (saving a life) would, through the ripple of history, lead to tremendous evil – “the seemingly-good act turned out to have actually-disastrous overall consequences”. This isn’t just a fanciful scenario; it generalizes to nearly any action. As Greaves observes, even a trivial choice like whether to drive to work today can unpredictably change who will exist in the future (by slightly shifting the timing of other people’s interactions and conceptions), which in turn “snowball[s] into an ever-more-different future” that we cannot forecast.

The upshot is that we have no reliable idea of the total consequences of our actions, positive or negative. Short-term outcomes are overwhelmed by unforeseeable long-term effects. If a moral theory (like classical utilitarian consequentialism) says the right action is the one with the best overall consequences, then we face a disturbing implication: “we have no idea what we really ought to do—our reasons for action lie beyond our epistemic grasp.” In Lenman’s formulation, this is the Epistemic Objection to consequentialism: a theory demanding we maximize overall good fails to be action-guiding if we can’t know the future. An ethical theory that only gives us reasons we can’t possibly discern would be practically useless.

Effective Altruists concerned with impact – especially those focused on the long-term future – grapple with this cluelessness. Some longtermists respond by seeking “robustly positive” interventions where the sign of effects remains positive across many scenarios. For instance, reducing existential risks (events that could permanently curtail humanity’s future) is argued to have positive expected value despite our uncertainty. However, even this response concedes a narrowed ambition: outside of a few clear cases like preventing extinction, we might have no clue how to compare the overall good of different actions. It’s unnerving that saving a life now, helping the global poor, or other near-term good deeds could conceivably have unknowably bad long-term effects that outweigh the good. The epistemic worry undermines confidence in any straightforward “do X because it will lead to the best outcome” claim, demanding much more humility about our ability to do good. As we’ll see later, this pushes Effective Altruists to rely on heuristics, risk-neutral frameworks, or focus on worst-case avoidance – because the direct calculation of expected value becomes highly uncertain when future consequences are opaque.
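To make the worry concrete, here is a minimal toy simulation – my own illustration with made-up numbers, not anything drawn from Lenman or Greaves – in which a well-evidenced near-term benefit is paired with long-run ripple effects of unknown sign and much larger possible magnitude. The headline expected value still looks positive, but roughly half of the simulated futures end up net-negative, which is exactly the sense in which a single EV number conceals our cluelessness:

```python
# Toy model of cluelessness: a known short-run benefit plus a long-run term
# whose sign and size we cannot predict. All numbers are illustrative assumptions.
import random

def simulate_total_value(n_runs=100_000, seed=0):
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        near_term = 1.0                     # well-evidenced short-run good (arbitrary units)
        long_term = rng.gauss(0.0, 100.0)   # ripple effects: unknown sign, far larger spread
        totals.append(near_term + long_term)
    mean = sum(totals) / n_runs
    share_negative = sum(t < 0 for t in totals) / n_runs
    return mean, share_negative

mean, share_negative = simulate_total_value()
print(f"estimated expected value: {mean:+.2f}")                          # close to +1
print(f"simulated futures that are net-negative: {share_negative:.1%}")  # roughly 50%
```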

2. The Infinite Universe Problem (Infinite Ethics and Pascal’s Mugging)

Even if we set aside cluelessness and assume we could roughly predict outcomes, a different theoretical bombshell awaits consequentialist reasoning: what if the universe (or multiverse) is infinite? In an infinite cosmos, traditional ways of aggregating good and bad outcomes start to break down. Nick Bostrom highlights this as the problem of “infinitarian paralysis.” He notes that standard aggregative utilitarian theories would imply that “if the world is canonically infinite then it is always ethically indifferent what we do”. Why? Because any action might lead to an infinite amount of good and an infinite amount of bad in total (there being infinitely many beings or future generations affected). If you try to sum up the net impact, you get ∞ – ∞, an undefined quantity. In effect, every action could have the same infinite expected value, making them impossible to rank. Bostrom points out the horrific implication: if literally nothing changes the total infinite sum, then it would be “ethically indifferent” whether we cause another Holocaust or prevent one. A moral view that renders saving lives versus committing atrocities as value-equivalent has clearly hit rock bottom absurdity – “if any non-contradictory normative implication is a reductio ad absurdum, this one is”, as Bostrom dryly observes. And importantly, this is not a purely abstract scenario; modern cosmology suggests the universe may well be spatially or temporally infinite, meaning any ethical theory unable to handle infinite contexts is on shaky ground. An “actual world” that is infinite in size (endless galaxies, infinite future time, etc.) is plausible enough that ethical theories can’t just ignore it.

This leads into the field of infinite ethics in decision theory. Researchers have attempted fixes like comparing infinities using advanced mathematics (e.g. non-standard analysis or lexical ordering of infinities), but no consensus solution exists. The measure problem (how to compare two infinite sets of welfare) might mean that a strict utilitarian calculus simply has no well-defined answer in an infinite universe. For Effective Altruists, this could mean that if we live in an effectively boundless world, our usual cost-benefit analyses might all be undefined – a form of “paralysis.” One proposed way out is to focus on differences: for example, “maximize the probability of avoiding an infinite bad and achieving an infinite good”, and only then use finite comparisons as tiebreakers. Bostrom calls one such approach “Extended Decision Rule (EDR)”, which first compares the chance of infinite stakes, then falls back to finite expectations. This is complex territory, but the very need for exotic decision rules underscores how standard moral reasoning may falter in an infinite context.
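To illustrate the lexical structure of such a rule – a rough sketch of the general idea rather than Bostrom’s formal definition, with invented probabilities – one can rank actions first by their net chance of infinite outcomes and only then by finite expected value:

```python
# Sketch of an EDR-style lexical comparison: infinite stakes dominate,
# finite expected value only breaks ties. All figures are hypothetical.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    p_infinite_good: float   # chance of securing an infinitely good outcome
    p_infinite_bad: float    # chance of causing an infinitely bad outcome
    finite_ev: float         # ordinary finite expected value

def ranking_key(a: Action):
    # Tuples compare lexically: the infinite-stakes term is considered first.
    return (a.p_infinite_good - a.p_infinite_bad, a.finite_ev)

actions = [
    Action("fund local project",     p_infinite_good=0.0,  p_infinite_bad=0.0,  finite_ev=50.0),
    Action("reduce extinction risk", p_infinite_good=1e-6, p_infinite_bad=0.0,  finite_ev=5.0),
    Action("reckless gamble",        p_infinite_good=1e-6, p_infinite_bad=2e-6, finite_ev=500.0),
]

best = max(actions, key=ranking_key)
print(best.name)   # "reduce extinction risk": infinite stakes trump a larger finite EV
```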

A related quandary is Pascal’s Mugging, a famous thought experiment about extremely low-probability, extremely high-value outcomes. It demonstrates how even without literal infinities, an unbounded utility function leads to absurd conclusions. In Pascal’s Mugging, a stranger claims they will bring astronomical benefits (like 10^50 happy lives in the future) if you just hand them $100 now – though there’s only a 0.000001% chance they’re telling the truth. Traditional expected value calculation says the expected payoff of giving the $100 is 0.000001% × 10^50, which is 10^42 units of utility – hugely positive – so it seems you ought to hand over the money. Worse, if someone upscales the reward to an even more immense number (say googolplex lives), then no matter how tiny the probability, the expected value can be made arbitrarily large. “Some very unlikely outcomes may have very great utilities, and these utilities can grow faster than the probability diminishes,” as one summary puts it. “Hence the agent should focus more on vastly improbable cases with implausibly high rewards; this leads first to counter-intuitive choices, and then to incoherence as the utility of every choice becomes unbounded.” In other words, a naive EV-maximizer gets “mugged” by outlandish hypothetical promises: you’d give away your resources for practically any wild claim, as long as it dangles a sufficiently huge payoff. This is deeply unsettling for effective altruists who often rely on expected value reasoning to compare interventions. It suggests a need to temper pure EV calculations with additional principles (like bounding the utility, using heuristics, or requiring a minimum probability floor) to avoid being swayed by one-in-a-trillion chances of enormous benefit. Otherwise, cause-prioritization could be hijacked by speculative scenarios (a classic example might be someone claiming that funding their project could have a 0.000…1% chance of creating a utopia for exponentially many future beings – should that trump all more modest but likely benefits? Most would say no, yet a strict EV framework struggles to justify why not).
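The arithmetic, and two of the guardrails just mentioned, can be sketched as follows; the probability floor and the bounded utility function below are purely illustrative examples of the kind of fix discussed, not canonical solutions:

```python
# Toy numbers for Pascal's Mugging, plus two illustrative guardrails.
import math

p_true = 1e-8            # the mugger's claim: a 0.000001% chance of being honest
promised_lives = 1e50    # promised payoff in happy lives
cost = 100               # dollars handed over now

naive_ev = p_true * promised_lives            # 1e42 "utility units": dwarfs the $100 cost
print(f"naive EV of paying: {naive_ev:.1e}")

# Guardrail 1: a probability floor -- ignore claims below some credibility threshold.
PROB_FLOOR = 1e-6
floored_ev = p_true * promised_lives if p_true >= PROB_FLOOR else 0.0
print(f"EV with probability floor: {floored_ev}")

# Guardrail 2: bounded (here, logarithmic) utility in the size of the payoff,
# so utility cannot grow faster than the probability shrinks.
bounded_ev = p_true * math.log(1 + promised_lives)
print(f"EV with bounded utility: {bounded_ev:.2e}")   # tiny: nowhere near worth $100
```

Either guardrail blocks the mugging, but each has a philosophical cost of its own (the floor is arbitrary, and bounded utility discounts genuinely vast goods), which is why this remains an open debate rather than a solved problem.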

In sum, the infinite (or astronomically large) outcome problems show another collapse in our moral reasoning framework: if reality is infinite or if we permit unbounded utilities, our calculations yield indeterminate or absurd prescriptions. For EA, this underlines the importance of methodological caution: one must incorporate ideas like priors, diminishing returns, or decision-theoretic rules that avoid these pitfalls (for instance, by placing skepticism on extremely speculative high-value claims, or adopting satisficing-type criteria instead of pure maximizing). It’s a reminder that bigger numbers aren’t always better when they come with overwhelming uncertainty or logical paradoxes.

3. Morality as an Evolutionary Strategy (Not a Metaphysical Truth)

Is morality something we discover (like mathematicians discovering truths) or something we invent/adapt (like a tool)? A growing body of research in evolutionary anthropology, game theory, and cultural psychology suggests that morality is best understood as a set of adaptive strategies rather than external commands. In this view, our moral intuitions and norms evolved (biologically and culturally) because they helped our ancestors solve recurrent social problems. As Oliver Scott Curry and colleagues put it, “morality is best understood as a collection of biological and cultural solutions to the problems of cooperation recurrent in human social life.” Rather than being arbitrary, human moral systems consistently encourage behaviors that foster cooperation and social harmony – e.g. caring for family, forming loyal coalitions, reciprocating favors, being brave in group defense, deferring to authority to manage conflict, distributing resources fairly, and respecting others’ property rights. These behaviors correspond to what Curry calls seven types of morality (kin altruism, group loyalty, reciprocity, heroism, deference, fairness, and property rights), each tied to a distinct form of cooperation. Crucially, these appear to be human universals: in a survey of 60 societies, Curry et al. found that these seven cooperative behaviors were almost always considered morally good wherever they arose. This cross-cultural uniformity suggests that, regardless of culture or religion, people everywhere recognize and value forms of cooperation that solve similar social dilemmas. Morality, in essence, “draws on the theory of non-zero-sum games to identify distinct problems of cooperation and their solutions”. Behaviors that yield mutual benefit or mitigate conflict tend to get encoded as moral “rules,” because groups that follow those rules prospered over evolutionary time.

From an evolutionary perspective, then, “morality is a form of cooperation”. Michael Tomasello, a developmental and comparative psychologist, argues that human morality emerged as humans became ultra-cooperative: we evolved instincts for sympathy, fairness, and norm-enforcement to work better in groups. “Human morality arose evolutionarily as a set of skills and motives for cooperating with others,” Tomasello and Vaish write, which then develop in each child through both natural maturation and cultural learning. In young children you can even see a two-step development: first, toddlers exhibit “second-personal” morality (helping and sharing with specific others, showing empathy), and later, as they become cognizant of group norms, they adopt “agent-neutral” morality (abstract rules and an impartial sense of right and wrong). This ontogeny might recapitulate phylogeny: our species first evolved basic social instincts for partnership and gradually layered on group-wide norms as we formed larger societies. Evolutionary game theory gives a theoretical backbone to this, showing how strategies like reciprocal altruism, kin selection, and hawk-dove peace deals can become stable and prevalent in a population. Over thousands of generations, moral emotions (like guilt, gratitude, righteous anger) likely arose as internal enforcement mechanisms, locking in those cooperative strategies by rewarding us for altruism and punishing us (psychologically) for cheating or harming others.
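As a toy illustration of that game-theoretic backbone – standard textbook payoffs, nothing specific to the authors cited – the sketch below runs discrete replicator dynamics in which reciprocators playing tit-for-tat in a repeated prisoner’s dilemma spread through a population of unconditional defectors:

```python
# Reciprocal altruism in a repeated prisoner's dilemma with replicator dynamics.
# Payoffs are the standard textbook values; the starting share is an assumption.

R, T, S, P = 3, 5, 0, 1          # reward, temptation, sucker's payoff, punishment
ROUNDS = 20                      # interactions per pairing

def play(strat_a, strat_b, rounds=ROUNDS):
    """Average per-round payoff to strat_a when repeatedly paired with strat_b."""
    payoff = {('C', 'C'): R, ('C', 'D'): S, ('D', 'C'): T, ('D', 'D'): P}
    total, last_a, last_b = 0, 'C', 'C'
    for _ in range(rounds):
        move_a = last_b if strat_a == 'TFT' else 'D'   # TFT copies opponent; ALLD always defects
        move_b = last_a if strat_b == 'TFT' else 'D'
        total += payoff[(move_a, move_b)]
        last_a, last_b = move_a, move_b
    return total / rounds

def step(x_tft):
    """One discrete replicator update for the population share of tit-for-tat."""
    f_tft  = x_tft * play('TFT', 'TFT')  + (1 - x_tft) * play('TFT', 'ALLD')
    f_alld = x_tft * play('ALLD', 'TFT') + (1 - x_tft) * play('ALLD', 'ALLD')
    mean_f = x_tft * f_tft + (1 - x_tft) * f_alld
    return x_tft * f_tft / mean_f

x = 0.2                          # start with 20% reciprocators
for _ in range(30):
    x = step(x)
print(f"share of reciprocators after 30 generations: {x:.2f}")   # approaches 1.00
```

With these payoffs, once reciprocators are common enough to meet each other reasonably often, cooperation pays better than defection and takes over – the same logic, in miniature, that the evolutionary accounts above appeal to.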

Anthropologists and cultural evolutionists like Joseph Henrich and Michael Muthukrishna extend this story into the realm of cultural norms. Humans, uniquely, don’t rely on genetic evolution alone – we transmit huge amounts of learned information across generations. This cultural transmission can itself undergo a selection process (often dubbed “cultural group selection”). Societies with norms that encourage trust, coordination, and suppression of selfishness tend to outcompete societies riven by mistrust and conflict. Over time, you’d expect a spread of moral systems that sustain larger-scale cooperation. Henrich’s research, for example, has shown that communities with more market integration or world religions (which often encourage prosocial behavior with big gods monitoring actions) tend to have stronger norms of fairness and charity. Different environments can produce different moral emphases – but always tethered to the underlying function of solving social challenges. In other words, morality is an evolved equilibrium: a set of norms that proved stable and advantageous under certain conditions (much like a successful strategy in an evolutionary game). This view undermines the idea that morality comes from some transcendent source; instead, it naturalizes morality as an emergent property of human social evolution. As one review notes, there is now a “thriving interdisciplinary endeavour” investigating morality across anthropology, psychology, neuroscience, etc., and a common theme is “the function of morality is to promote cooperation.”

For Effective Altruists, recognizing morality’s evolutionary basis carries a few implications. First, it can be humbling: our moral intuitions (what we feel is right or wrong) are not infallible or sacrosanct – they are adaptive heuristics shaped for ancient environments. They might be prone to systematic biases (e.g. favoring kin over strangers, or present over future people) that made sense historically but conflict with our current ethical aspirations (such as impartial altruism). EA often emphasizes “overcoming bias” in our altruistic efforts – here we see that some biases are literally built into our moral psychology by evolution. Second, it suggests that moral progress (or at least moral change) is possible by leveraging the same forces that shaped morality originally. Just as cultural evolution once expanded our circle of concern (e.g. the rise of norms against slavery or more recently against factory farming), we might guide future moral evolution deliberately (more on this soon). Lastly, this perspective can be liberating: if there is no mysterious external moral truth written into the fabric of the universe, then humanity has the responsibility – and freedom – to decide what values we want to hold and to shape our society accordingly. Morality becomes a project, one of “creating stable cooperative equilibria that reflect our values and compassion.” That project can be informed by science: by understanding what kinds of norms lead to human flourishing and peaceful cooperation, we can try to implement those norms more widely. In short, knowing that morality is an evolved strategy doesn’t debunk morality – it clarifies what morality is, and it invites us to use this knowledge to possibly improve the moral systems we live by.

4. Future Engineering of Morality – AI, Neuroscience, and Genetic Interventions

If morality is an evolved trait, might we someday directly engineer moral behavior or moral thinking, in the same way we engineer other capacities? This prospect is both exciting and fraught with ethical controversy. However, researchers in fields like neuroethics, AI alignment, and bioengineering have begun exploring whether deliberate moral enhancement is feasible – and whether it could help address global risks. Ingmar Persson and Julian Savulescu, for instance, argue that our natural moral dispositions might be inadequate for the challenges of modern technology, and that we have a “duty to try to develop and apply” biomedical means of moral enhancement to avoid catastrophe. They suggest that methods such as pharmaceuticals, neurological interventions, or genetic modifications could potentially “strengthen the central moral drives of altruism and a sense of justice.” In their view, traditional moral education and institutions might not be enough to prevent issues like climate change, global conflict, or misuse of powerful tech – we may need a boost in our capacity for empathy and fairness to meet the demands of a globalized world.

What might moral bio-enhancement look like in practice? Some proposals remain speculative, but they are grounded in current science. For example, experiments have shown that certain hormones and brain modulators can influence moral behavior. One line of research has looked at oxytocin, the “bonding hormone”: administering oxytocin can increase people’s trust, generosity, and empathy (at least toward in-group members). This has led to suggestions that in controlled settings, oxytocin or similar compounds, combined with therapy, might help inculcate pro-social attitudes. Other studies point to serotonin – a neurotransmitter linked to mood and social behavior. Increasing serotonin (e.g. via SSRIs) has been found to reduce aggression and make people more averse to harming others, as well as somewhat more inclined toward fair outcomes. Even testosterone, often associated with aggression, when modulated, might affect dominance behaviors and fairness (some research indicates high testosterone can reduce empathy, so lowering it might do the opposite). Beyond chemicals, there’s interest in brain stimulation techniques: using devices like transcranial magnetic stimulation (TMS) or transcranial direct current stimulation (tDCS) to directly influence neural activity. Remarkably, neuroscientists have shown that you can literally alter a person’s moral judgments by targeting specific brain regions. For instance, a team at MIT used TMS to disrupt the right temporo-parietal junction – an area of the brain crucial for thinking about others’ intentions – and found that this impaired subjects’ ability to make normal moral judgments (the subjects became more outcome-focused, judging attempted harms as less blameworthy). As one researcher noted, “to be able to apply a magnetic field to a specific brain region and change people’s moral judgments is really astonishing.” It underscores that our sense of right and wrong is rooted in brain processes that can, in principle, be manipulated.

While these interventions are not yet the stuff of everyday life, they foreshadow a future where moral cognition is malleable. Imagine, for example, a technology that could reliably suppress impulses of violent anger or enhance feelings of empathy toward distant strangers. Could that be used to reduce crime or increase charitable action? Some have even envisioned an AI-based “God Machine” – a concept where an advanced AI monitors human behavior and intervenes (perhaps via brain-linked devices or other means) to prevent us from acting on terribly immoral intentions, essentially enforcing moral behavior for the greater good. This of course raises enormous ethical questions about freedom, coercion, and who defines “moral.”

Effective Altruists are particularly interested in how emerging technologies (like advanced AI or gene editing) might influence the future of moral progress. AI alignment researchers ask not only “How do we make AI obey human values?” but also “Which values should we have AI maximize, and are our present values good enough?” There’s an interplay between AI and human morality: a superintelligent AI might be used to nudge human behaviors at scale (for better or worse), or conversely, AI might force us to confront the inconsistencies in our moral codes when we try to program them. For example, if we attempt to encode a moral rule like “do no harm,” we immediately face all the classic dilemmas (what about harming one to save five? what counts as harm?). In doing so, we might need to explicitly decide on trade-offs that human societies normally handle via messy, evolved intuitions. Some ethicists suggest this process could actually clarify our moral thinking, essentially using AI as a tool to simulate consequences and test our principles for coherence. On the biotechnological front, if CRISPR or other genetic tools eventually allow us to select for personality traits, should we select for more altruistic, compassionate dispositions? Would it be ethical to ensure the next generation has, say, a lower propensity for psychopathy or higher baseline empathy? These are no longer pure sci-fi questions – preliminary research in behavioral genetics has identified some correlates of pro-social temperament, though it’s early days for any “morality gene.”
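As a schematic example of the point about explicit trade-offs – entirely hypothetical, not a description of any real alignment method – consider two tiny evaluators, one encoding a hard “do no harm” constraint and one simply aggregating lives saved; on a one-versus-five case they disagree, so whoever writes the code is forced to decide which consideration dominates and by how much:

```python
# Hypothetical illustration: two ways of encoding "do no harm" disagree,
# forcing an explicit trade-off decision that intuition normally leaves implicit.

def deontic_evaluator(option):
    """Hard constraint: never permit an option that directly harms anyone."""
    return option["directly_harmed"] == 0

def aggregative_evaluator(option):
    """Score = lives saved minus lives directly harmed, regardless of how."""
    return option["saved"] - option["directly_harmed"]

options = [
    {"name": "do nothing",      "saved": 0, "directly_harmed": 0},
    {"name": "divert the harm", "saved": 5, "directly_harmed": 1},
]

for opt in options:
    print(opt["name"],
          "| permitted by the rule:", deontic_evaluator(opt),
          "| aggregate score:", aggregative_evaluator(opt))

# The rule-based evaluator permits only "do nothing"; the aggregative one prefers
# "divert the harm". A programmed system cannot leave that conflict unresolved.
```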

It’s worth noting that not everyone agrees moral bioenhancement is desirable or even safe. Critics worry about the loss of autonomy (a “Brave New World” scenario of chemically complacent citizens) and the risks of unintended side effects in such interventions. There’s also the philosophical question of what it means to be “more moral” – whose standard do we use? Persson and Savulescu focus on increasing altruism and justice, but morality is pluralistic and context-dependent; boosting one aspect (say, empathy) could have downsides (too much empathy might bias you towards helping individuals in need nearby over pressing but faceless problems like climate change). These are important debates. For our purposes, the key point is that morality is increasingly seen as something we might deliberately shape using science and technology. The traditional view that moral character can only be shaped by upbringing and personal choices is being challenged by evidence that biology can influence ethics, and vice versa.

For Effective Altruism, which is very future-oriented, moral enhancement could be a leverage point: if we could figure out how to foster a wiser, more compassionate humanity (either through better institutions, better education, or someday, safe bio/neuro-tech), that might dramatically increase the amount of good in the world. Some proponents even claim global catastrophic risks might demand moral enhancement – for example, to ensure international cooperation to avoid wars or to get populations to care about long-term ecological impacts, we might need people on average to have a stronger moral inclination toward global welfare. This is a provocative argument: it essentially says our moral evolution hasn’t kept pace with our technical power, and we risk disaster unless we close that gap artificially. Whether or not one agrees, it reframes the moral project as not just understanding ethics, but potentially upgrading our ethical capacities. The collapse of the “moral project” in its traditional sense (finding eternal truths) might give way to a new project: improving how humans enact the ethical values (like compassion and fairness) that we already more or less agree on.

5. The Collapse of Moral Bindingness and the Fate of Moral Realism

Underpinning all the above is a more fundamental philosophical crisis: in a secular, naturalistic worldview, what, if anything, can make moral claims binding or true? Moral realism is the position that there are objective moral facts – facts that hold regardless of what anyone happens to value or believe, similar to mathematical truths. Historically, many cultures grounded moral truths in God or religion (a divine command theory: something is good because God wills it, etc.). With the decline of theism among philosophers and an increased understanding of our evolutionary history, that foundation eroded. Can we find a new foundation, or do we have to accept that in the end “morality is made up by humans”? This debate is vast, but let’s highlight a few key arguments relevant to our discussion.

J.L. Mackie famously argued for what he called the “error theory” of ethics: the belief in objective values is just a widespread mistake. He opens his book Ethics: Inventing Right and Wrong with the blunt claim, “There are no objective values.” All moral judgments that assume such objective value are therefore false. Mackie supported this with two main points: the argument from disagreement (the sheer variation in moral codes across times and cultures is better explained by them being subjective or socially constructed, rather than everyone trying to latch onto the same objective truth), and the argument from queerness. The latter is especially influential: if objective moral properties existed, they would be very “queer”, in the sense of metaphysically very strange – utterly unlike any other fact in the universe. They would also require a “queer” faculty for us to detect them (since they’re not empirically observable in the way physical facts are). Mackie suggests that it’s more parsimonious to think moral values are not ontologically real “out there” but rather human constructs or preferences. In a world guided by science, invoking non-natural properties (like an intrinsic wrongness to murder that exists independent of minds) starts to look unwarranted.

Derek Parfit, on the other hand, is a prominent example of a contemporary philosopher who tried to rescue objective morality without religion. In On What Matters, Parfit argues that there are objective moral reasons which are not reducible to just our whims or attitudes. He attempts a grand convergence of ethical theories – suggesting that the best versions of Kantianism, consequentialism, and contractualism all align on certain key principles (his “Triple Theory”). Parfit is a kind of moral realist who believes that when we say “X is wrong,” we are stating a truth apt for reasoned agreement. He rejects the notion that moral truth depends on God’s existence; instead, he likens moral truths to mathematical or logical truths – they are just part of reality, even if they’re non-empirical. As a summary of his view, “Parfit defends an objective ethical theory and suggests that we have reasons to act that cannot be accounted for by subjective [relativist] theories.” For example, even if a person or a culture didn’t care about the suffering of others, Parfit would say they still have a reason to care – the reason exists stance-independently. In his work, he tried to identify what those reasons are (like principles of universality and avoidable suffering) that any rational being would recognize. Parfit’s work has been hugely influential, and some Effective Altruists take inspiration from it to defend the objectivity of doing good (i.e., that it’s not just our preference to reduce suffering, but truly the right thing to do universally).

However, even Parfit’s ambitious project hasn’t convinced all, and it faces challenges from multiple angles. One challenge is evolutionary debunking arguments. Sharon Street, for instance, argues that if our moral beliefs have been heavily influenced by natural selection (which cares about reproductive fitness, not moral truth), it would be a massive coincidence if those beliefs just so happen to line up with some independent moral reality. Unless one posits that evolution tracked moral truth (which seems odd – why would moral truth influence survival, especially in cases like concern for global strangers or future generations?), the simpler explanation is that our moral beliefs exist because they were fitness-enhancing, not because they are true. This doesn’t prove no moral truths exist, but it casts doubt on our ability to know them if they do. Another challenge comes from the “moral contingency” that Will MacAskill highlights. In What We Owe The Future, MacAskill examines how different our values could have been under slightly altered historical conditions. He was initially inclined to believe that society’s moral development (toward, say, liberal values or more universal compassion) was a more-or-less inevitable progress – but his research convinced him that chance played a huge role. The abolition of slavery, to take one example, wasn’t guaranteed to happen when it did (or at all); it took a very contingent confluence of economic, ideological, and activist pressures. If history were re-run, we might live in a world today where slavery (or other practices we consider abhorrent) is still common, and people justify it with what they see as “moral truths.” The lesson MacAskill draws is that “today’s values could have easily been different” – and by extension, there’s no absolute assurance that our current moral convictions are the uniquely correct ones. If we take a Humean or anti-realist stance, this makes perfect sense: values evolve with cultures, there’s no outside guarantee that as we refine our moral view we’re getting “closer to truth” like scientists homing in on physics. We might just be drifting in a space of possible norms, hopefully toward those that make life better, but without a cosmic ruler to measure “better” against except our own minds.

Even within the EA community and its philosophers, there’s an openness to the possibility that moral nihilism (the view that no acts are objectively wrong or right) could be true. MacAskill himself has said he’s about “50/50” on moral realism versus nihilism. The startling implication of nihilism (or some versions of anti-realism) is that if it’s true, then in a sense “nothing matters” in an ultimate objective way – there is no moral law out there saying suffering is bad or happiness is good. But we still very much care; we still have our compassion, our preferences, our project of trying to help others. So what would a “collapsed” moral project mean practically? It might mean accepting that morality has no independent existence apart from what sentient beings care about. In other words, a thoroughgoing naturalist might say: the only source of normativity is our own desires and agreements. We choose to value others’ well-being (perhaps for evolved reasons, but also as a philosophical stance), and so we construct ethical systems to reflect that. On this account, moral realism cannot be salvaged in a post-theistic, naturalistic framework – “the furniture of the universe” simply doesn’t include moral facts.

However, this need not lead to despair or total relativism. Some philosophers (so-called constructivists) suggest that even if there is no view-from-nowhere truth, we can still reason about ethics from shared starting points. We can say, “given that we (most of us) do not want to be tortured, and we empathize with others, it is wrong to torture others,” and this statement can have the force of truth for us. If some agent didn’t care at all about any others, it’s true, moral argument would have no leverage on them – they’d be like an alien with an entirely different set of goals. But within humanity, we share enough of a common nature that moral discourse isn’t futile. We can make if-then moral truths: “if you value human flourishing, then you ought not commit murder or oppression.” The concept of “moral bindingness” then becomes more about mutual commitment: why should I do the right thing? Because it’s part of the system of values that I actually do, on reflection, endorse – and others endorse – and I want to be consistent with that, and live in a world where we all uphold that. This is a softer form of “bindingness” than the old idea that even a powerful individual or a psychopath has a reason to be moral (the traditional realist would say they do, the anti-realist says only if they care to). It might mean that, ultimately, morality doesn’t have an external guarantee. The universe isn’t going to punish us for getting it wrong (except via natural consequences), and it’s on us to choose morality. In a sense, this is what Nietzsche predicted with the “death of God” – the highest values devalue themselves, and we have to become “law-givers” to create meaning in a void. Parfit, interestingly, hoped that secular reasoning could find a firm, universal mountain of morality for us to climb, effectively substituting philosophical insight for divine command. Whether that project succeeds is still hotly debated.

At this point, one might ask: does the collapse of belief in objective moral truth actually change what Effective Altruists should do? If we already cared deeply about reducing suffering, do we stop caring upon realizing moral realism might be false? Arguably not – our compassion and our reason for action can remain, even if we interpret them as stemming from our nature and contingent values rather than an external truth. But it could instill epistemic caution and tolerance: we might be less quick to condemn those who have different value systems as “simply wrong,” recognizing that, but for a flip of the historical coin, we might have had very different values too. It also means we should be open to moral improvement: not improvement in the sense of converging to a predetermined truth, but improvement in the sense of better fulfilling the things we (upon reflection) decide we care about – such as the well-being of all sentient life. In practice, EA’s goals (ending factory farming, preventing extinction, alleviating global poverty, etc.) don’t need moral realism to justify them; they only need a shared desire to avoid suffering and promote flourishing. The loss of “moral bindingness” capital-t Truth might feel like a collapse, but it can be viewed as a liberation to double-down on a pragmatic, evidence-based ethics: doing the most good we can, for no other reason than that we want to and we can.

Conclusion: Implications for Effective Altruism

The arguments above can seem disorienting. If we can’t predict the far future of our actions, if the calculus of doing good breaks in an infinite universe, if morality has natural origins and no guaranteed ultimate truth, where does that leave the Effective Altruism movement and its project of “doing good better”? Here are a few takeaways and potential adjustments for EA:

• Epistemic Humility and Robustness: The cluelessness objection urges EA to prioritize strategies that are robust to outcome uncertainty. This means favoring actions that seem overwhelmingly likely to do some good in most plausible scenarios (even if not the absolute theoretical optimum in any one scenario). It also means constant vigilance for unexpected side effects and a willingness to course-correct. In practice, EA already does this to an extent: for example, the focus on existential risk reduction is partly because those actions seem net-positive under a wide range of moral and empirical assumptions. The collapse of naive expected-value calculations pushes for more use of probabilistic reasoning, sensitivity analysis, and humility about our models. We might invest more in learning (research, forecasting, improving our ability to predict) as a crucial meta-intervention, given our current cluelessness.

• Handling Fanaticism and Infinite Ethics: EAs should be wary of getting “mugged” by extremely low-probability, high-reward scenarios or being seduced by infinite expectations. This might involve setting up heuristic guardrails (for instance, some have suggested discounting extremely small probabilities or capping utility in calculations to reflect diminishing returns of utility at astronomically high scales). There is active discussion in the EA community about how to handle Pascalian arguments – whether for AI risk, simulation arguments, or bizarre thought experiments. A takeaway is that qualitative considerations (common-sense morality, rights, etc.) shouldn’t be completely thrown out in favor of pure quantitative expected value – especially when the numbers involve huge uncertainties. As Bostrom’s infinitarian paralysis shows, we may need a multi-tier decision rule that gives lexical priority to avoiding infinite (or enormous) bads, but also doesn’t let tiny probabilities of huge good dominate our everyday priorities.

• Embracing a Science of Morality (without Moralism): Understanding morality as an evolved, culturally influenced phenomenon can help EAs communicate and cooperate. It encourages us to meet people where they are, appealing to widely shared moral intuitions (like fairness or harm-care) when promoting causes. It also reminds us that moral progress is possible – just as norms against slavery or dueling emerged, we might see norms against new forms of suffering emerge (e.g. for the welfare of digital minds or wild animals) if we guide the cultural evolution appropriately. The EA community often talks about “moral circle expansion”; the research by Curry, Tomasello, Henrich and others provides a framework for how that expansion can happen via tapping into and broadening our cooperative instincts. Additionally, seeing morality as a strategy counters tendencies toward dogmatism. If we accept that “There is no objective tablet from heaven dictating values”, then we should be open-minded and experimental – in line with moral anti-realism, we can treat ethics more like engineering than like metaphysics. We can ask, “which norms and institutions best achieve the things we care about, such as wellbeing, cooperation, and fairness, given human nature as it is?” and then try to implement those, monitor outcomes, and adjust.

• Moral Enhancement and Reflective Improvement: The possibility of influencing moral behavior via technology or social programs should be of interest to EAs. If one of our biggest bottlenecks to solving global problems is insufficient empathy or coordination, then enhancing those capacities could be extremely high-leverage. This might be as prosaic as encouraging mindfulness (which some studies suggest increases compassion) or as sci-fi as investigating oxytocin analogues or neural implants that promote peaceful behavior. Obviously, any such interventions must be handled carefully to avoid misuse or violating personal autonomy. But consider that many effective altruists already voluntarily push their own moral boundaries – for example, trying cognitive debiasing techniques, committing to donation pledges, or even drug-assisted psychotherapy to improve their altruistic motivations. These can be seen as self-directed moral enhancements. On a larger scale, if research into, say, a “morality pill” or an AI assistant that nudges group cooperation yields fruit, EAs would likely weigh the risk/reward of deploying it. Persson and Savulescu’s argument that we may need moral enhancement to navigate existential risks, while controversial, serves as a prompt for long-termist EAs to at least explore what safe forms of moral enhancement could look like. It aligns with the movement’s general ethos of using evidence-based methods to solve human problems – here, the “problem” is that our present-day moral psychology might be inadequate for our future tasks.

• Reevaluating What “Matters” Without Moral Realism: If indeed there is no objective morality, EA might reframe its mission in more subjectivist or intersubjective terms. Rather than claiming “we are doing the objectively most good,” we can say “we are doing what most satisfies the strong, shared ethical intuitions and desires that we (and most people) have, such as reducing suffering and increasing happiness.” This doesn’t actually change the projects much – a malaria net distribution is just as life-saving whether or not you believe saving lives is objectively good or just subjectively very good according to almost everyone’s values. But it tones down the absolutist rhetoric and avoids potential alienation of those who are put off by what sounds like moral dogmatism. It also might make the community more welcoming to moral pluralism: working with people who have different philosophical foundations (humanists, Buddhists, Christians, etc.) as allies, since we don’t insist on one metaethical creed. After the “collapse” of capital-M Morality as a cosmic given, what remains is human (and animal) priorities. EA can be seen as a project to fulfill the most compassionate and prudent of those priorities in as far-reaching a way as possible. In Parfit’s terms, even if we’re not climbing one true moral mountain, we can still climb a mountain we collectively decide is worth climbing – say, the mountain of “less suffering, more flourishing.”

In conclusion, reports of the death of the moral project might be exaggerated. What’s collapsing is an old vision of morality as something like a fixed blueprint or a divine edict that we simply have to implement. In its place, a new vision is emerging: morality as a human endeavor, deeply important to us but fundamentally shaped by our biology and history, inevitably uncertain in execution, and yet something we can steadily improve upon with reflection, science, and cooperation. Effective Altruism, at its best, is an expression of this new moral project – not finding holy grails of moral truth, but pragmatically and open-mindedly figuring out how to do the most good given our uncertainties and given what we care about. The challenges of cluelessness, infinite ethics, and moral anti-realism are invitations to refine our approach, think critically, and remain adaptable. Far from undermining Effective Altruism, engaging with these deep issues can make the movement more intellectually robust and emotionally resilient. We can maintain our motivation to help others while shedding any naive pretensions of guaranteed moral omniscience. In a morally uncertain universe, choosing to reduce suffering and increase happiness – and constantly questioning how best to do so – might be the closest we come to a moral truth worth holding onto.

Sources:

• Lenman, J. (2000). Consequentialism and Cluelessness, Philosophy & Public Affairs, 29(4), 342–370.

• Greaves, H. (2016). Cluelessness, Proceedings of the Aristotelian Society, 116(3), 311–339. (Examples of ripple effects in action.)

• Bostrom, N. (2011). Infinite Ethics. (Highlights the “infinitarian paralysis”: if the universe is infinite, all acts might seem equally insignificant.)

• Bostrom, N. (2009). Pascal’s Mugging, Analysis, 69(3), 443–445. (Thought experiment showing expected-value maximization can recommend absurd actions when payoffs grow faster than probabilities shrink.)

• Curry, O. et al. (2019). Is It Good to Cooperate? Testing the Theory of Morality-as-Cooperation in 60 Societies, Current Anthropology, 60(1), 47–69. (Evolutionary game theory account of morality; seven universal cooperative morals found across cultures.)

• Tomasello, M. & Vaish, A. (2013). Origins of Human Cooperation and Morality, Annual Review of Psychology, 64, 231–255. (Human morality evolved as a suite of cooperative capacities and norms.)

• Persson, I. & Savulescu, J. (2019). The Duty to be Morally Enhanced, Topoi, 38(1), 7–14. (Argues for exploring biomedical moral enhancement – via drugs, neural tech, or genetics – to strengthen altruism and justice.)

• Earp, B., Douglas, T., & Savulescu, J. (2017). Moral Neuroenhancement, in Routledge Handbook of Neuroethics. (Surveys potential methods: oxytocin for trust, serotonin modulation for reducing aggression and increasing fairness, TMS and deep-brain stimulation for behavior change.)

• Young, L. et al. (2010). Disruption of the Right TPJ with TMS Reduces the Role of Beliefs in Moral Judgments, PNAS, 107(15), 6753–6758. (Empirical demonstration that directly interfering in a brain region alters moral judgments.)

• Mackie, J. L. (1977). Ethics: Inventing Right and Wrong. (Classic argument that “there are no objective values” and that believing in objective moral properties is unwarranted.)

• Parfit, D. (2011). On What Matters (Vols. 1–3). (Defends moral objectivity; proposes that different ethical theories converge on the same truths, suggesting we have reason to act beyond personal preferences.)

• MacAskill, W. (2022). What We Owe The Future. (Introduces the historical contingency of values – e.g. the abolition of slavery was not guaranteed, and today’s moral values could be very different under slight historical changes – implying no inevitability of moral truth.)

• MacAskill, W. (2018). 80,000 Hours Podcast, Episode 17. (Discusses the possibility that moral realism might be false and how that would influence the EA outlook; notes that future generations may judge us as moral monsters by our own blind spots.)


Comments

I considered writing a similar post about the impact of anti-realism in EA, but I’m going to write here instead. In short, I think accepting anti-realism is a bit worse/weirder for ‘EA as it currently is’ than you think:

Impartiality 

It broadly seems like the best version of morality available under anti-realism is contractualism. If so, this probably significantly weakens the core EA value of impartiality, in favour of only those with whom you have a ‘contract’. It might rule out spatially far away people, it might rule out temporally far away people (unless you have an ‘asymmetrical contract’ whereby we are obligated to future generations because past generations were obligated to us), and it probably rules out impartiality towards animals or non-agents/morally incapable beings.

‘Evangelism’

EA generally seems to think that we should put resources into convincing others of our views (bad phrasing but the gist is there). This seems much less compelling on anti-realism, because your views are literally no more correct than anyone else’s. You could counter that ‘we’ have thought more and therefore can help people who are less clear. You could counter that other people have inconsistent views (“Suffering is really bad but factory farms are fine”); however, there’s nothing compellingly bad about inconsistency on an anti-realist viewpoint either.

Demandingness

Broadly, turning morality into conditionals means a lot of the ‘driving force’ behind doing good is lost. It’s very easy to say “if I want to do good I should do X”, but then say “wow X is hard, maybe I don’t really want to do good after all”. I imagine this affects a bunch of things that EA would like people to do, and makes it much harder practically to cause changes if you outright accept it’s all conditional.

Note: I’m using Draft Amnesty rules for this comment, I reckon on a few hours of reflection I might disagree with some/all of these.

On impartiality, I mostly agree, with one caveat: when you read Hobbes and Rousseau, the social contract is a metaphor. The state of nature serves more like a morality tale, akin to the Garden of Eden, rather than an objective attempt to describe anything. In Hobbes, for example, the ‘acceptance’ of the contract comes from fear and the need for security, not from a 21st-century liberal notion of consent as we would understand it today. It is coerced by necessity. To the extent that they are viable, there are many ways to get what you want from contractualism.

The rest of what you say is precisely the point I am attempting to get at: that obligatory driving force which we associate with morality – the kind that legitimises policies and states and gives purpose to individual lives – seems to be irrevocably lost, unfortunately. It is like the old story:

A traveller comes across three bricklayers working on a construction site and asks each of them what they are doing.

The first bricklayer, focused on the immediate task, replies that he is laying bricks.

The second, seeing a larger purpose, responds that he is building a wall.

The third, with the grandest vision, proclaims that he is building the house of the Lord.

That latter kind of purpose is hard to attain with these assumptions. And if you could bottle it, that kind of purpose would be a fruitful resource indeed for EA.

I have only read the SummaryBot comment, but based on that I wanted to leave a literature suggestion that could be interesting to people who liked this post and want to think more about how to put a pragmatic approach to ethics into practice.

Ulrich, W. (2006). Critical Pragmatism: A New Approach to Professional and Business Ethics. In Interdisciplinary Yearbook for Business Ethics, Vol. 1. Peter Lang Publishing.

Abstract: Major contemporary conceptions of ethics such as discourse ethics and neocontractarian ethics are not grounded in a sufficiently pragmatic notion of practice. They suffer from serious problems of application and thus can hardly be said to respond to the needs of professionals and decision makers. A main reason lies in the tendency of these approaches to focus more on the requirements of ethical universalization than on those of doing justice to particular contexts of action – at the expense of practicality and relevance to practitioners. If this diagnosis is not entirely mistaken, a major methodological challenge for professional and business ethics consists in finding a new balance between ethical universalism and ethical contextualism. A reformulation of the pragmatic maxim (the methodological core principle of American pragmatism) in terms of systematic boundary critique (the methodological core principle of the author’s work on critical systems thinking and reflective professional practice) may provide a framework to this end, critical pragmatism.

Executive summary: The collapse of objective morality challenges traditional ethical frameworks, raising deep uncertainties about long-term consequences, infinite ethics, and morality’s evolutionary origins, but Effective Altruism can adapt by embracing epistemic humility, pragmatic heuristics, and a science-based approach to moral progress.

Key points:

  1. Epistemic cluelessness: Our inability to predict long-term consequences undermines consequentialist decision-making, necessitating heuristics and robustly positive interventions rather than strict expected value calculations.
  2. Infinite ethics problem: If the universe is infinite, standard moral reasoning breaks down, leading to paradoxes where all actions might seem equally (in)significant, requiring new decision rules to navigate ethical paralysis.
  3. Morality as an evolutionary strategy: Ethical intuitions evolved to promote cooperation rather than track objective truth, implying that moral norms are contingent, adaptable, and influenced by cultural evolution.
  4. Prospects for moral enhancement: Advances in AI, neuroscience, and biotechnology could allow deliberate shaping of moral dispositions, but raise ethical concerns about autonomy and unintended consequences.
  5. The collapse of moral bindingness: Without moral realism, ethical claims lack intrinsic authority, but EA can remain action-guiding by focusing on widely shared values like reducing suffering and increasing flourishing.
  6. Implications for EA: The movement should prioritise epistemic humility, avoid fanaticism in decision-making, embrace an empirical approach to moral progress, and reframe its mission in pragmatic rather than absolutist terms.

 

 

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.

This seems like an interesting post that synthesises a range of ideas and draws out some important implications. 

However, at the moment it's essentially a wall of text, which might be deterring people from engaging with the content. For better engagement, I'd recommend improving readability by:

  • Adding an executive summary (for ease, maybe share it with ChatGPT and ask it to do this, then tweak). The introduction clearly sets out the structure, but isn't really a summary. Note that there is a Forum bot that sometimes adds an executive summary as a comment, but I think authors should be encouraged to do this themselves.
  • Formatting the section headings (e.g. larger font, bold)
  • Adding subsection headings that summarise the key point of that subsection/paragraph