In the conclusion to Moral Uncertainty, Krister Bykvist, Toby Ord, and William MacAskill write:

Every generation in the past has committed tremendous moral wrongs on the basis of false moral views. Moral atrocities such as slavery, the subjection of women, the persecution of non-heterosexuals, and the Holocaust were, of course, driven in part by the self-interest of those who were in power. But they were also enabled and strengthened by the common-sense moral views of society at the time about what groups were worthy of moral concern.

Given the importance of figuring out what morality requires of us, the amount of investment by society into this question is astonishingly small. The world currently has an annual purchasing-power-adjusted gross product of about $127 trillion. Of that amount, a vanishingly small fraction---probably less than 0.05%---goes to directly addressing the question: What ought we to do?

They continue:

Even just over the last few hundred years, Locke influenced the American Revolution and constitution, Mill influenced the [women's] suffrage movement, Marx helped birth socialism and communism, and Singer helped spark the animal rights movement.

This is a tempting view, but I don't think it captures the actual causes of moral progress. After all, there were advocates for animal well-being thousands of years ago, and yet factory farming persists today. At the very least, it doesn't seem that discovering the correct moral view is sufficient for achieving moral progress in actuality.

As quoted in the Angulimālīya Sūtra, the Buddha is recorded as saying:

There are no beings who have not been one's mother, who have not been one's sister through generations of wandering in beginningless and endless saṃsāra. Even one who is a dog has been one's father, for the world of living beings is like a dancer. Therefore, one's own flesh and the flesh of another are a single flesh, so Buddhas do not eat meat.

A few hundred years later, in the 3rd century BCE, an edict of Emperor Ashoka reads:

Here (in my domain) no living beings are to be slaughtered or offered in sacrifice. Nor should festivals be held, for Beloved-of-the-Gods, King Piyadasi, sees much to object to in such festivals.

Of course, the slaughter of living beings and consumption of meat would continue for thousands of years. In fact, as I argued previously, the treatment of animals has likely declined since Ashoka's time, and we now undertake factory farming of unprecedented scale and brutality.

I don't believe that this is due to our "false moral views". Unfortunately, we seem unlikely to give up factory farming until we develop cost-competitive lab-grown meat, or flavor-competitive plant-based alternatives.

Similarly, the abolition of slavery was plausibly more economically than morally motivated. Per Wikipedia:

...the moral concerns of the abolitionists were not necessarily the dominant sentiments in the North. Many Northerners (including Lincoln) opposed slavery also because they feared that rich slave owners would buy up the best lands and block opportunity for free white farmers using family and hired labor. Free Soilers joined the Republican party in 1854, with their appeal to powerful demands in the North through a broader commitment to "free labor" principles. Fear of the "Slave Power" had a far greater appeal to Northern self-interest than did abolitionist arguments based on the plight of black slaves in the South.

Switching gears, here's a much more explicit case of moral philosophy failing to enable social change. From Peter Singer:

Jeremy Bentham, before the 1832 Reform Act was passed in Britain, argued for extending the vote to all men. And he wrote to his colleagues that he would have included women in that statement, except that it would be ridiculed, and, therefore, he would lose the chance of getting universal male suffrage. So he was aware of exactly this kind of argument. Bentham also wrote several essays arguing against the criminalization of sodomy, but he never published them in his lifetime, for the same reason.

Here we have a case where a moral philosopher explicitly acknowledges that he has discovered a more progressive moral view, but declines to even publish it. So the fact that Bentham made progress in moral philosophy did not allow him to make any actual moral progress. The two are totally decoupled.

What about gay rights? In the Bykvist et al. narrative, some moral philosopher comes along, determines that homosexuality is okay, and everyone celebrates. But what we've seen in the last few decades was not a slow dwindling of homophobia, but a massive resurgence of previously vanquished attitudes. The moral arc has not been monotonic.

So what really did happen? Here's one alternative narrative:

Rather than being driven by moral philosophy, then, what we have is a societal shift driven by a scientific advance, which subsequently allowed rapid liberalization.

Can we investigate the claim on a more macro scale? There are some charts from Our World in Data comparing human rights to GDP per capita:

You could argue that this is really just tracking some underlying variable like "industrialization" or "western culture". But here are some more breakdowns by continent:

Admittedly, even assuming there is a causal relationship, I don't know which way it goes! There are numerous papers demonstrating a link from democracy to economic growth, so there is at least some reason to believe that causation runs from political progress to economic progress rather than the other way around.


Overall, I would guess that "progress" occurs as a confluence of various domains. Perhaps without a social need, or without a moral demand, oral contraceptives would never have been invented in the first place. But I remain skeptical that investing directly in moral philosophy will accelerate humanity's march out of moral atrocities.

There are, after all, currently concentration camps in China, a famine in Yemen, and a genocide in Myanmar. The bottleneck does not seem to be our "false moral views."

Finally, I'd like to posit that the perspective set forth by Bykvist et al. is all too compelling. Consider their statement again: "Every generation in the past has committed tremendous moral wrongs on the basis of false moral views."

It's a comfortable view, and one that allows us to put a kind of moral distance between ourselves and the horrors of the past. We get to say "yikes, that was bad, but luckily we've learned better now, and won't repeat those mistakes". This view allows us, in short, to tell ourselves that we are more civilized than our monstrous ancestors. That their mistakes do not reflect badly on us.

As tempting as it is to wash away our past atrocities under the guise of ignorance, I'm worried humanity just regularly and knowingly does the wrong thing.

Comments



I don't think the view that moral philosophers had a positive influence on moral developments in history is a simple model of 'everyone makes a mistake, moral philosopher points out the mistake and convinces people, everyone changes their minds'. I think that what Bykvist, Ord and MacAskill were getting at is that these people gave history a shove at the right moment.

At the very least, it doesn't seem that discovering the correct moral view is sufficient for achieving moral progress in actuality.

I have no doubt that they'd agree with you about this. But if we all accept this claim, there are two further models we could look at.

One is a model where changing economic circumstances influence what moral views it is feasible to act on, but progress in moral knowledge still affects what we choose to do, given the constraints of our economic circumstances.

The other is a model where economics determines everything and the moral views we hold are an epiphenomenon blown about by these conditions (note this is very similar to some Marxist views of history). Your view is that 'the two are totally decoupled', but at most your examples just show that the two are decoupled somewhat, not that moral reasoning has no effect.  And there are plenty of examples that show explicit moral reasoning having at least some effect on events - see Bykvist, Ord and MacAskill's original list.

The strawman view that moral advances determine everything is not what's being proposed by Bykvist, Ord and MacAskill; what's being proposed is the mixed view that ideas influence things within the realm of what's possible.

Thanks, this is a good comment.

To some limited degree, some people have some beliefs that are responsive to the strength of philosophical or scientific arguments, and have some actions that are responsive to their beliefs. That's about as weak a claim as you can have without denying any intellectual coherence to things. So then the question becomes, is that limited channel of influence enough to drive major societal shifts?

Or actually, there might be two questions here: could an insight in moral philosophy alone drive a major societal shift, so that society drifts towards whichever argument is better? And to what extent has actual moral progress been caused by intellectual catalysts like that?

this post reminds me of scott alexander's post on surviving versus thriving:

I propose that the best way for leftists to get themselves in a rightist frame of mind is to imagine there is a zombie apocalypse tomorrow. [...] It seems broadly plausible that there could be [a neurological switch] for something like “social stability”. If the brain finds itself in a stable environment where everything is abundant, it sort of lowers the mental threat level and concludes that everything will always be okay and its job is to enjoy itself and win signaling games. If it finds itself in an environment of scarcity, it will raise the mental threat level and set its job to “survive at any cost”.

i always thought there was something broadly true in this view, though there is a lot of variation it doesn't explain.
