Richard Y Chappell

Associate Professor of Philosophy @ University of Miami
3763 karma · Joined Dec 2018




Great piece. The reflections on how movements look from the outside vs. from the inside struck me as very insightful.

I also liked this point about applied moral philosophy: "there are many situations in which utilitarianism guides my thinking, especially as a philanthropist, but uncertainty still leaves me with many situations where it doesn’t have much to offer. In practice, I find that I live my day to day deferring to side constraints using something more like virtue ethics. Similarly, I abide by the law, rather than decide on a case by case basis whether breaking the law would lead to a better outcome. Utilitarianism offers an intellectual North Star, but deontological duties necessarily shape how we walk the path."

If you're worried that a real-life FMF would not be truly symmetrical to AMF in its effects, just mentally replace it with "Minus AMF" in my original comment. (Or imagine stipulating away any such differences.)  It doesn't affect the essential point.

Thanks for explaining!

It is a fair comparison. Andreas' relevant claim is that it isn't clear what the sign of the effect from AMF is. If AMF is negative, then its opposite--FMF--would presumably be positive.

Thanks, yeah, I remember liking that paper. Though I'm inclined to think you should assign (precise) higher-order probabilities to the various "admissible probability functions", from which you can derive a kind of higher-order expected value verdict, which seems to avoid the problems, as far as I can tell?

General lesson: if we don't have any good way of dealing with imprecise credences, we probably shouldn't regard them as rationally mandatory. Especially since the case for thinking that we must have imprecise credences (i.e., that any kind of precision is necessarily irrational) seems kind of weak.

I'm a bit surprised that this is getting downvoted, rather than just disagree-voted. It's fine to reach a different verdict and all, but y'all really think the methodological point I'm making here shouldn't even be said?  Weird.

This is a fun paper. But it rests a lot on an unsupported intuition about what's required in order to "take the depth of our uncertainty seriously" (i.e., that this requires imprecise credences with a very wide range of imprecision).  Since this intuition leads to the (surely false) conclusion that a rational beneficent agent might just as well support the For Malaria Foundation as the Against Malaria Foundation, it seems to me that we have very good reason to reject that theoretical intuition.

Two things worth flagging:

(1) Longtermism per se doesn't dictate how we should weigh death vs failing to create life. I personally find it plausible to apply a modest discount against the latter. I think it would be better to bring an extra 100 happy lives into existence than to save just 1 existing person. But you're free to apply a steeper discount if you find that most plausible on reflection.  (That's different from discounting future interests per se, as though future torture mattered less or something.)

(2) There's no reason to focus on abortion in particular; as far as longtermism per se is concerned, any non-procreative choice (e.g. celibacy, contraception, etc.) is relevantly similar.  And as I explain here, pro-natalist incentives are obviously preferable to force. (Just like we shouldn't force people to donate kidneys, good though kidney donation is.)

Mothers are flooded with hormones, the biological purpose of which is to make them value their babies.  It's obviously not the result of dispassionate philosophical assessment.

But you don't have to think that there's nothing wrong with killing non-persons. It could just be a lesser wrong.

The short answer is that cognitive "persons" have a stronger interest in their future than merely potential persons. So if we have stronger person-directed reasons (to help individuals advance their interests) than impersonal reasons to generically promote the good (including by bringing new persons into existence), then that explains why we have stronger moral reasons to save cognitive persons than newborns.

For a longer answer, see McMahan's Time-Relative Account of Interests.
