PhD candidate @ Princeton University
Princeton, NJ, USA · Joined Apr 2018


I'm a PhD candidate in philosophy at Princeton University. In summer 2023 I'll be a global priorities fellow at GPI. I work on ethical theory and Buddhist philosophy, with an eye towards global priorities research. My published work is listed here.


Hi Joe, thanks for sharing this. I enjoyed it - as I have enjoyed and learned from many of your philosophy posts recently!

A couple of things:

1) I'm curious about your thoughts on the role of knowledge in epistemology and decision theory.  You write, e.g., 'Consider the divine commands of the especially-big-deal-meta-ethics spaghetti monster...'. On pain of general skepticism, don't we get to know that a spaghetti monster is not 'the foundation of all being'? (I don't have a strong commitment here, but after talking with a colleague who works in epistemology + decision theory and studied under Williamson, I think this sort of k-first approach is at least worth a serious look.)

2) At risk of being the table-thumping realist, I wanted to press on the nihilist's response. You write that the nihilist has 'other deliberative currency available – “wants,” “cares,” “prefers,” “would want,” “would care,” “would prefer,” and so on.' We then get an example of this style of practical reasoning: '“If I untangle the deer from the barbed wire, then it can go free; I want this deer to be able to go free; OK, I will untangle the deer from the barbed wire”.' 

The first two sentences don't in any way support the third (since 'supports' is a normative relation, and we're in nihilism world). The agent could just as well have thought to herself, 'If I untangle the deer from the barbed wire, then it can go free; I want this deer to be able to go free; OK, I will now read Hamlet.' There's nothing worse about this internal dialogue and sequence of action (assuming the agent does then read Hamlet) because, again, nothing is worse than anything else in nihilism world. 

You ask, 'Who set up this court? We would presumably object if the court only accepted shoulds that were made out of e.g. divine commands, or non-natural frosting. So why not accept the currency of every representative?' I think the realist will want to say: 'the principled distinction is that in the other worlds there is some sort of normativity, whereas in nihilism world there isn't. That's why nihilism doesn't get a seat at the table.'

As far as I can tell (not being a specialist in metaethics), the best the nihilist can hope for is the "Humean" solution, namely that our natural dispositions will (usually) suffice to get us back in the saddle and keep us going with the project of living and pursuing things of "value": "...fortunately it happens, that since reason is incapable of dispelling these clouds, nature herself suffices to that purpose, and cures me of this philosophical melancholy and delirium, either by relaxing this bent of mind, or by some avocation, and lively impression of my senses, which obliterate all these chimeras. I dine, I play a game of backgammon, I converse, and am merry with my friends; and when after three or four hours' amusement, I would return to these speculations, they appear so cold, and strained, and ridiculous, that I cannot find in my heart to enter into them any farther. Here then I find myself absolutely and necessarily determined to live, and talk, and act like other people in the common affairs of life" (Treatise). But this does nothing to address the question of whether we have reason to do any of those things. It's just a descriptive forecast about what we will in fact do.

Really interesting! Do you have anything in mind for goods identified by competing ethical theories that you think would compete with, e.g., the beatific vision for the Christian or nirvana for the Buddhist? (A clear example here would be a valuable update for me.)

+1 on your comment that 'Giving the right answers for the wrong reasons is still deeply unsatisfying.' I think this is an underappreciated part of ethical theorizing, and I would even take a stronger methodological stance: getting the right explanatory answers (why we ought to do what we ought to do) is just as important as getting the right extensional answers (what we ought to do). If an ethical theory gives you the wrong explanation, it's not the right ethical theory!

Hi Michael, thanks for your comments! A few replies:

Re: amplification, I'm not sure about this proposal (I'm familiar with that section of the book). From the perspective of a supreme soteriology (e.g. (certain conceptions of) Christianity), attaining salvation is the best possible outcome, full stop. It is, to use MacAskill, Bykvist, and Ord's terminology, maximally choiceworthy. It therefore seems to me wrong that 'those other views could be further amplified lexically, too, all ad infinitum.' To insist that we could lexically amplify a supreme soteriology would be to fail to take it seriously from its own internal perspective. But that is precisely what MacAskill, Bykvist, and Ord's universal scale account requires us to do.

Of course, I agree that we can amplify other ethical theories that do not, in their standard forms, represent options or outcomes as maximally choiceworthy, such that the amplified theories do represent certain options/outcomes as maximally choiceworthy. But this is rather ad hoc. 

Re: the 'limited applicability' suggestion, this strikes me as prima facie implausible on abductive grounds (principally parsimony, and to a lesser extent elegance).

Re: the point that 'there are other possible infinities that could dominate': I'm not sure how the term 'dominate' is being used here. It's not the case that other ethical theories which assign infinite choiceworthiness to certain options dominate supreme soteriologies in the game-theoretic usage of 'dominate' (on which option A dominates option B iff the outcome associated with A is at least as good as the corresponding outcome associated with B in every state of nature, and strictly better in at least one).

But if the point is rather simply that MEC does not require all agents—regardless of their credence distribution over descriptive and ethical hypotheses—to become religionists, I agree. To take a simplistic but illustrative example, MEC will tell an agent who has credence = 1 that doing whatever they feel like will generate an infinite quantity of the summum bonum to go ahead and do whatever they feel like. My thought is just that MEC will deliver sufficiently implausible verdicts to sufficiently many agents to cast serious doubt on its truth qua theory of what we ought to do in response to ethical uncertainty. This is particularly pressing in the context of prudential choice, due to the three factors highlighted in subsection 3.5 above. The points you make in the linked response to the question 'why not accept Pascal's Wager?' are solid, and lead me to think that the extension of my argument from prudence to morality might not be quite as quick as I suggest at the end of the post. But if we can show that MEC is in big trouble in the domain of prudence, that seems to me like evidence against its candidacy in the domain of morality. (I don't agree with MacAskill, Bykvist, and Ord's suggestion that, on priors, we should expect the correct way to handle descriptive uncertainty to be more-or-less the correct way to handle ethical uncertainty. The descriptive and the ethical are quite different! But it would be relatively more surprising to me if the correct way to handle prudential uncertainty were wildly different from the correct way to handle moral uncertainty.)

Is there any room in the application process for applicants to submit samples of original research or academic letters of recommendation?

Thank you!