Senior Program Associate, EA Community Growth @ Open Philanthropy
Working (6-15 years of experience)
5632 karma · Joined Feb 2016


I'm part of the longtermist EA community building team at Open Philanthropy and Chair of the EA Infrastructure Fund.

Previously I was the Chief of Staff at the Forethought Foundation for Global Priorities Research, participated in the first cohort of FHI's Research Scholars Programme (RSP), and then helped run it as one of its Project Managers.

Before that, my first EA-inspired jobs were with the Effective Altruism Foundation, e.g., running what is now the Center on Long-Term Risk. While I don't endorse their 'suffering-focused' stance on ethics, I'm still a board member there.

Unless stated otherwise, I post on the Forum in a personal capacity, and don't speak for any organization I'm affiliated with.

I like weird music and general abstract nonsense. In a different life I would be a mediocre mathematician or a horrible anthropologist.


Topic Contributions

Tail-effects in education: Since interventions have to scale, they end up being mediocre relative to what could be possible.


Related: Bloom's two-sigma problem:

Bloom found that the average student tutored one-to-one using mastery learning techniques performed two standard deviations better than students educated in a classroom environment with one teacher to 30 students

(haven't vetted the Wikipedia article or underlying research at all)

The following link goes to this post rather than the paper you mention:

For reasons just given, I think we should be far more skeptical than some longtermists are. For more, see this paper on simulation theory by me and my co-author Micah Summers in Australasian Journal of Philosophy.

"Moral realism" usually just means that moral beliefs can be true or false. That leaves lots of options for explaining what the truth conditions of these beliefs are.

Moral realism is often (though not always) taken to, by definition, also include the claim that at least some moral beliefs are true – e.g. here in the Stanford Encyclopedia of Philosophy. A less ambiguous way to refer to just the view that moral beliefs can be true or false is 'moral cognitivism', as also mentioned here.

This is to exclude from moral realism the view known as 'error theory', which says that moral beliefs are the sorts of things that can have truth values but that all of them are false.

[I'm using "belief" in a loose sense in this comment, on which it is not just true by definition that a belief can be true or false. People using 'belief' in the latter sense would describe the noncognitivist view as saying that those things that appear to be moral beliefs in fact aren't beliefs at all.]

Parfit here is making a reference to Sidgwick's "Government House utilitarianism," which seemed to suggest only people in power should believe utilitarianism but not spread it.

This may be clear to you, and isn't important for the main point of your comment, but I think that 'Government House utilitarianism' is a term coined by Bernard Williams in order to refer to this aspect of Sidgwick's thought while also alluding to what Williams viewed as an objectionable feature of it.

Sidgwick himself, in The Methods of Ethics, referred to the issue as esoteric morality (pp. 489–490, emphasis mine):

the Utilitarian should consider carefully the extent to which his advice or example are likely to influence persons to whom they would be dangerous: and it is evident that the result of this consideration may depend largely on the degree of publicity which he gives to either advice or example. Thus, on Utilitarian principles, it may be right to do and privately recommend, under certain circumstances, what it would not be right to advocate openly; it may be right to teach openly to one set of persons what it would be wrong to teach to others; it may be conceivably right to do, if it can be done with comparative secrecy, what it would be wrong to do in the face of the world; and even, if perfect secrecy can be reasonably expected, what it would be wrong to recommend by private advice or example. These conclusions are all of a paradoxical character: there is no doubt that the moral consciousness of a plain man broadly repudiates the general notion of an esoteric morality, differing from that popularly taught; and it would be commonly agreed that an action which would be bad if done openly is not rendered good by secrecy. We may observe, however, that there are strong utilitarian reasons for maintaining generally this latter common opinion [...]. Thus the Utilitarian conclusion, carefully stated, would seem to be this; that the opinion that secrecy may render an action right which would not otherwise be so should itself be kept comparatively secret; and similarly it seems expedient that the doctrine that esoteric morality is expedient should itself be kept esoteric. Or if this concealment be difficult to maintain, it may be desirable that Common Sense should repudiate the doctrines which it is expedient to confine to an enlightened few. 
And thus a Utilitarian may reasonably desire, on Utilitarian principles, that some of his conclusions should be rejected by mankind generally; or even that the vulgar should keep aloof from his system as a whole, in so far as the inevitable indefiniteness and complexity of its calculations render it likely to lead to bad results in their hands.

In his Henry Sidgwick Memorial Lecture on 18 February 1982 (or rather the version of it included in Williams's posthumously published essay collection The Sense of the Past), after quoting roughly the above passage from Sidgwick, Williams says:

On this kind of account, Utilitarianism emerges as the morality of an élite, and the distinction between theory and practice determines a class of theorists distinct from other persons, theorists in whose hands the truth of the Utilitarian justification of non-Utilitarian dispositions will be responsibly deployed. This outlook accords well enough with the important colonial origins of Utilitarianism. This version may be called ‘Government House Utilitarianism’. It only partly deals with the problem, since it is not generally true, and it was not indeed true of Sidgwick, that Utilitarians of this type, even though they are theorists, are prepared themselves to do without the useful dispositions altogether. So they still have some problem of reconciling the two consciousnesses in their own persons—even though the vulgar are relieved of that problem, since they are not burdened with the full consciousness of the Utilitarian justification. Moreover, Government House Utilitarianism is unlikely, at least in any very overt form, to commend itself today.

There has since been the occasional paper mentioning or commenting on the issue, including a defense of esoteric morality by Katarzyna De Lazari-Radek and Peter Singer (2010).

Thank you so much for writing this. This may be very helpful when we start working on non-English versions of What We Owe The Future.

Yes, I also thought that the view that Scott seemed to suggest in the review was a clear non-starter. Depending on what exactly the proposal is, it inherits fatal problems from either negative utilitarianism or averagism. One would arguably be better off just endorsing a critical level view instead, but then one is no longer going beyond what's in WWOTF. (Though, to be clear, it would be possible to go beyond WWOTF by discussing some of the more recent and more complex views in population ethics that have been developed, such as attempts to improve upon standard views by relaxing properties of the axiological 'better than' relation.) See also here.

The Asymmetry is certainly widely discussed by academic philosophers, as shown by e.g. the PhilPapers search you link to. I also agree that it seems off to characterize it as a "niche view".

I'm not sure, however, whether it is widely endorsed or even widely defended. Are you aware of any surveys or other kinds of evidence that would speak to that more directly than the fact that there are a lot of papers on the subject (which I think primarily shows that it's an attractive topic to write about by the standards of academic philosophy)?

I'd be pretty interested in understanding the actual distribution of views among professional philosophers, with the caveat that I don't think this is necessarily that much evidence for what view on population ethics should ultimately guide our actions. The caveat is roughly because I think the incentives of academic philosophy don't strongly favor beliefs that it'd be overall good to act on, as opposed to views one can publish well about. (Of course there are things pushing in the other direction as well, e.g. these are people who've thought about it a lot and use criteria for criticizing and refining views that are more widely endorsed, so it is certainly some evidence; hence my interest.)

FWIW my own impression is closer to:

  • The Asymmetry is widely held to be an intuitive desideratum for theories of population ethics.
    • As usual  (cf. the founding impetus of 'experimental philosophy'), philosophers don't usually check whether the intuition is in fact widely held, and recent empirical work casts some doubt on that.
    • As usual, there are also at least some philosophers trying to 'explain away' the intuition (e.g. in this case Chappell 2017).
  • However, it turns out that it is hard to find a theory of population ethics that rationalizes the Asymmetry without having other problems. My sense is that this assessment – in part due to prominent impossibility theorems – is widely shared, and that there is likely no single widely held specific view that implies the Asymmetry.
  • This is basically the kind of situation that tends to spawn an 'industry' in academic philosophy, in which people come up with increasingly complex views that avoid known problems with previous views, other people point out new problems, and so on. And this is precisely what happened.
  • Overall, it is pretty hard to tell from this how many philosophers 'actually believe' the Asymmetry, in part because many participants in the conversation may not think of themselves as having any settled beliefs on the matter and in part because the whole language game seems to often involve "beliefs" that are at best pretty compartmentalized (e.g. don't explain an agent's actions in the world at large) and at worst not central examples of belief at all (perhaps more similar to how an actor relates to the beliefs of a character while enacting a play).

I think in many ways, the Asymmetry is like the view that there is some kind of principled difference between ideas and matter or that humans have free will of some sort – a perhaps widely held intuition, and certainly a fertile ground for long debates between philosophers, from which, however, it is hard to draw any clear conclusion if you are an agent who (unlike the debating philosophers) faces a high-stakes, real-world action depending on the matter. (It's also different in some ways, e.g. it seems easier to agree on a precise statement of the Asymmetry than for some of these other issues.)

Curious how well this impression matches yours? I could imagine that the impression one gets (like me) primarily from reading the literature may be somewhat different from e.g. the vibe at conferences.

More broadly, living conditions have on average improved enormously since 1920. (And depending on your view on population ethics, you might also think that total human well-being increased by a lot because the world population quadrupled since then.)

This effect is so broad and pervasive that lots of actions by many people in 1920 must have contributed to this, though of course there were some with an outsized effect such as perhaps the invention of the Haber-Bosch process; work by John Snow, Louis Pasteur, Robert Koch, and others establishing the germ theory of disease; or Florence Nightingale pioneering the use of statistics in healthcare.

One classic example is Benjamin Franklin, who upon his death in 1790

invested £1000 (about $135,000 in today’s money) each for the cities of Boston and Philadelphia: three-quarters of the funds would be paid out after one hundred years, and the remainder after two hundred years. By 1990, when the final funds were distributed, the donation had grown to almost $5 million for Boston and $2.3 million for Philadelphia.

(From What We Owe The Future, p. 24. See notes (1.34) and (1.35) on the WWOTF website here for references. Franklin's bequest is well-known but popular accounts are often slightly off in their details.)
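As a rough illustration of the compounding at work, here is a small sketch computing the constant annual growth rate implied by the figures quoted above. Note the simplifying assumptions: it compares the inflation-adjusted starting value ($135,000 in today's money) against the nominal 1990 payouts, treats the whole sum as growing for the full 200 years (ignoring the partial payout at the 100-year mark), and so yields only ballpark rates, not careful real returns.

```python
# Ballpark annual growth rate implied by Franklin's bequest figures.
# Assumptions (simplifications, not from the original accounting):
#   - start value: the quoted inflation-adjusted $135,000
#   - end values: the quoted 1990 totals of ~$5M (Boston) and ~$2.3M (Philadelphia)
#   - a single 200-year horizon, ignoring the 100-year partial payout

def implied_annual_rate(start, end, years):
    """Constant annual rate r such that start * (1 + r) ** years == end."""
    return (end / start) ** (1 / years) - 1

for city, payout in [("Boston", 5_000_000), ("Philadelphia", 2_300_000)]:
    rate = implied_annual_rate(135_000, payout, 200)
    print(f"{city}: ~{rate:.1%} per year")
```

Under these assumptions the implied rates come out to roughly 1–2% per year, which is a reminder that even modest compounding sustained over two centuries produces a very large multiple.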

Here's an NYT article from 1990 about the fight over the allocation of the funds after they had grown for 200 years.

I'm not sure what was ultimately done with them, but according to Wikipedia, Boston used the money to establish and fund a trade school (I think at both the 100- and 200-year marks), the Benjamin Franklin Institute of Technology.
