Here's a recent paper of mine that some EAs might be interested in. The link is to the open-access version. Here’s the preprint for those who prefer LaTeX-style typesetting.
Overview: At least since Derek Parfit’s Reasons and Persons, philosophers have been searching for a satisfactory population axiology: a theory of the value of populations. Unfortunately, the project has proved difficult. Some claim that it’s impossible. Several philosophers offer impossibility theorems which seem to prove that no population axiology can satisfy each of a small number of adequacy conditions. Of these impossibility theorems, Gustaf Arrhenius’s six theorems are perhaps the most compelling.
However, it’s recently been pointed out that each of Arrhenius’s theorems depends on a dubious assumption: Finite Fine-Grainedness. This assumption states, roughly, that you can get from a very positive welfare level to a very negative welfare level via a finite number of slight decreases in welfare. Lexical population axiologies deny Finite Fine-Grainedness, and so can satisfy all of Arrhenius’s plausible adequacy conditions. These lexical views have other advantages as well. They cohere nicely with most people’s intuitions in cases like Haydn and the Oyster, and they offer a neat way of avoiding the Repugnant Conclusion.
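To illustrate the idea (a toy model of my own for this post, not the formal setup in the paper or in Arrhenius's theorems): a lexical view might compare welfare levels as pairs whose first component takes lexical priority, so that no finite chain of slight decreases in the second component ever bridges the gap between a very positive and a very negative level.

```python
# Toy illustration of how a lexical view can deny Finite Fine-Grainedness.
# Welfare levels are modelled as (higher, lower) pairs compared lexicographically;
# a "slight decrease" is assumed only ever to lower the second component, and
# only by at most EPSILON. None of these names come from the paper.
EPSILON = 0.1

def slight_decrease(level):
    higher, lower = level
    return (higher, lower - EPSILON)

very_positive = (1, 0.0)   # a lexically higher welfare level
very_negative = (-1, 0.0)  # a lexically lower welfare level

# However many slight decreases we apply, the first component never changes,
# so we never get down to the very negative level:
level = very_positive
for _ in range(10**6):
    level = slight_decrease(level)
assert level > very_negative  # Python compares tuples lexicographically
```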
In this paper, I rework Arrhenius's impossibility theorems so that lexical views do not escape them. I point out that, since each of our population-affecting actions gives a non-zero probability to more than one distinct population, it is population prospect axiologies that are of practical relevance. I then prove impossibility theorems which state that no population prospect axiology can satisfy each of a small number of adequacy conditions. These theorems do not depend on Finite Fine-Grainedness, so even lexical views violate at least one of their conditions.
How we should respond to these theorems is another question. Though I don't say it in the paper, I believe that the Total View is as satisfactory as population prospect axiologies get. We should accept the Repugnant Conclusion (and even the Very Repugnant Conclusion) because each of the alternatives is even worse.
Thanks for posting this! If I understand your "risky" assumptions correctly, they seem to be targeted at people who believe (as a simple example):
Is that correct?
If so, what is the argument for believing both of these? My assumption is that someone who thinks that apples are lexically better than oranges would disagree with (2) and believe that any probability of an apple is better than any probability of an orange.
Side question: the "risky" axioms seem quite similar to the Archimedean axiom in some variants of the VNM utility theorem. I think you also assume completeness and transitivity – are they enough to recover the entire VNM theorem? (I.e. do your axioms imply that there is a real-valued utility function whose expectation we must be trying to maximize?)
This is interesting. It looks like the risky versions would follow from the Archimedean axiom + their non-risky versions.
I don't think you could get the independence axiom from the other axioms, though. Well, technically anything satisfying all of the axioms would satisfy independence (vacuously, since it's an impossibility theorem and nothing satisfies all of them), but if you consider only the risky axioms (or the Archimedean axiom), completeness, and transitivity, I don't see how you could get the independence axiom. Maybe maximizing the median value of some standard population axiology like total utilitarianism is a counterexample?
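For instance, here's a rough toy check (my own construction, reading "median" as the lower median of total welfare) of how such a view could violate independence:

```python
# Toy check that maximizing the lower median of total welfare can violate the
# independence axiom. A lottery is a dict mapping total welfare to probability.

def lower_median(lottery):
    """Smallest outcome at which cumulative probability reaches 1/2."""
    cum = 0.0
    for value in sorted(lottery):
        cum += lottery[value]
        if cum >= 0.5:
            return value

def mix(p, lot1, lot2):
    """The probability mixture p*lot1 + (1-p)*lot2."""
    out = {}
    for lot, weight in ((lot1, p), (lot2, 1 - p)):
        for value, prob in lot.items():
            out[value] = out.get(value, 0.0) + weight * prob
    return out

A = {1: 1.0}            # total welfare 1 for sure
B = {0: 0.6, 10: 0.4}   # probably 0, possibly 10
C = {10: 1.0}           # total welfare 10 for sure

# A is preferred to B, but mixing each with C (at probability 1/2) reverses
# the preference, contradicting independence.
assert lower_median(A) > lower_median(B)
assert lower_median(mix(0.5, B, C)) > lower_median(mix(0.5, A, C))
```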
Thanks! Your points about independence sound right to me.
Thanks for your comment! I think the following is a closer analogy to what I say in the paper:
On your side question, I don't assume completeness! But maybe if I did, then you could recover the VNM theorem. I'd have to give it more thought.
Yes! Nice paper! Lexical views don't get as much attention in economics as in philosophy, but it's well worth tracking down and sealing off that apparent leak. And, as you point out, being sensible about risk puts a lot of discipline on our proposals for population ethics.
... so let's stop writing in a way that assumes that avoiding the RC is necessary to be "satisfactory." :) Then a satisfactory population ethics is possible!
Thanks!
And agreed! The title of the paper is intended as a riff on the title of the chapter where Arrhenius gives his sixth impossibility theorem: 'The Impossibility of a Satisfactory Population Ethics.' I think that an RC-implying theory can still be satisfactory.
What goes wrong if we try to use lexical totalism again to avoid your new theorem? You can capture lexicality with a function taking values only in the real numbers, no vectors or anything.
Basically, you just need the maximum difference in the slighter values (which you use $l$ to denote) to never exceed a finite sure difference in the higher value (which you use $h$ to denote). But you can squash the whole real line into a finite interval with a function like arctan. Consider capturing lexical totalism with the function $f: \mathbb{Z} \times \mathbb{R} \to \mathbb{R}$ defined by

$$f(h, l) = h + \frac{1}{\pi}\arctan(l),$$

where you sum $h = \sum_i h_i$, $h_i \in \mathbb{Z}$, and $l = \sum_i l_i$, $l_i \in \mathbb{R}$, across individuals/instances before applying $f$, and then take the expected value of $f$ for ordering prospects.

By using the integers for the $h$ values, they're spaced out enough that any sure difference in them will always dominate any difference in $l$, since the range of $\frac{1}{\pi}\arctan$ has length 1. If I understood correctly, this function should also satisfy your risky versions of General Non-Extreme Priority and Non-Elitism, since for a fixed difference in $h$, letting the probability of that difference go to 0 makes the difference to the expected value of $f$ go to 0, and so it can be outweighed by a finite difference in $l$. $f$ should also satisfy all of the other exact conditions, since it's the same as lexical totalism in the exact cases.
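Here's a quick sketch of what I have in mind computationally (the representation of populations and prospects here is just my own illustration):

```python
import math

# Sketch of the arctan-squashed representation of lexical totalism described
# above. A population is a list of (h_i, l_i) pairs, one per individual, with
# h_i an integer and l_i a real; a prospect is a list of
# (probability, population) pairs, ordered by the expected value of f.

def f(population):
    h = sum(hi for hi, _ in population)
    l = sum(li for _, li in population)
    return h + math.atan(l) / math.pi  # (1/pi)*arctan(l) lies in (-1/2, 1/2)

def expected_f(prospect):
    return sum(p * f(pop) for p, pop in prospect)

# A sure difference of 1 in h dominates any difference in l ...
assert expected_f([(1.0, [(1, 0.0)])]) > expected_f([(1.0, [(0, 1e9)])])

# ... but an improbable difference in h can be outweighed by a sure difference
# in l, which is roughly how the risky conditions get satisfied.
risky = [(0.001, [(1, 0.0)]), (0.999, [(0, 0.0)])]
safe = [(1.0, [(0, 10.0)])]
assert expected_f(safe) > expected_f(risky)
```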
I discuss this kind of thing more here.
Thanks! This is a really cool idea and I'll have to think more about it. What I'll say now is that I think your version of lexical totalism violates RGNEP and RNE. That's because of the order in which I have the quantifiers. I say, 'there exists p such that for any k...'. I think your lexical totalism only satisfies weaker versions of RGNEP and RNE with the quantifiers the other way around: 'for any k, there exists p...'.
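Schematically (with Φ standing in for the body of the condition rather than the exact formulation in the paper), the contrast is between

$$\exists p \, \forall k : \Phi(p, k) \qquad \text{and} \qquad \forall k \, \exists p : \Phi(p, k),$$

and the first, uniform reading is the stronger one: it demands a single p that works for every k at once.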
Hmm, and the population X also comes after, rather than having the m, p, k possibly depend on X. It does look like your conditions have a "uniformity" that my proposal might not satisfy, i.e. you get existential quantifiers before universal quantifiers, rather than existential quantifiers all last (compare continuity vs. uniform continuity, and pointwise vs. uniform convergence of a sequence of functions). The original GNEP and NE axioms have some uniformity, too.
I think informal explanations of the axioms often don't get this uniformity across, which suggests to me that the uniformity itself is not so intuitive or compelling in the first place, even though it's doing a lot of the work in these theorems. Especially when the conditions are uniform in the unaffected background population X, i.e. you require the existence of an object that works for all X: that seems to strongly favour separability/additivity/the independence of unconcerned agents, which of course favours totalism.
Uniformity also came up here, with respect to Minimal Tradeoffs.
Yes, that all sounds right to me. Thanks for the tip about uniformity and fanaticism! Uniformity also comes up here, in the distinction between the Quantity Condition and the Trade-Off Condition.
I think there's a typo in the definition of Risky General Non-Extreme Priority (exact formulation): you have a $c \geq \beta$, but I think that should be $c \geq b$.
Ah no, that's as it should be! $c \geq \beta$ is saying that $c$ is one of the very positive welfare levels mentioned on page 4.