I'm currently writing a dissertation on Longtermism with a focus on non-consequentialist considerations and moral uncertainty. I'm generally interested in the philosophical aspects of global priorities research and plan to keep contributing to it after my DPhil. Before moving to Oxford, I studied philosophy and a bit of economics in Germany, where I helped organize the local group EA Bonn for a couple of years. I also worked for a few semesters as a research assistant and taught some seminars on moral uncertainty and the epistemology of disagreement.
If you have a question about philosophy, I could try to help you with it :)
I wonder which of these things would have happened (in a similar way) without any EA contribution, and how much longer they would have taken to happen. (In MacAskill's sense: how contingent were these events?) I don't have great answers, but it's an important question to keep in mind.
The problem (often called the "statistical lives problem") is even more severe: ex ante contractualism not only prioritizes identified people when the alternative is to potentially save very many people, or many people in expectation; it does the same when the alternative is to save many people for sure, as long as it is unclear which members of a sufficiently large population will be saved. For each individual, it is then still unlikely that they will be saved, resulting in diminished ex ante claims that are outweighed by the undiminished ex ante claim of the identified person. And that, I agree, is absurd indeed.
Here is a thought experiment for illustration: There are two missiles circling Earth. If not stopped, one missile is certain to kill Bob (who is alone on a large field) and nobody else. The other missile is going to kill 1000 people, but it could be any 1000 of the X people living in large cities. We can only shoot down one of the two missiles. Which one should we shoot down?
Ex ante contractualism implies that we should shoot down the missile that would kill Bob, since he has an undiscounted claim while the X people in large cities all have strongly diminished claims due to the small probability that any given one of them would be killed by the missile. But obviously (I'd say) we should shoot down the missile that would kill 1000 people. (Note that we could change the case so that not 1000 but e.g. 1 billion people would be killed by the one missile.)
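To make the diminution explicit, here is a rough sketch, on the assumption (one natural way to spell out the ex ante view) that the strength of a person's claim scales linearly with the probability of their being harmed:

$$\text{Bob's claim} \propto 1 \cdot h, \qquad \text{each city dweller's claim} \propto \frac{1000}{X} \cdot h,$$

where $h$ is the harm of death. Since the view compares individual claims rather than their sum, Bob's undiscounted claim wins whenever $X > 1000$; with $X = 10^{8}$, say, each city dweller's claim has only $10^{-5}$ times the strength of Bob's, however many city dwellers there are.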
Hmm, I can't recall all its problems right now, but for one, I think the view is then no longer compatible with ex ante Pareto - which I find the most attractive feature of the ex ante view compared to other views that limit aggregation. If it's necessary for justifiability that all the subsequent actions of a procedure are ex ante justifiable, then the initiating action could be in everyone's ex ante interest and still not be justified, right?
Yes, that's another problem indeed - thanks for the addition! Johann Frick ("Contractualism and Social Risk") offers a "decomposition test" as a solution, on which (roughly) every action of a procedure needs to be justifiable at the time of its performance for the procedure to be justified. But this "stage-wise ex ante contractualism" has its own additional problems.
I should also at least mention that I think the more plausible versions of limiting aggregation under risk are quite compatible with classic long-term interventions such as x-risk mitigation. (I agree that the "ex post" view that Emma Curran discusses is not very compatible with x-risk mitigation either, but I think that this view is not much better than the ex ante view and that there are other views that are more plausible than both.) Tomi Francis from GPI has an unpublished paper that reaches similar conclusions. I guess this is not the right place to go into any detail, but I think it is even initially plausible that small probabilities of much better future lives ground claims that are more significant than claims usually considered irrelevant, such as claims based on the enjoyment of watching part of a football match or on the suffering of a mild headache.
Thanks for your helpful reply! I'm very sympathetic to your view on moral theory and applied ethics: most (if not all) moral theories face severe problems, and that is not generally sufficient reason not to consider them when doing applied ethics. However, I think the ex ante view is one of those views that don't deserve more than negligible weight - which is where we seem to have different judgments. Even taking into consideration that alternative views have their own problems, the statistical lives problem seems to be as close to a "knock-down argument" as it gets. You are right that there are possible circumstances in which the ex ante view would not prioritize identified people over any number of "statistical" people, and these circumstances might even be common. But the fact remains that there are also possible circumstances in which the ex ante view does prioritize one identified person over any number of "statistical" people - and at least to me this is just "clearly wrong". I would be less confident if I knew of advocates of the ex ante view who remain steadfast in light of this problem; but no one seems to be willing to bite this bullet.
After pushing so hard for rejecting the ex ante view, I feel like I should stress that I really appreciate this type of research. I think we should consider the implications of a wide range of possible moral theories, and excluding certain moral theories from consideration is a risky move. In fact, I think an ideal analysis under moral uncertainty would include ex ante contractualism; I'm only afraid that people tend to give too much weight to its implications and that this is worse than (for now) not considering it at all.
Hey Bob, I'm currently working on a paper about a similar issue, so this has been quite interesting to read! (I'm discussing the implications of limited aggregation more generally, but as you note, contractualism's implications are distinctive primarily because of its (partially) non-aggregative nature.) While I mostly agree with your claims about the implications of the ex ante view, I disagree with your claim that it is the most plausible version of contractualism. In fact, I think the ex ante view is clearly wrong and we should not be much concerned with what it implies.
First, briefly on the application part. I think you are right that, given the ex ante view, we should not focus on mitigating x-risks and should rather perform global health interventions. However, as you note, there is usually a very large group of potential beneficiaries when it comes to global health interventions, so the probability that any given individual is benefited is quite small, resulting in heavily diminished ex ante claims. I wonder, therefore, whether on the ex ante view we shouldn't rather spend our resources on (relatively needy) people we know, or on people in small communities. Even if these people would benefit from our resources 100+ times less than the global poor, this could well be outweighed by the much higher probability that each of these individuals is actually benefited (see the sketch below).
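Here is a rough sketch of that comparison, again assuming that ex ante claim strength is probability times benefit (the numbers are purely illustrative):

$$\underbrace{0.9 \cdot \tfrac{b}{100}}_{\text{person we know}} = 0.009\,b \qquad \text{vs.} \qquad \underbrace{10^{-4} \cdot b}_{\text{one of the global poor}} = 0.0001\,b$$

If someone we know is benefited with probability 0.9, while each member of a very large pool of potential beneficiaries is benefited with probability $10^{-4}$, the known person's ex ante claim comes out 90 times stronger despite the 100-fold smaller benefit.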
But again, I think the ex ante view is clearly false anyway. The easiest way to see this is that the view implies that we should prioritize one identified person over any number of "statistical" people. That is: on the ex ante view, we should save a given person for sure rather than (definitely!) save one million people, if these are randomly chosen from a sufficiently large population. In fact, there are even worse implications (the identified person could merely lose a finger, and not her life, if we don't help), but I think this implication is already bad enough to confidently reject the view. I don't know of anybody who is willing to accept that implication. The typical (if not universal?) reaction of advocates of the ex ante view is to go pluralist and claim that the verdicts of the ex ante view correspond to only one of several pro tanto reasons. As far as I know, no such view has actually been developed, and I think any such view would be highly implausible as well; but even if it succeeded, its implications would be much more moderate: all we'd learn is that there is one among several pro tanto reasons that favours acting in (presumably) some short-term way. This could well be compatible with classic long-term interventions being overall most choiceworthy / obligatory.
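To spell out why "sufficiently large" does all the work here (the same linear-diminution sketch as above): if the one million people saved are drawn at random from a population of size $N$, then

$$\text{each person's ex ante claim} \propto \frac{10^{6}}{N} \cdot h \longrightarrow 0 \quad \text{as } N \to \infty,$$

while the identified person's claim retains its full strength $1 \cdot h$. Because the view weighs claims individually rather than adding them up, no number of such vanishing claims can outweigh the identified person's undiminished claim.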
I'm sure I'm not telling you much (if anything) new here, so I wonder what you think of these arguments?
I like your analysis of the situation as a prisoner's dilemma! I think this is basically right. At least, there generally seems to be some community cost (or, more generally, negative externality) to not being transparent about one's affiliation with EA. And, as usual with externalities, I expect individuals to underappreciate it when making decisions. So even if it is not always decisive – the cost of disclosing one's EA affiliation might sometimes be larger – it is important to be reminded of this externality, and the reminder might be especially valuable since EAs tend to be altruistically motivated!
I wonder if you have any further thoughts on what the positive effects of transparency are in this case? Are there important effects beyond indicating diversity and avoiding tokenization? Perhaps there are also more 'inside-directed' effects that directly affect the community, and not only via how it appears to outsiders?