I'm currently writing a dissertation on longtermism, with a focus on non-consequentialist considerations and moral uncertainty. I'm generally interested in the philosophical aspects of global priorities research and plan to contribute to that research after my DPhil as well. Before moving to Oxford, I studied philosophy and a bit of economics in Germany, where I helped organize the local group EA Bonn for a couple of years. I also worked in Germany for a few semesters as a research assistant and taught some seminars on moral uncertainty and the epistemology of disagreement.
If you have a question about philosophy, I could try to help you with it :)
Good reply! I thought of something similar as a possible objection to my premise (2), that 80k should be the one to fill the role of the cause-neutral org. Basically, there are opportunity costs to 80k filling this role because it could instead fill the role of (e.g.) an AI-focused org. The question is how high these opportunity costs are, and you point out two important factors. What I take to be important, and plausibly decisive, is that 80k is especially well suited to fill the role of the cause-neutral org (more so than the role of an AI-focused org) due to its history and the brand it has built. Combined with a 'global' perspective on EA according to which there should be one such org, it seems plausible to me that this org should be 80k.
Here is a simple argument that this strategic shift is a bad one:
(1) There should be (at least) one EA org that gives career advice across cause areas.
(2) If there should be such an org, it should be (at least also) 80k.
(3) Thus, 80k should be an org that gives career advice across cause areas.
(Put differently, my reasoning is something like this: Should there be an org like the one 80k has been so far? Yes, definitely! But which one should it be? How about 80k!?)
I'm wondering which premise 80k disagrees with (and what you think about them!). They indicate in this post that they think it would be valuable to have orgs covering other individual cause areas, such as biorisk. But I think there is a strong case for having an org that is not restricted to specific cause areas. After all, we don't want to do the most good in cause area X but the most good, period.
At the same time, 80k seems like a great candidate for such a cause-neutral org. They have done great work so far (as far as I can tell), and through this work they have built up valuable resources (experience, reputation, outputs, ...) that would help them do even better in the future.
Does he explicitly reject some EA ideas (e.g. longtermism), and does he give arguments against them? If not, it seems a bit odd to me to promote a new school that is like EA in most other important respects. It might still be good to have this school in addition, but its relation to EA and what additional value it might offer are obvious questions that should be addressed.
I agree this is a commonsensical view, but it also seems to me that intuitions here are rather fragile and depend a lot on the framing. I can actually get myself to find the opposite intuitive, i.e. that we have more reason to make happy people than to make people happy. Think about it this way: you can use a bunch of resources to make an already happy person even happier, or you can use these resources to enable the existence of a happy person who would otherwise not be able to exist at all. (Say, you could double the happiness of the existing person or create an additional person with the same level of happiness.) Imagine you create this person, she has a happy life, and she is grateful for it. Was your decision to create her wrong? Would it have been any better not to create her but to make the original person happier still? Intuitively, I'd say, the answer is 'no'. Creating her was the right decision.
I like your analysis of the situation as a prisoner's dilemma! I think this is basically right. At least, there generally seems to be some community cost (or, more generally, a negative externality) to not being transparent about one's affiliation with EA. And, as usual with externalities, I expect individuals to underappreciate it when making decisions. So even if this externality is not always decisive, since the cost of disclosing one's EA affiliation might be larger in some cases, it is important to be reminded of it – and the reminder might be especially valuable since EAs tend to be altruistically motivated!
I wonder if you have any further thoughts on what the positive effects of transparency are in this case. Are there important effects beyond indicating diversity and avoiding tokenization? Perhaps there are also more 'inside-directed' effects that affect the community directly, and not only via how it appears to outsiders?
I wonder which of these things would have happened (in a similar way) without any EA contribution, and how much longer they would have taken to happen. (In MacAskill's sense: how contingent were these events?) I don't have great answers, but it's an important question to keep in mind.
The problem (often called the "statistical lives problem") is even more severe: ex ante contractualism does not only prioritize identified people when the alternative is to potentially save very many people, or many people in expectation; the same goes when the alternative is to save many people for sure, as long as it is unclear which members of a sufficiently large population will be saved. For each individual, it is then still unlikely that they will be saved, resulting in diminished ex ante claims that are outweighed by the undiminished ex ante claim of the identified person. And that, I agree, is absurd indeed.
Here is a thought experiment for illustration: There are two missiles circling Earth. If not stopped, one missile is certain to kill Bob (who is alone on a large field) and nobody else. The other missile is going to kill 1000 people, but it could be any 1000 of the X people living in large cities. We can only shoot down one of the two missiles. Which one should we shoot down?
Ex ante contractualism implies that we should shoot down the missile that would kill Bob, since he has an undiscounted claim while the X people in large cities all have strongly diminished claims due to the small probability that any one of them would be killed by the missile. But obviously (I'd say) we should shoot down the missile that would kill 1000 people. (Note that we could change the case so that not 1000 but e.g. 1 billion people would be killed by the one missile.)
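To make the discounting explicit, here is a rough sketch, assuming (as in standard presentations of ex ante contractualism, though it is not the only way to spell out the view) that the strength of an individual's ex ante claim scales with their probability of suffering the harm: Bob's claim is proportional to $1 \cdot h$, while each city-dweller's claim is proportional to $\frac{1000}{X} \cdot h$, where $h$ is the harm of death. For $X = 10^8$, each individual claim is discounted by a factor of $10^{-5}$; and since the view compares individual claims rather than summing them, Bob's undiscounted claim wins no matter how many diminished claims stand against it.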
Hmm, I can't recall all its problems right now, but for one, I think the view is then no longer compatible with ex ante Pareto, which I find the most attractive feature of the ex ante view compared to other views that limit aggregation. If it's necessary for justifiability that all the subsequent actions of a procedure are ex ante justifiable, then the initiating action could be in everyone's ex ante interest and still not be justified, right?
Yeah, framed like this, I like their decision best. In the important sense, you could say, they are still cause-neutral. It's just that their cause-neutral evaluation has now come to a very specific result: all the most cost-effective career choices are in (or related to) AI. If this is indeed the whole motivation for 80k's strategic shift, I would have liked the post to use this framing more directly, with "we have updated our beliefs on the most impactful careers" rather than "we have made strategic shifts" as the headline. On my first reading, it wasn't clear to me whether the latter is merely a consequence of the former.