Hello Ian. Could you say a bit about what providing strategy and research looks like? I don't have an intuitive grasp of what sort of things that involves and I'd appreciate an example or two!
FWIW, I think it helps to think of effective altruism along the following lines. This is more or less taken from chapters 5 and 6 of my PhD thesis which got stuck into all this in tedious (and, in the end, rather futile) depth.
Who? As in, who are the beneficiary groups?
Options: people (in the near-term), animals (in the near-term), future sentient life
What? As in, what are the problems?
This gives you your cause areas, i.e. the problems you want to solve that directly benefit a particular group, e.g. poverty, factory farming, X-risks.
Effective altruism is a practical project, ultimately concerned with what the best actions are. Solving a problem requires thinking, at least implicitly, about particular solutions to that problem, so I think it's basically a nonsense to try to compare "cause areas" without reference to specific things you can do, aka solutions. Hence, when we say we're comparing "cause areas", what we are really doing is assessing the best solution in each cause area "bucket" and evaluating their cost-effectiveness. The most important cause = the one with the very most cost-effective intervention.
How? As in, how can the problems be best solved?
Here, I think it helps to distinguish between interventions and barriers. Interventions are the things you do that ultimately solve the problem, e.g. cash transfers and bednets for helping those in poverty. You can then ask what the barriers are, i.e. the things that stop those interventions from being delivered. Is it because people don't know about them? Do they want them but can't afford them, etc.? A solution removes a particular barrier to a particular intervention, e.g. simply providing a bednet.
What's confusing is where to fit in things like "improving rationality of decision-makers" and "growing the EA movement", which people sometimes call causes. I think of these as 'meta-causes' because they indirectly and diffusely work to remove the barriers to many of the 'primary causes', e.g. poverty.
It's not clear we need answers to the 'why?', 'when?', and 'where?' queries. Like I say, if you want to waste an hour or two, I slog through these issues in my thesis.
I think you're right to point out that we should be clear about exactly what's repugnant about the repugnant conclusion. However, Ralph Bader's answer (not sure I have a citation; I think it's in his book manuscript) is that what's objectionable about moving from world A (take this as the current world) to world Z is that creating all those extra lives isn't good for the new people, but it is bad for the current population, whose lives are made worse. I share this intuition. So I think you can cast the repugnant conclusion as being about population ethics.
FWIW, I share your intuition that, in a fixed population, one should just maximise the average.
Strong upvote. I thought this was a great reply: not least because you finally came clean about your eyes, but because I think the debate in population ethics is currently too focused on outputs and insufficiently interested in the rationales for those outputs.
Ah, I see. No, you've got it right. I'd somehow misread it, and the view works the way I had thought it was supposed to: non-existence counts as zero, so non-existence can be compared to existence in terms of welfare levels.
Right. So, looking at how HMV was specified up top - parts II and III - people who exist in only one of the two outcomes count for zero even if they have negative well-being in the world where they exist. That was how I interpreted the view as working in my comment.
One could specify a different view on which creating net-negative lives, even if they couldn't have had a higher level of welfare, is bad, rather than neutral. This would need a fourth condition.
(My understanding is that people who like HMVs tend to think that creating uniquely existing negative lives is bad, rather than neutral, as that captures the procreative asymmetry.)
I found this post very thought-provoking (I want to write a paper in this area at some point) so might pop back with a couple more thoughts.
Arden, you said this decreased your confidence that person-affecting views can be made to work, but I'm not sure I understand your thinking here.
To check, was this just because you thought the counterpart stuff was fishy, or because you thought it has radical implications? I'm assuming it's the former, because it wouldn't make sense to decrease one's confidence in a view on account of its more or less obvious implications: the gist of person-affecting views is that they give less weight to merely possible lives than impersonal views do. Also, please show me a view in population ethics without (according to someone) 'radical implications'!
(Nerdy aside I'm not going to attempt to put in plain English: FWIW, I also think counterpart relations are fishy. It seems you can have a de re or a de dicto person-affecting view (I think this is the same as the 'narrow' vs 'wide' distinction). On the former, what matters is the particular individuals who do or will exist (whatever we do). On the latter, what matters is the individuals who do or will exist, whomsoever they happen to be. Meacham's is of the latter camp. For a different view, which takes de dicto lives as what matters, see Bader (forthcoming).)
It seems to me that, if one is sympathetic to person-affecting views, it is because one finds these two theses plausible: (1) only personal value is morally significant - things can only be good or bad if they are good or bad for someone - and (2) non-comparativism, that is, that existence cannot be better or worse for someone than non-existence. But if one accepts (1) and (2), it's obvious that lives de re matter, but unclear why one would care about lives de dicto. What makes counterpart relations fishy is that they are unmotivated by what seem to be the key assumptions in the area.
Thanks a lot for writing this up! I confess I'd had a crack at Meacham's paper some time ago and couldn't really work out what was going on, so this is helpful. One comment.
I don't think the view implies what you say it implies in the Your Reaction part. We have only two choices and all those people who exist in one outcome (i.e. the future people) have their welfare ignored on this view - they couldn't have been better off. So we just focus on the current people - who do exist in both "bomb" and "not-bomb". Their lives go better in "not-bomb". Hence, the view says we shouldn't blow up the world, not - as you claim - that we should. Did I miss something?
It strikes me that Deaton has, in theory, got a point. To put a label on it, one should not do 'randomisation (or replication) without explanation'. Regarding Russell's chicken, the flaw in the chicken's assumption that it will get fed today is that it hasn't understood the structure of reality. Yet this does not show one should, in practice, give up on RCTs and replication, only that one should use them in combination with a thoughtful understanding of the world.
For Deaton's worry to have force, one would need to believe that because one context might be different from another, we should assume it is. Yet, saliently, that doesn't follow. There could be a fairly futile argument about on whom the burden of proof lies to show that one context of replication is relevantly like another, but it seems the dutiful next thing to do would be for advocates to argue why they think it is and for critics to argue why it isn't.
I am intrigued by his separate point that getting governments to be more receptive to their citizens is a valuable intervention - the point being that, in poor countries, governments collect so little tax from those in poverty that they feel little incentive to notice them.
Thanks for bringing this up. I've been mulling on this for a while and might write something myself. A couple of thoughts.
If you discover you could be doing a lot more good than you currently are, you could have (at least) two reactions: disappointment that you haven't been doing more in the past and/or excitement that you could do better in the future. Both of these perspectives are valid and it seems you could focus on either one.
For those who, like me, tend to find it quite easy to be disappointed with and hard on themselves, it might help to think: "well, the past has happened. There's nothing you can do about that now. So let's look to the future."
The title of this post made me think you were going to talk about something else, which is whether those who aren't in the top 1% of a given field (I suppose this most naturally applies in academia) have very little impact. I don't know if this is true - it's certainly the sort of thing people believe, but it might just be folk wisdom.
It does strike me as true that the people at the top of a field have a disproportionate share of the impact.
What does that imply you should do if you're not in the top 1% and want to do the most good? Well, maybe you should keep going in your field, but maybe you should switch. It depends on context.
A totally separate question is how you should feel if you aren't one of those people having a huge impact.
I take it I should be trying to do the most good I can do, emphasis on the 'I'. I can't be anyone else, so it's irrelevant, in some important sense, whether or not others do more (or less). The right comparison is between how much good you do in your actual life and how much you would do in the other lives you could have led. The important bit is that I am trying my best. Nothing more can be asked because nothing more can be given.