MichaelPlant

5352 · Joined Sep 2015

Comments (631)

To WELLBY or not to WELLBY? Measuring non-health, non-pecuniary benefits using subjective wellbeing

Yes for both a) and b). But the strategy is secret...

(Basically, the idea is to show that using measures of how people feel can and will give different priorities, and that we should therefore pay more attention to them.)

To WELLBY or not to WELLBY? Measuring non-health, non-pecuniary benefits using subjective wellbeing

I'm not sure I understand your point. Kahneman famously distinguishes between decision utility - what people do or would choose - and experience utility - how they feel as a result of their choice. SWB measures allow us to get at the second. How would you empirically test which is the better measure of preferences?

Remuneration In Effective Altruism

Hmm, I don't think you've engaged with my point: there's something odd about very altruistically capable people requiring very high salaries, lest they choose to go and do non-impactful jobs instead. The charity sector famously has lower salaries because the work is more intrinsically rewarding than regular corporate fare.

The salaries might have an effect, but I don't think you've shown that in this case - the linked tweet is anecdata. A possibility is that higher salaries in one EA org just pull the better candidates to that org, so I'd want evidence showing that higher salaries pull in 'new' candidates.

I'm not sure the 'fraction of monetised impact' bit of that is relevant. As someone who runs an org, I only have access to my budget, not the monetised impact - a job might have '£1m a year of impact', but that's, um, more than 4x HLI's budget. For someone with enormous resources, e.g. Open Phil, it might make more sense to think like this.

Of course, it might be that we just have different meanings of 'high', and I would have welcomed it if you'd offered an operationalisation in your discussion. I'm not sure I disagree with your conclusion, I just don't think you've proved your case.

Remuneration In Effective Altruism

Stefan, your most important argument seems to be that higher salaries will help with recruitment and motivation. But you don't address the concern that there's something a bit puzzling about the most competent effective altruists being motivated by making money for themselves.

If someone says "look, I'll do the work, and I will be excellent, but you have to pay me $150k a year or I walk", I would doubt they were that serious about helping other people. They'd sound more like your classic corporate lawyer than an effective effective altruist.

Deworming and decay: replicating GiveWell’s cost-effectiveness analysis

Ah, I was waiting for someone to bring these up!

On cluster vs sequence thinking, I guess I don't really understand what the important distinction here is supposed to be. Sometimes you need to put various assumptions together to reach a conclusion - cost-effectiveness analysis is a salient example. However, for each specific premise, you could think about different pieces of information that would change your view on it. Aren't these just the sequence and cluster bits, respectively? Okay, so you need to do both. Hence, if someone were to say 'that's wrong - you're using sequence thinking', I think the correct response is to look at them blankly and say 'um, okay... so what exactly are you saying the problem is?'

On cost-effectiveness, I'm going to assume that this is what GiveWell (and others) should be optimising for. And if they aren't optimising for cost-effectiveness then, well, what are they trying to do? I can't see any statement of what they are aiming for instead.

Also, I don't understand why trying to maximise cost-effectiveness will fail to do so. Of course, you shouldn't do naive cost-effectiveness analysis, just like you probably shouldn't be naive in general.

I appreciate that putting numbers on things can sometimes feel like false precision. But that's a reason to use confidence intervals. (Also, as the saying goes, "if it's worth doing, it's worth doing with made-up numbers".) Clearly, GiveWell do need to do cost-effectiveness assessments, even if just informally and in their heads, to decide what their recommendations are. But just as crucial as sharing the numbers is explaining the reasons and evidence behind your decision, so people can check them and see if they agree. The point of this post is to highlight an important part of the analysis that was missing.
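To illustrate what reporting intervals rather than point estimates could look like, here is a minimal sketch in Python - with entirely made-up numbers and hypothetical parameter names, not GiveWell's actual model - that propagates uncertainty through a toy cost-effectiveness estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000  # Monte Carlo draws

# Illustrative inputs, each given an uncertainty distribution rather than a point estimate.
cost_per_person = rng.lognormal(mean=np.log(1.0), sigma=0.2, size=n)     # dollars
effect_per_person = rng.lognormal(mean=np.log(0.01), sigma=0.5, size=n)  # benefit units
adjustment = rng.uniform(0.3, 1.0, size=n)                               # e.g. a replicability discount

cost_effectiveness = effect_per_person * adjustment / cost_per_person

# Report a central estimate together with an interval, not a single number.
low, mid, high = np.percentile(cost_effectiveness, [5, 50, 95])
print(f"benefit per dollar: {mid:.4f} (90% interval: {low:.4f} to {high:.4f})")
```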

The Charlemagne Effect: The Longtermist Case For Neartermism

[I read this quickly, so sorry if I missed something]

First, on the scope of the argument.  When you talk about "Traditional neartermist interventions", am I right in thinking you *only* have life-saving interventions in mind? Because there are "traditional neartermist interventions", such as alleviating poverty, that not only do not save lives, but also do not appear to have large effects on future population size.

If your claim applies only to the traditionally near-term interventions which increase the population size (in the near term), then you should make that clear. (As an aside, I found the "TLIs" and "TNIs" abbreviations really confusing because the two are so similar, which would be a further reason to change them.)

Second, on your argument itself. It seems to rest on this speculative claim.

Those children will go on to have children, and those children will have children, and so on, following an exponential curve. The more time goes by, the more the number of descendants will accelerate until there are millions, billions, or even more future people.

But, as an empirical matter, this is highly unlikely. If you look at the UN projections for world population, it looks like global population will peak around 2100. As countries get richer, fertility - the number of children each woman has - goes down. Fertility rates are below 2 in nearly every rich country (e.g. see this), which is below the replacement-level fertility of 2.1, the average number of children per woman you need to keep the population constant over time. As poorer countries get richer, we can expect fertility to come down there too. So, your claim might be true if the Earth's population were set to keep growing in perpetuity, but it's not. On current projections, saving a life in a high-fertility country might lead to one to two generations of above-replacement fertility. (You mention interstellar colonisation, but it's pretty unclear what relationship that would have with Earth's population size.)
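To make the arithmetic concrete, here is a toy sketch (illustrative numbers only, not a demographic model; it assumes a constant fertility rate and ignores mortality timing and migration) of how the expected number of descendants of one extra person behaves below versus above replacement fertility:

```python
def total_descendants(fertility: float, generations: int, replacement: float = 2.1) -> float:
    """Expected cumulative descendants of one extra person, assuming constant fertility.
    Each generation scales by the net reproduction ratio fertility / replacement."""
    ratio = fertility / replacement
    return sum(ratio ** g for g in range(1, generations + 1))

# Sub-replacement fertility: the series converges instead of exploding.
print(total_descendants(fertility=1.6, generations=10))   # ~3.0
print(total_descendants(fertility=1.6, generations=100))  # ~3.2 (barely changes)

# Fertility above replacement sustained forever - the scenario the quoted claim
# implicitly assumes - grows without bound.
print(total_descendants(fertility=2.6, generations=100))  # ~1e10
```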

All this is really before you get into debates about optimum population, a topic there is huge uncertainty about. 

Leaning into EA Disillusionment

I really appreciated you writing this up. A couple of thoughts.

First, a useful frame: when people are unhappy about a group, any group, they have two choices. They can either 'exit' or 'voice'. This was originally discussed in relation to consumers buying goods, but it's general and as simple as it sounds: you leave or you complain. Which of those people do is more complicated: it depends, afaict, on how loyal they are to the group and also whether they think voicing their concerns will work. 

It seems like what you're saying is that people come into EA, become loyal to the idea, get disillusioned because the reality doesn't live up to the hype, but then exit rather than voice. The reason they exit is some combination of not wanting to harm the project or be seen as a bad actor, and not thinking their criticisms will be listened to.

Second, lots of this really resonates with me. EA sort of sells itself as being full of incredibly smart, open-minded, kind, dedicated people. And, for the most part, they are - at least by the standards of the rest of the world. But they are still people: prone to ego, irrationality, distraction, championing their pet projects, sticking to their guns, and the rest. And these people work together in groups we call 'organisations'. And even with the best people, getting them to work together and work differently is a struggle... It is a recipe for disillusionment.

(I recognise I'm not offering any solutions here, sorry...)

Leaning into EA Disillusionment

I don't buy this. Perhaps I don't understand what you mean.

To press the point, imagine we're at the fabled Shallow Pond. You see a child drowning. You can easily save them at minimal cost. However, you don't. I point out you could easily have done so. You say "I know, but I don't want to do that". I wouldn't consider that a satisfactory response.

If you then said "I don't need to justify my choices on effective altruist grounds", I might just look at you blankly for a moment and say "wait, what has 'effective altruism' got to do with it? What about, um, just basic ethics?" Our personal choices often do or could affect other people.

I don't think that people should endlessly self-flagellate about whether they are doing enough. You need to recognise it's a marathon, not a sprint, and there are serious limits to what we can will ourselves to do, even if we think it would be a good idea in principle. And it's important to be as kind, forgiving, and accepting towards ourselves as we think we should be to others we love. But what you've said, taken at face value, seems like carte blanche for not trying.

A philosophical review of Open Philanthropy’s Cause Prioritisation Framework

Thanks very much for these comments! Given that Alex - who I'll refer to in the 3rd person from here - doesn’t want to engage in a written back and forth, I will respond to his main points in writing now and suggest he and I speak at some other time.

Alex’s main point seems to be that Open Philanthropy (OP) won't engage in idle philosophising: they’re willing to get stuck into the philosophy, but only if it makes a difference. I understand that - I only care about decision-relevant philosophy too. Of course, sometimes the philosophy does really matter: the split of OP into the ‘longtermism’ and ‘global health and wellbeing’ pots is an indication of this.

My main reply is that Alex has been too quick to conclude that moral philosophy won't matter for OP’s decision-making on global health and wellbeing. Let me (re)state a few points which show, I think, that it does matter and, as a consequence, OP should engage further. 

  1. As John Halstead has pointed out in another comment, the location of the neutral point could make a big difference and it's not obvious where it is. If this was a settled question, I might agree with Alex’s take, but it's not settled. 
  2. Relatedly, as I say in the post, switching between two different accounts of the badness of death (deprivationism and TRIA) would alter the relative value of life-extending versus life-improving interventions by a factor of perhaps 5 or more.
  3. Alex seems to object to hedonism, but I'm not advocating for hedonism (at least, not here). My main point is about adopting a 'subjective wellbeing (SWB) worldview', where you use the survey research on how people actually experience their lives to determine what does the most good. I’m not sure exactly what OP’s worldview is - that’s basically the point of the main post - but it seems to place little weight on people's feelings (their 'experienced utility') and far more on what they do or would choose (their 'decision utility'). But, as I argue above, these two can substantially come apart: we don’t always choose what makes us happiest. Indeed, we make predictable mistakes (see our report on affective forecasting for more on this).
  4. Mental health is a problem that looks pretty serious on the SWB worldview but appears nowhere in the worldview that OP seems to favour. As noted, HLI finds therapy for depressed people in LICs is about 10x more cost-effective than cash-transfers in LICs. That, to me, is sufficient to take the SWB worldview seriously. I don't see what this necessarily has to do with animals.
  5. Will the SWB lens reveal different priorities in other cases? Very probably - pain and loneliness look more important, economic growth less, etc. - but I can't say for sure because attempts to apply this lens are so new. I had hoped OP's response would be "oh, this seems to really matter, let's investigate further" but it seems to be "we're not totally convinced, so we'll basically ignore this".
  6. Alex says "we don’t think that different measures of subjective wellbeing (hedonic and evaluative) neatly track different theories of welfare" but he doesn’t explain or defend that claim.  (There are a few other places above where he states, but doesn't argue for, his opinion, which makes it harder to have a constructive disagreement.) 
  7. On the total view, saving lives, and fertility, we seem to be disagreeing about one thing but agreeing about another. I said the total view would lead us to reduce the value of saving lives. Alex says it might actually cause us to increase the value of saving lives when we consider longer-run effects. Okay. In which case, it would seem we agree that taking a stand on population ethics might really matter, and I take it we ought to see where the argument goes (rather than ignore it in case it takes us somewhere we don't like).
  8. It seems that Alex’s conclusion that moral philosophy barely matters relies heavily on the reasoning in the spreadsheet linked to in footnote 50 of the technical update blog post. The footnote states "Our  [OP's] analysis tends to find that picking the wrong moral weight only means sacrificing 2-5% of the good we could do". I discussed this above in footnote 3, but I expect it’s worth restating and elaborating on that here. The spreadsheet isn’t explained and it’s unclear what the justification is. I assume the “2-5%” thing is really a motte-and-bailey. To explain, one might think OP is making a very strong claim such as “whatever assumptions you make about morality makes almost no difference to what you ought to do”. Clearly, that claim is implausible. If OP does believe this, that would be an amazing conclusion about practical ethics and I would encourage them to explain it in full. However, it seems that OP is probably making a much weaker claim, such as “given some restrictions on what our moral views can be, we find it makes little difference which ones we pick”. This claim is plausible, but of course, the concern is that the choice of moral views has been unduly restricted. What the preceding bullet points demonstrate is that different moral assumptions (and/or ‘worldviews’) could substantially change our conclusions - it's not just a 2-5% difference.

I understand, of course, that investigating - and, possibly, implementing - additional worldviews is, well, hassle. But Open Philanthropy is a multi-billion dollar foundation that's publicly committed to worldview diversification and it looks like it would make a practical difference.

Wheeling and dealing: An internal bargaining approach to moral uncertainty

Ah, this is very interesting! My only comment on this, which develops the idea, is that the agents would realise that the other agents are also certain, so won't change their minds whatever the oracle pronounces. If we conceive of there as being undecided sub-agents, then the decided sub-agents could think about the value of getting information that might convince them.
