MichaelPlant

6772 · Joined Sep 2015

Bio

I'm the Director of the Happier Lives Institute and Postdoctoral Research Fellow at Oxford's Wellbeing Research Centre. I'm a philosopher by background and did my DPhil at Oxford.

Comments (678)

One other framing of the same thing, which might be more intuitive, is the distinction between the idea of effective altruism (using reason to do good better) and the EA movement (the current group of people doing it and their priorities).

You could be in favour of the former, but not the latter - "I believe in the IDEA of effective altruism, but I think the EA movement is barking up the wrong tree" etc.

Another phrasing would be "Big EA" vs "small ea", much as in the UK we differentiate between "Big C" Conservatives, i.e. supporters of the Conservative Party, and "small c" conservatives, i.e. people who have a conservative disposition but don't necessarily support the Conservative Party.

Just throwing these out in case they are more useful!

There may not have been extended discussions, but there was at least one more recent warning. “E.A. leadership” is a nebulous term, but there is a small annual invitation-only gathering of senior figures, and they have conducted detailed conversations about potential public-relations liabilities in a private Slack group.


I don't know about others, but I find it deeply uncomfortable that there's an invite-only conference and a private Slack channel where, amongst other things, reputational issues are discussed. For one, there's something odd about saying, on the one hand, "we should act with honesty and integrity" and, on the other, "we have secret meetings where we discuss whether other people are going to make us look bad".

This strikes me as weirdly one-sided. You're against leaking, but presumably you're in favour of whistleblowing - people being able to raise concerns about wrongdoing. Would you have objected to someone leaking/whistleblowing that e.g. SBF was misusing customer money? If someone had done so months ago, that could have saved billions, but it would have been a breach of (SBF's) trust.

The difference between leaking and whistleblowing is ... I'm actually not sure. One is official, or something?

Hello William,

Thanks for saying that. Yeah, I couldn't really understand where you were coming from (and honestly ended up spending 2+ hours drafting a reply).

On reflection, we should probably have done more WELLBY-related referencing in the post, but we were trying to keep the academic side light. In fact, we probably need to combine our various scratchings on the WELLBY and put them onto a single page on our website - it's been a lower priority than the object-level charity analysis work.

If you're doing the independent impression thing again, then, as a recipient, it would have been really helpful to know that. Then I would have read it more as a friendly "I'm new to this and sceptical about X and Y - what's going on with those?" and less as a "I'm sceptical; you clearly have no idea what you're talking about" (which was more or less how I initially interpreted it... :) )

Both comments by this author seemed in bad faith and I'm not going to engage with them. 

Hello William, thanks for this. I’ve been scratching my head about how best to respond to the concerns you raise. 

First, your TL;DR is that this post doesn’t address your concerns about the WELLBY. That’s understandable, not least because that was never the purpose of this post. Here, we aimed to set out our charity recommendations and give a non-technical overview of our work, not get into methodological and technical issues. If you want to know more about the WELLBY approach, I would send you to this recent post instead, where we talk about the method overall, including concerns about neutrality, linearity, and comparability.

Second, on scientific validity: a measure is scientifically valid if it successfully captures what it sets out to measure. See e.g. Alexandrova and Haybron (2022) on the concept of validity and its application to wellbeing measures. I'm not going to give you chapter and verse on this.

Regarding linearity and comparability, you're right that people *could* be using the scales in different ways. But are they? And would it matter if they did? You always get measurement error, whatever you do. An initial response is to point out that if differences are random, they will wash out as 'noise'. Further, even if something is slightly biased, that wouldn't make it useless - a bent measuring stick can be better than nothing. The scales don't need to be literally exactly linear and comparable to be informative.

I've looked into this issue previously, as have some others, and at HLI we plan to do more on it: again, see this post. I'm not incredibly worried about these things. Some quick evidence: if you look at a map of global life satisfaction, it's pretty clear there's a shared scale in general. It would be an issue if e.g. Iraq gave themselves 9/10.
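The "random differences wash out as noise" point can be illustrated with a toy simulation (all the numbers here - the 0.5-point true gap, the noise level, the group sizes - are made up for illustration, not drawn from any real survey):

```python
import random

# Toy model: two groups whose *true* average wellbeing differs by 0.5 points,
# where each respondent applies their own random offset when mapping their
# wellbeing onto the 0-10 scale (i.e. noisy, non-identical scale use).
random.seed(0)
n = 100_000
true_a = [random.gauss(6.0, 1.0) for _ in range(n)]  # group A, mean 6.0
true_b = [random.gauss(5.5, 1.0) for _ in range(n)]  # group B, mean 5.5

# Reported scores = true wellbeing + individual random scale offset
reported_a = [x + random.gauss(0, 0.8) for x in true_a]
reported_b = [x + random.gauss(0, 0.8) for x in true_b]

# The group-level difference is recovered despite the individual-level noise
diff = sum(reported_a) / n - sum(reported_b) / n
print(round(diff, 2))
```

The recovered difference comes out close to the true 0.5. This only holds if the offsets are random rather than systematically correlated with group membership, which is exactly the distinction the paragraph above is drawing between noise and bias.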

Equally, it's pretty clear that people can and do use words and numbers in a meaningful and comparable way.

In your MacAskill quotation, MacAskill is attacking a straw man. When people say something is, e.g., "the best", we don't mean the best it is logically possible to be; that wouldn't be helpful. We mean something more like "the best that's actually possible", i.e. possible in the real world. That's how we make language meaningful. But yes, in another recent report, we stress that we need more work on understanding the neutral point.

Finally, the thing I think you've really missed in all this is: if we're not going to use subjective wellbeing surveys to find out how well or badly people's lives are going, what are we going to use instead? Indeed, MacAskill himself says, in the same chapter of What We Owe the Future that you quote from:

You might ask, Who am I to judge what lives are above or below neutral? The sentiment here is a good one. We should be extremely cautious in trying to figure out how good or bad others' lives are, as it's so hard to understand the experiences of people with lives very different to one's own. The answer is to rely primarily on self-reports.

Hello Richard. I'm familiar with the back-and-forths between McMahan and others over the nature and plausibility of TRIA, e.g. those in Gamlund and Solberg (2019), which I assume is still the state of the art (if there's something better, I would love to know). However, I didn't want to get into the details here, as it would require introducing lots of conceptual machinery for very little payoff. (I even attended a whole term of seminars by Jeff McMahan on this topic when I was at Oxford.)

But seeing as you've raised it ... 

As Greaves (2019) presses, there is an issue of which person-stages count:

Are the relevant time-relative interests, for instance, only those of present person-stages (“presentism”)? All actual person-stages (“actualism”)? All person-stages that will exist regardless of how one resolves one’s decision (“necessitarianism”)? All person-stages that would exist given some resolution of one’s decision (“possibilism”)? Or something else again?

Whichever choice the TRIA-advocate makes, they will inherit structurally the same issues for those as one finds for the equivalent theories in population ethics (for those, see Greaves (2017)).

The version of TRIA you are referring to is, I think, the actualist person-stage version. If so, then the view is not action-guiding (the issue of normative invariance). If you save the child, it will have those future stages, so it'll be good that it lived; if you don't save the child, it won't, so it won't be bad that it didn't. Okay, so should you save the child? The view doesn't tell you either way!

The actualist version can't be the one at hand, as it doesn't say that it's good (for the child) if you save it (vs the case where you don't). 

I am, I think, implicitly assuming a present-stage-interest version of TRIA, as that's the one that generates the value-of-death-at-different-ages curve that is relevantly different from the deprivationist one.

Serious question: Tanae, what else would you like to see? We've already displayed the results of the different ethical views, even if we don't provide a means of editing them.

Hello Rhyss. We actually hadn't considered incorporating a suicide-reducing effect of talk therapy into our model. I think suicide rates in e.g. Uganda, one place where SM works, are pretty low - I gather they are pretty low in low-income countries in general.

A quick calculation. I came across these Danish numbers, which found that "After 10 years, the suicide rate for those who had therapy was 229 per 100,000 compared to 314 per 100,000 in the group that did not get the treatment." Very naively, then, that's about one life saved via averted suicide per 1,000 people treated, or roughly $150k to save a life via therapy (vs $3-5k for AMF), so it probably wouldn't make much difference. But that is just looking at suicide. We could also look at the all-cause mortality effects of treating depression (mental and physical health are often comorbid, etc.).

And read this as you planning to continue evaluating everything in WELLBYs, which in turn I thought meant ruling out evaluating research - because it isn't clear to me how you evaluate something like psychedelics research using WELLBYs.


If we said we plan to evaluate projects in terms of their ability to save lives, would that rule out us evaluating something like research? I don't see how it would. You'd simply need to think about the effect that doing some research would have on the number of lives that are saved. 
