MichaelPlant

9088 karma · Joined

Bio

I'm the Director of the Happier Lives Institute and a Postdoctoral Research Fellow at Oxford's Wellbeing Research Centre. I'm a philosopher by background and did my DPhil at Oxford, primarily under the supervision of Peter Singer and Hilary Greaves. I've previously worked for an MP and failed to start a start-up.

Comments (767)

Thanks for this. I think this is very valuable and really appreciate this being set out. I expect to come back to it a few times. One query and one request for further work - from someone, not necessarily you, as this is already a sterling effort!

  1. I've heard Thorstad's TOP talk a couple of times, but it's now a bit foggy and I can't remember where his argument ends and yours starts. Is it that Thorstad argues (some version of) longtermism relies on the TOP thesis, but doesn't investigate whether TOP is true, whereas you set about investigating whether it is true?

  2. The request for further work: 18 is a lot of premises for a philosophical argument, and your analysis is very hedged. I recognise you don't want to claim too much but, as a reader who has thought about this far less than you, I would really appreciate you telling me what you think. Specifically, it would be useful to know which of the premises are the most crucial, in the sense of being least plausible. Presumably, some of the 18 premises we don't need to worry about, and we can concentrate our attention on a subset. Or, if you think all the premises are similarly plausible, that would be useful to know too!

Hello Bob and team. Looking forward to reading this. To check, are you planning to say anything explicitly about your approach to moral uncertainty? I can't see anything directly mentioned in 5., which is where I guessed it would go. 

On that note, Bob, you might recall that, a while back, I mentioned to you some work I'm doing with a couple of other philosophers on developing an approach to moral uncertainty along these lines, one that will sometimes justify the practice of worldview diversification. That draft is nearly complete, and your post series inspires me to try and get it over the line!

This report seems commendably thorough and thoughtful. Could you possibly spell out its implications for effective altruists, though? I take it the conclusion is that humanity was most violent in the subsistence farming period, rather than before or after, but I'm not sure what to make of that. Presumably, it shows that how violent people are changes quite radically across contexts, so should I be reassured if, as seems likely, modern-type societies continue? A return to hunter-gathering or subsistence farming does not seem on the cards.

Sorry if I've missed something. But I reckoned that, if it wasn't obvious to me, some others would have missed it too.

Hello Jack, I'm honoured you've written a review of my review! Thanks also for giving me sight of this before you posted. I don't think I can give a quick satisfactory reply to this, and I don't plan to get into a long back and forth. So, I'll make a few points to provide some more context on what I wrote. [I wrote the remarks below based on the original draft I was sent. I haven't carefully reread the post above to check for differences, so there may be a mismatch if the post has been updated]

First, the piece you're referring to is a book review in an academic philosophy journal. I'm writing primarily for other philosophers who I can expect to have lots of background knowledge (which means I don't need to provide it myself).

Second, book reviews are, by design, very short. You're even discouraged from referencing things outside the text you're reviewing. The word limit was 1,500 words - I think my review may even be shorter than your review of my review! - so the aim is just to give a brief overview and make a few comments.

Third, the thrust of my article is that MacAskill makes a disquietingly polemical, one-sided case for longtermism. My objective was to point this out and deliberately give the other side so that, once readers have read both, they are, hopefully, left with a balanced view. I didn't seek to, and couldn't possibly hope to, give a balanced argument that refutes longtermism in a few pages. I merely explain why, in my opinion, the case for it in the book is unconvincing. Hence, I'd have lots of sympathy with your comments if I'd written a full-length article, or a whole book, challenging longtermism.

Fourth, I'm not sure why you think I've misrepresented MacAskill (do you mean 'misunderstood'?). In the part you quote, I am (I think?) making my own assessment, not stating MacAskill's view at all. What's more, I don't believe MacAskill and I disagree about the importance of the intuition of neutrality for longtermism. I only observe that accepting that intuition would weaken the case - I do not claim there is no case for longtermism if you accept it. Specifically, you quote MacAskill saying:

[if you endorse the intuition of neutrality] you wouldn’t regard the absence of future generations in itself as a moral loss.

But the cause du jour of longtermism is preventing existential risks so that many future happy generations exist. If one accepts the intuition of neutrality, that would reduce or remove the good of doing that. Hence, it does present a severe challenge to longtermism in practice - especially if you want to claim, as MacAskill does, that longtermism changes the priorities.

Finally, on whether 'many' philosophers are sympathetic to person-affecting views. In my experience of floating around seminar rooms, it seems to be the view of a large minority of discussants (indeed, it seems far more popular than totalism). Further, it's taken as a default, or starting position, which is why other philosophers have strenuously argued against it; there is little need to argue against views that no one holds! I don't think we should assess philosophical truth 'by the numbers', i.e. by polling people, rather than by arguments, particularly when those you poll aren't familiar with the arguments. (If we took such an approach, utilitarianism would be conclusively 'proved' false.) That said, off the top of my head, philosophers who have written sympathetically about person-affecting views include Bader, Narveson (two classic articles here and here), Roberts (especially here, but she's written on it a few times), Frick (here and in his thesis), Heyd, Boonin, and Temkin (here and probably elsewhere). There are not 'many' philosophers in the world, and population ethics is a small field, so this is a non-trivial number of authors! For an overview of the non-identity problem in particular, see the SEP.

Yup, I'd be inclined to agree it's easier to ground the idea that life is getting better for humans on objective measures. The author's comparison is made in terms of happiness, though:

This work draws heavily on the Moral Weight Project from Rethink Priorities and relies on the same assumptions: utilitarianism, hedonism, valence symmetry, unitarianism, use of proxies for hedonic potential, and more

I'm actually not sure how I'd think about the animal side of things on the capabilities approach. Presumably, factory farming looks pretty bad on that: there are increasingly many animals with low/negative capability lives, so it's unclear how this works out at the global level.

This is a minor comment, but you say:

There’s compelling evidence that life has gotten better for humans recently

I don't think that is compelling evidence. Neither Pinker nor Karnofsky look at averages of self-reported happiness or life satisfaction, which would be the most relevant and comparable evidence, given your assumptions. According to the so-called Easterlin Paradox, average subjective wellbeing has not been going up over the past few decades and won't go up with further economic growth. There have been years of debate over this (I confess I got sucked in, once) but, either way, there is not a consensus among happiness researchers that there is compelling evidence life has gotten better (at least as far as happiness is concerned).

While I agree that net global welfare may be negative and declining, in light of the reasoning and evidence presented here, I think you could and should have claimed something like this: "net global welfare may be negative and declining, but it may also be positive and increasing, and really we have no idea which it is - any assessment of this type is enormously speculative and uncertain".

As I read the post, the two expressions that popped into my head were "if it's worth doing, it's worth doing with made-up numbers" and "if you saw how the sausage is made ...".

The problem here is that all of the numbers for 'animal welfare capacity' and 'welfare percentages' are essentially - and unfortunately - made up. You cite Rethink Priorities for the former and Charity Entrepreneurship for the latter, and express some scepticism, but then more or less take them at face value. You don't explain how those people came up with the numbers or whether they should be trusted. I don't think I am disparaging the good folk at either organisation - and I am certainly not trying to! - because, if you asked them about this, I think they would freely say "look, we don't really know how to do this. We have intuitions about this, of course, but we're not sure if there's any good evidence-based way to come up with these numbers";* indeed, that is, in effect, the conclusion Rethink Priorities stated in the write-up of their recent workshop (see my comment on that too). Hence, such numbers should be taken not with a mere pinch of salt, but with a bucketload.

You don't account for uncertainty here (you used point estimates), and I appreciate that doing so is extra hassle, but I think the uncertainty here is the story. If you were to use upper and lower subjective bounds for, e.g., "how unhappy are chickens compared to how happy humans are?", they would be very large. They must be very large because, as noted, we don't even know what factual, objective evidence we would use to narrow them down, so we have nothing to constrain the bounds of what's plausible. But given how large they would be, we'd end up with the conclusion that we really don't know whether global welfare is negative or positive.

 

* People are often tempted to say that we could look at objective measures, like neuron counts, for interspecies comparison. But this merely kicks the can down the road. How do we know what the relationship is between neuron counts and levels of pleasure and pain? We don't. We have intuitions, yes, but what evidence could we point to to settle the question? I do not know. 
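To make that concrete, here is a minimal sketch of the kind of calculation I have in mind. Every number in it is invented purely for illustration - the population figures are rough orders of magnitude, and the welfare ranges and the chicken-to-human capacity weight are placeholder bounds, not anyone's actual estimates:

```python
import random

random.seed(0)

N_HUMANS = 8e9      # rough order of magnitude
N_CHICKENS = 25e9   # rough guess at farmed chickens alive at any one time

def sample_total_welfare():
    # Average human welfare on an arbitrary -1..1 scale (assumed mildly positive).
    human_avg = random.uniform(0.1, 0.4)
    # Average chicken welfare: assumed negative, but how negative is guesswork.
    chicken_avg = random.uniform(-0.5, -0.01)
    # Chicken-to-human welfare capacity weight: the 'made-up number' at issue.
    # Nothing constrains this, so the subjective bounds are very wide.
    weight = random.uniform(0.001, 0.5)
    return N_HUMANS * human_avg + N_CHICKENS * chicken_avg * weight

samples = [sample_total_welfare() for _ in range(10_000)]
share_negative = sum(s < 0 for s in samples) / len(samples)
print(f"Share of samples where net global welfare is negative: {share_negative:.0%}")
```

With bounds that wide, a substantial fraction of the samples comes out negative and the rest positive - which is just the "we really don't know" conclusion in numerical form. Point estimates hide exactly this.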

Thanks for this and great diagrams! To think about the relationship between EA and AI safety, it might help to think about what EA is for in general. I see a/the purpose of EA as helping people figure out how they can do the most good - to learn about the different paths, the options, and the landscape. In that sense, EA is a bit like a university, or a market, or maybe even just a signpost: once you've learnt what you needed, or found what you want and where to go, you don't necessarily stick around: maybe you need to 'go out' into the world to do what calls you.

This explains your Venn diagram: GHD and animal welfare are causes that exist prior to, and independently of, EA. They, rather than EA, are where the action is if you prioritise those things. AI safety, by contrast, grew up inside EA.

I imagine AI safety will naturally form its own ecosystem independent of EA: much as, if you care about global development, you don't need to participate in the EA community, a time will come when, for AI safety, you won't need to participate in EA either.

This doesn't mean that EA becomes irrelevant, much as a university doesn't stop mattering when students graduate, and a market doesn't cease to be useful when some people find what they want. There will be further cohorts who want to learn - and some people have to stick around to think about and highlight their options.

I suppose you could think of it as a matter of degree, right? Submitting feedback, doing interviews, etc. are a good start, but they give people less of a say than either 1. being part of the conversation or 2. having decision-making power, e.g. through a vote. People like to feel their concerns are heard - not just in EA, but in general - and when, e.g., a company says "please send in this feedback form", I'm not sure many people feel as heard as when someone (important) from that company listens to them live and publicly responds.

Thanks for this, which I read with interest! Can I see if I understood this correctly?

  1. You were interested in finding a way to assess the severity of pains in farmed animals so that you can weigh severity against duration and determine the total badness. In jargon, you're after a cardinal measure of pain intensity.
  2. And your conclusion was a negative one, specifically that there was no clear way to assess the severity of pain. As you note, for humans, we have self-reports, but for non-human animals, we don't, so we have to look for something else, such as how the animals behave. However, there is no obvious non-self-report method that would give us a quantitative measure of pain.
  3. (from 1 and 2) We are forced to rely on our priors, that is, our intuitions, to make comparisons.

For what it's worth, I agree with 1-3, but it does leave me with a feeling of hopelessness about animal welfare comparisons. Certainly, we have intuitions about how to do them, but we do not, as far as I can see, have reason to think our intuitions are informed or reliable - what evidence would tell us we were wrong? So, I wonder if it would be true to say that making evidence-based (cardinal) animal welfare comparisons is not merely difficult (which implies they are possible) but actually not possible. I'm not sure what follows from this.
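Here's a toy sketch of why the cardinality in point 1 matters; the pain-category labels and every number are invented for illustration. Two cardinal scales that agree with the same ordinal ranking of severities can disagree about which problem is worse once severity is traded off against duration, so an ordinal ranking alone can't settle the comparisons we care about:

```python
# Two cardinal scales, both consistent with the same ordinal ranking:
# annoying < hurtful < disabling < excruciating.
scale_a = {"annoying": 1, "hurtful": 2, "disabling": 4, "excruciating": 8}
scale_b = {"annoying": 1, "hurtful": 10, "disabling": 1_000, "excruciating": 100_000}

# Hypothetical episodes: (pain category, duration in hours).
episodes = {
    "chronic mild pain": ("hurtful", 500),
    "acute severe pain": ("excruciating", 2),
}

for name, scale in [("scale A", scale_a), ("scale B", scale_b)]:
    # Total badness = intensity x duration, which requires cardinal intensities.
    badness = {ep: scale[cat] * hours for ep, (cat, hours) in episodes.items()}
    worst = max(badness, key=badness.get)
    print(f"{name}: worst is '{worst}' ({badness})")

# Scale A says the chronic mild pain is worse (1000 vs 16); scale B says the
# acute severe pain is worse (200000 vs 5000). Same ordinal ranking, opposite verdicts.
```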
