

Great question, thanks for this! Part of the motivation for global desire theories is something like Parfit's addiction case, which I mention in section 3 of the paper and will now quote at length:

Parfit illustrates this with his famous case of Addiction:

I shall inject you with an addictive drug. From now on, you will wake each morning with an extremely strong desire to have another injection of this drug. Having this desire will be in itself neither pleasant nor painful, but if the desire is not fulfilled within an hour it will then become very painful. This is no cause for concern, since I shall give you ample supplies of this drug. Every morning, you will be able at once to fulfil this desire. The injection, and its after‐effects, would also be neither pleasant nor painful. You will spend the rest of your days as you do now.

Parfit points out that on a summative desire theory—on which all your desires count, and how well your life goes overall is the sum, across your desires, of each desire's fulfilment weighted by its intensity—your life goes better in Addiction. But it is hard to believe one's life would go better in the Addiction case.
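To make the summative theory's verdict explicit (this formalisation is mine, not Parfit's), write overall welfare as:

```latex
W_{\text{summative}} \;=\; \sum_{i} s_i \cdot f_i
```

where $s_i$ is the intensity of desire $i$ and $f_i \in [0,1]$ is the degree to which it is fulfilled. In Addiction, each morning adds a new desire with very high $s_i$ and $f_i = 1$, so every day contributes a large positive term and $W$ rises, even though the injections are experientially neutral.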

Parfit draws a distinction between local and global desires, where a desire is "global if it is about some part of one's life considered as a whole, or is about one's whole life". A global desire theory (GDT) counts only global desires. On this theory, we can say being addicted is worse for us: when we think about how our lives go overall, we do not want to become addicted.

The appeal of a global theory is that, in some sense, you get to make a cognitive choice about which desires count. If you weren't able to choose which desires count, then Addiction would be better for you (once you were actually addicted, anyway).  

You might think that getting addicted really is good for you, in which case you've presumably abandoned the global account in favour of the summative one. Which is fine, but it doesn't take away from the fact that automaximisation is still a problem for the global view.

I'm really pleased to see GiveWell is doing this, and particularly that you singled out HLI's critique of GiveWell's deworming CEA as an example of what you'd like to see.

I am, however, disappointed that the scope of the competition is so narrow, and a bit confused by its name. The contest page says you do want people to re-analyse your existing interventions, but that you don't want them to suggest different interventions or make 'purely subjective arguments'. I'm not sure what the latter means, but I guess it rules out any fundamental discussion of ethical worldviews or of how best to measure 'good'. On this basis, it seems you're asking people not to try to change your mind, but rather to check your working.

This strikes me as a lost opportunity. After all, rethinking what matters and what the top interventions are could be where we find the biggest gains.

At the risk of being a noisy broken record: I, and the team at HLI, have long advocated measuring impact using self-reports and argued that this could really shake up the priorities (spot the differences between these 2016, 2018 and 2022 posts). Our meta-analyses recently found that treating depression via therapy is about 9x more cost-effective than cash transfers (2021 analysis; 2022 update). We'd previously explored how to compare life-improving to life-saving interventions using the same method, and pointed out how various philosophical considerations might really change the picture (2020).

I'm still not really sure what GiveWell thinks of any of this. There's been no public response, except that, 9 months ago, GiveWell said they were working on their own reports on group therapy and subjective wellbeing and expected to publish those in 3-6 months. It looks like all this work would fall outside this competition, but, if GiveWell were open to changing their mind, this would be one good place to look.

I quite like the idea of an EAG: Open, but presumably as a complement to, rather than a replacement for, the current networking-focused EAGlobal.

One thing that seems missing from the EA ecosystem is a single place with talks that convey new information to lots of interested, relevant people in one go, and where those ideas can be discussed.

This used to happen at EAGlobal, but it doesn't anymore because (for understandable reasons) the event is very networking-focused, so talks have basically been canned. I find it odd there's now so little public discussion at the EA community's flagship event. (The only major communication happens at the opening and closing ceremonies, and is (always?) done by Will. Will is great, but it would be good to have a diversity of messages and messengers.)

There is more content at EAGxs, but only a fraction of people see those. I've realised I'm basically touring the world giving more-or-less the same talk, so most people only hear it once. In some ways this is quite fun, but it's also pretty inefficient. I'd prefer to give that talk once and then be able to move on to other topics.

The EA forum currently serves as the central place for discussion, but it's not that widely used, and posts tend to disappear from view pretty fast. It certainly doesn't do what TED-style big talks do for communicating important ideas.

This impression strikes me as basically spot on. It would have been more accurate for me to say the asymmetry is "widely held to be an intuitive desideratum for theories of population ethics". It does have its defenders, though, e.g. Frick, Roberts, and Bader. I agree that there does not seem to be any theory that rationalises this intuition without having other problems (but this is merely a specific instance of the general point that no theory of population ethics seems to retain all our intuitions - hence Arrhenius' famous impossibility result).

I'm not aware of any surveys of philosophers on their views on population ethics. AFAICT, the number of professional philosophers who are experts in population ethics - depending on how one wants to define those terms - could probably fit into one lecture room.

The intuition seems to be almost universally held. I agree that many philosophers (and others) think this intuition must, on reflection, be mistaken. But many philosophers, even after reflection, still think the procreative asymmetry is correct. I'm not sure how interesting it would be to argue about the appropriate meaning of the phrase "very widely held". Based on my (perhaps atypical) experience, I'd guess that if you polled those who had taken a class on population ethics, only about 10% would agree with the statement "the procreative asymmetry is a niche position".

The Procreative Asymmetry is very widely held, and much discussed, by philosophers who work on population ethics (and it seems very common in the general population too). If anything, it's the default view rather than a niche position (except among EA philosophers). A quick search for it on philpapers.org turns up quite a lot.

You might think the Asymmetry is deeply mistaken, but describing it as a 'niche position' is much like calling non-consequentialism a 'niche position'. 

You object to the MacAskill quote:

If we think it’s bad to bring into existence a life of suffering, why should we not think that it’s good to bring into existence a flourishing life? I think any argument for the first claim would also be a good argument for the second.

And then say:

Indeed, many arguments support the former while positively denying the latter. One such argument is that the presence of suffering is bad and morally worth preventing, while the absence of pleasure is not bad and not a problem.

But I don't see how this challenges MacAskill's point, so much as restates the claim he was arguing against. I think he could simply reply to what you said by asking, "okay, so why do we have reason to prevent what is bad but no reason to bring about what is good?" 

Not sure if this is essential to the parable, but wouldn't it be useful to distinguish between the following cases?

(1) the boy says that each evening there's a 5% chance the wolf will come, but isn't claiming the wolf is there right now;

(2) the boy shouts that there is a wolf in the village whenever he thinks there's at least a 5% chance this is true.

If the boy is doing (1) and the villagers panic now, they've just misunderstood what he's saying. If the boy is doing (2), you'd understand why the villagers would start ignoring him (just as everyone ignores car alarms because they're so oversensitive).
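To see why strategy (2) erodes trust, here's a toy simulation (my illustration, not from the original post; all numbers are assumptions): suppose the boy's probability estimates are accurate and he shouts on any day he judges the wolf risk to be at least 5%.

```python
import random

random.seed(0)

DAYS = 100_000
THRESHOLD = 0.05  # strategy (2): shout whenever estimated P(wolf) >= 5%

alarms = 0
false_alarms = 0

for _ in range(DAYS):
    # Assumed setup: each day's true wolf probability varies between
    # 0% and 10%, and the boy's estimate is perfectly calibrated.
    p = random.random() * 0.10
    wolf = random.random() < p
    if p >= THRESHOLD:
        alarms += 1
        if not wolf:
            false_alarms += 1

print(f"alarm days: {alarms} of {DAYS}")
print(f"false-alarm rate: {false_alarms / alarms:.0%}")
```

Under these assumptions the boy shouts on roughly half of all days, and over 90% of those alarms are false - exactly the regime in which the villagers, like people hearing a car alarm, rationally tune him out.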

I'm not sure either is neatly analogous to the X-risk case, in which different people give different estimates, ranging from negligible risk to doom being virtually certain. I guess that's a bit like there being many different boys in the village, each of whom assigns a different percentage chance to the wolf appearing at some point (but none of whom are claiming it's literally here now).

I found this discussion, and these cases, objectionably uncharitable. It doesn't offer the strongest version of person-affecting views, explain why someone might believe them, and then present the objections and how an advocate of the view might reply. It simply starts by assuming a position is true and then proposes some quick ways to persuade others to agree with it.

An equivalent framing in a different moral debate would be saying something like "people don't realise utilitarianism is stupid. If they don't realise, just point out that utilitarians would kill someone and distribute their organs if they thought it would save more lives". I don't think the forum is the place for such one-sidedness.

Yes for both a) and b). But the strategy is secret...

(Basically, the idea is to show that using measures of how people feel can and will give different priorities, and that we should therefore pay more attention to them.)
