The emerging school of patient longtermism

You might also like to listen to the podcast episode and have a look at the comments on the original post, which cover quite a few objections to Will's argument.

For what it's worth, I don't think Will ever suggests the hinge was in the past (I might be wrong though). His idea that hinginess generally increases over time probably implies that he doesn't think the hinge was in the past. He does mention, though, that thinking about the past is useful for getting a sense of the overall distribution of hinginess over time, which then allows us to compare the present to the future.


Also, I just want to add that Will isn't implying we shouldn't do anything about x-risks, just that we may want to diversify by putting more resources into "buck-passing" strategies that allow more influential decision-makers in the future to be as effective as possible.


I think you probably need to read the argument again (but so do I, and apologies if I get anything wrong here). Will has two main arguments against thinking that we are currently at the hinge of history (HoH):

1. It would be an extraordinary coincidence if right now were the HoH. In other words, our prior probability of that possibility should be low, so we need pretty extraordinary evidence to believe that we are at the HoH (and we don't have such extraordinary evidence).

2. Hinginess has generally increased over time as we have become more knowledgeable and powerful and have developed better values. We should probably expect this trend to continue, so it seems most likely that the HoH is in the future.

I understand that (critical) feedback on his ideas mainly challenged point 1: many in the EA movement don't think we need to set such a low prior for the HoH, and think the evidence that we are at the HoH is strong enough.

Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty

Oh that's great. I very much hope that goes well! I hope I didn't give the wrong impression with my comments; I would love to see SWB taken more seriously in the development economics literature.

What is the increase in expected value of effective altruist Wayne Hsiung being mayor of Berkeley instead of its current incumbent?

Well I guess someone who hasn't heard of EA couldn't say that.

So I don't think that statement is quite as useless as you do. It shows that he:

A) Knows about EA

B) Has at least implied that he wants to use EA thinking in the role

EAs generally tend to think that the cause areas they focus on, and the prioritisation they do within those cause areas, allow them to be many orders of magnitude more effective than a typical non-EA. So I might expect him, in expectation, to be more effective than a typical mayor.

I do take your point that that alone isn't much, and we will want to examine his track record and specific proposals in more detail.

The problem with person-affecting views

This might depend on how you define welfare. If you define it as something like "the intrinsic goodness of the experience of a sentient being", then I would think C being better than B can't really be disputed.

For example, on a preference utilitarian view of the world, and under the above definition of welfare, the fact that a person has higher welfare must mean that they have had some preferences satisfied. Otherwise, in what sense can we say that they had higher welfare?

Under this interpretation of welfare, I don't think it makes any sense to argue that C might not be better than B. What do you think?


Thanks, I'll check out your writings on VCLU!


Thanks. There's a lot to digest there. It's an interesting idea that population ethics is simply separate from the rest of ethics. That's something I want to think about a bit more.


Thanks, that's interesting. I have more credence in hedonistic utilitarianism than in preference utilitarianism, for similar reasons to the ones you raise.


Thanks for all of this. I think IIA is just something that seems intuitive. For example, it would seem silly to me for someone to choose jam over peanut butter but then, on finding out that honey mustard was also an option, decide that they should have chosen peanut butter. My support of IIA doesn't really go beyond this intuitive feeling, and perhaps I should think about it more.

Thanks for the readings on lexicality and rank-discounted utilitarianism. I'll check them out.
