Isaac_Dunn

Comments

In defense of a "statistical" life

Thanks for writing this! I especially enjoyed the part where you described how donating has given you a sense of purpose and self-worth when things have been difficult for you - I can relate.

I think I have to disagree with your last point, though, because it seems to me that whenever we decide to spend resources, we are making a trade-off: a donation to an effective global health charity could instead have gone to a different cause.

I don't think that diminishes how worthwhile any donation is, but I think that the spirit of effective altruism is to keep asking ourselves whether there's something else we could do that would be even better. What do you think?

Complex cluelessness as credal fragility

I agree that there may be cases of "complex" (i.e. non-symmetric) cluelessness that are nevertheless resiliently uncertain, as you point out.

My interpretation of @Gregory_Lewis' view was that rather than looking mainly at whether the cluelessness is "simple" or "complex", we should look for the important cases of cluelessness where we can make some progress. These will all be "complex", but not all "complex" cases are tractable.

I really like this framing, because it feels more useful for making decisions. The thing that lets us safely ignore a case of "simple" cluelessness isn't the symmetry in itself, but the intractability of making progress. I think I agree with the conclusion that we ought to be prioritising the difficult task of better understanding the long-run consequences of our actions, in the ways that are tractable.

A framework for discussing EA with people outside the community

I enjoyed this article and found it useful, thanks for writing it! I think it could be interesting to think about how these ideas might apply to situations like running a local EA group, where you're not just discussing EA when it comes up organically.

A case against strong longtermism

I second what Alex has said about this discussion being very valuable pushback against ideas that have got some traction - at the moment I think that strong longtermism seems right, but it's important to know if I'm mistaken! So thank you for writing the post & taking some time to engage in the comments.

On this specific question, either I have misunderstood your argument or I think it might be mistaken. I take your argument to be: "even if we assume that the life of the universe is finite, there are still infinitely many possible futures - for example, the infinitely many different possible universes in which someone shouts a different natural number".

But I think this is mistaken, because the universe will end before you finish shouting most natural numbers. In fact, there are only finitely many natural numbers you could finish shouting before the universe ends, so this argument doesn't show that there are infinitely many possible universes. (Of course, there might be other arguments for infinitely many possible futures.)
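To sketch the arithmetic behind this (a minimal back-of-the-envelope version, where $T$ is the universe's remaining lifetime and $\varepsilon > 0$ is some minimum time needed to shout one digit, both quantities I'm assuming purely for illustration):

\[
n \text{ is shouted in full} \;\Rightarrow\; (\text{digits of } n)\cdot \varepsilon \le T \;\Rightarrow\; n < 10^{\,T/\varepsilon},
\]

so the set of natural numbers that could ever be shouted in full is finite.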

More generally, I think I agree with Owen's point that if we make the (strong) assumptions that the universe is finite in duration, that it has finitely many possible states, and that time can be quantised, then it follows that there are only finitely many possible universes, so we can in principle compute expected value.
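To make the counting explicit (a minimal sketch under exactly those assumptions, writing $S$ for the finite set of possible states and $T$ for the finite number of quantised time steps):

\[
\#\{\text{possible histories}\} \;\le\; |S|^{T} < \infty, \qquad \mathbb{E}[V] \;=\; \sum_{h} p(h)\,V(h),
\]

which is a finite sum over possible histories $h$, so expected value is at least well-defined in principle.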

So I'd be especially interested in any thoughts you have on whether expected value is in practice an inappropriate tool to use (e.g. with subjective probabilities), even granting that it is computable in principle. For example, I'd love to hear when (if at all) you think we should use expected value reasoning, and how we should make decisions when we shouldn't.