AppliedDivinityStudies

Comments
Yeah, this is all right, but I see EA as having been, since its founding, much closer to Protestant ideals than Catholic ones, at least on this particular axis.

If you had told me in 2018 that EA was about "supporting what Dustin Moskovitz chooses to do because he is the best person who does the most good", or "supporting what Nick Bostrom says is right because he has given it the most thought", I would have said "okay, I can see how in this world, SBF's failings would be really disturbing to the central worldview". 

But instead it feels like this kind of attitude has never been central to EA, and that in fact EA embraces something like the direct opposite of this attitude (reasoning from first principles, examining the empirical evidence, making decisions transparently). In this way, I see EA as already having been post-reformation (so to speak).

I'm going off memory and could be wrong, but in my recollection the thinking here was not very thorough. I recall some throwaway lines like "of course this isn't liquid yet", but very little analysis. In hindsight, it feels like if you think you have somewhere between $0 and $1t committed, you should put a good amount of thought into figuring out the distribution.

One instance of this mattering a lot is the bar for spending in the current year. If you have $1t the bar is much lower and you should fund way more things right now. So information about the movement's future finances turns out to have a good deal of moral value.
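
To make that concrete, here's a toy sketch of my own (every number, scenario, and grant opportunity below is made up for illustration, and the 30-year spend-down horizon is an assumption) of how assuming the full $1t versus using a distribution over committed capital moves the marginal funding bar:

```python
# Toy model: how the distribution over committed capital moves the
# current-year funding bar. Every number here is made up for illustration.

# Scenarios for total committed capital (in $B) and their probabilities.
scenarios = [(1000, 0.2), (100, 0.5), (10, 0.3)]
expected_capital = sum(capital * p for capital, p in scenarios)

horizon_years = 30                           # assumed spend-down horizon
naive_budget = 1000 / horizon_years          # annual budget if you assume $1t for sure
expected_budget = expected_capital / horizon_years

# Hypothetical grant opportunities: (cost in $B, value per dollar), best first.
opportunities = sorted(
    [(5, 10.0), (10, 6.0), (20, 3.0), (30, 1.5), (50, 0.8)],
    key=lambda opp: opp[1],
    reverse=True,
)

def funding_bar(annual_budget):
    """Fund the best opportunities first; return the cost-effectiveness
    of the marginal grant that still fits in this year's budget."""
    spent, bar = 0.0, None
    for cost, value in opportunities:
        if spent + cost > annual_budget:
            break
        spent += cost
        bar = value
    return bar

print(f"Assume $1t for sure: ${naive_budget:.1f}B/yr, bar = {funding_bar(naive_budget)} per $")
print(f"Use the distribution: ${expected_budget:.1f}B/yr, bar = {funding_bar(expected_budget)} per $")
```

The point is just that the bar (and therefore what gets funded this year) is quite sensitive to the distribution over committed funds, which is why the lack of analysis seems costly in hindsight.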

I might have missed this, though, and would be interested in reading any posts from before 11/22 that you can dig up.

Oh, I think you would be super worried about it! But not "beat ourselves up" in the sense of feeling like "I really should have known". That's in contrast to the part I think we really should have known: that the odds of success were not 100%, and that trying to figure out a reasonable estimate of those odds and the implications of failure would have been a valuable exercise that did not require a ton of foresight.

Bit of a nit, but "we created" is stronger phrasing than I would use. Maybe I would agree with something like "how can we be confident that the next billionaire we embrace isn't committing fraud". Certainly I expect there will be more vigilance the next time around, and a lot of skepticism.

I am around!
https://twitter.com/alexeyguzey/status/1668834171945635840

Does the EA Forum have a policy on sharing links to your own paywalled writing? E.g. I've shared link posts to my blog, and others have shared link posts to their Substacks, but I haven't seen anyone share a link post to their own paid Substack before.

I think the main arguments against suicide are that it causes your loved ones a lot of harm, and (for some people) there is a lot of uncertainty in the future.  Bracketing really horrible torture scenarios, your life is an option with limited downside risk. So if you suspect your life (really the remaining years of your life) is net-negative, rather than commit suicide you should increase variance because you can only stand to benefit.
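
A rough way to see the variance point (my own toy model, with made-up numbers): if you retain an exit option at some floor value, your realized payoff is effectively max(outcome, floor), which is convex, so a wider spread of possible outcomes can only raise its expectation.

```python
import random

# Toy illustration of "limited downside means you should increase variance":
# with an exit option at a floor value, the realized payoff is
# max(outcome, floor). All numbers are made up.

random.seed(0)
FLOOR = 0.0      # value you can always fall back on by exercising the option
MEAN = -1.0      # suspected (negative) expected value of the status quo

def expected_payoff(std_dev, samples=200_000):
    """Monte Carlo estimate of E[max(outcome, FLOOR)] for a normal outcome."""
    total = 0.0
    for _ in range(samples):
        total += max(random.gauss(MEAN, std_dev), FLOOR)
    return total / samples

for std_dev in (0.5, 1.0, 2.0, 4.0):
    print(f"spread {std_dev}: expected payoff ≈ {expected_payoff(std_dev):.3f}")
```

The estimated payoff rises as the spread widens, even though the mean outcome stays the same, which is the sense in which "you can only stand to benefit" from more variance.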

The idea that "the future might not be good" comes up on the forum every so often, but it doesn't really undermine the core longtermist claims. The counter-argument is roughly:
- You still want to engage in trajectory changes (e.g. ensuring that we don't fall to the control of a stable totalitarian state)
- Since the error bars are ginormous and we're pretty uncertain about the value of the future, you still want to avoid extinction so that we can figure this out, rather than getting locked in by a vague sense we have today

Yeah, it's difficult to intuit, but I think that's pretty clearly because we're bad at imagining the aggregate harm of billions (or trillions) of mosquito bites. One way to reason around this is to think:
- I would rather get punched once in the arm than once in the ribs, but I would rather get punched once in the ribs than 10x in the arm
- I'm fine with disaggregating, and saying that I would prefer a world where 1 person gets punched in the gut to a world where 10 people get punched in the arm
- I'm also fine with multiplying those numbers by 10 and saying that I would prefer 10 people punched in the gut to 100 people punched in the arm
- It's harder to intuit this for really, really big numbers, but I am happy to attribute that to a failure of my imagination, rather than some bizarre effect where total utilitarianism only holds for small populations
- I'm also fine with intensifying the first harm by a little bit so long as the populations are offset (e.g. I would prefer 1 person punched in the face to 1000 people punched in the arm)
- Again, it's hard to continue to intuit this for really extreme harms and really large populations, but I am more willing to attribute that to cognitive failures and biases than to a bizarre ethical rule

Etc etc. 
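
To spell out the step being iterated (my own formalization, assuming harms simply add across people, with $n$, $h$, $a$, $b$ introduced just for this sketch): write total harm as $H(n, h) = n \cdot h$ for $n$ people each suffering harm $h$. If a step intensifies the per-person harm by a factor $a$ but shrinks the affected population by a factor $b > a$, then

$$
H\!\left(\frac{n}{b},\, a h\right) \;=\; \frac{a}{b}\, n h \;<\; n h \;=\; H(n, h),
$$

so every step in the chain above strictly reduces total harm, and transitivity carries the comparison out to the extreme intensities and populations where raw intuition starts to give out.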

Thanks for the link! I knew I had heard this term somewhere a while back, and may have been thinking about it subconsciously when I wrote this post.

Re:
> For instance, many people wouldn't want to enter solipsistic experience machines (whether they're built around eternal contentment or a more adventurous ideal life) if that means giving up on having authentic relationships with loved ones.


I just don't trust this intuition very much. I think there is a lot of anxiety around experience machines due to:
- Fear of being locked in (choosing to be in the machine permanently)
- Fear that you will no longer be able to tell what's real

And to be clear, I share the intuition that experience machines seem bad, and yet I'm often totally content to play video games all day long because it doesn't violate those two conditions.

So what I'm roughly arguing is: we have some good reasons to be wary of experience machines, but I don't think that intuition does much to generate a belief that the ethical value of a life necessarily requires some kind of nebulous thing beyond experienced utility.
 

> people alive today have negative terminal value

This seems entirely plausible to me. A couple jokes which may help generate an intuition here (1, 2)

You could argue that suicide rates would be much higher if this were true, but there are lots of reasons people might not commit suicide despite experiencing net-negative utility over the course of their lives.

At the very least, this doesn't feel as obviously objectionable to me as the other proposed solutions to the "mere addition paradox".

 
