Sean Sweeney

Comments

Thanks for the post. Just today I was thinking through some aspects of expected value theory and fanaticism (i.e., being fanatical about applying expected value theory) that I think might apply to your post. I had read through some of Hayden Wilkinson’s 2021 Global Priorities Institute report, “In defense of fanaticism,” and he brought up a hypothetical case of donating $2000 (or whatever it takes to statistically save one life) to the Against Malaria Foundation (AMF) versus giving the money instead to fund a very speculative research project that offers a tiny, non-zero chance of an amazingly valuable future. I changed the situation for myself to consider why you would give $2000 to AMF instead of donating it to try to reduce existential risk by some tiny amount, when the latter could have significantly higher expected value. I’ve come up with two possible reasons so far not to give your entire $2000 to reducing existential risk, even if you initially intellectually estimate it to have much higher expected value:

  1. As a hedge - how certain are you of how much difference $2000 would make to reducing existential risk? If 8 billion people were going to die and your best guess is that $2000 could reduce the probability of this by, say, 1E-7%/year, the expected value of this in a year would be 8 lives saved, which is more than the 1 life saved by AMF (for simplicity, I’m assuming that 1 life would be saved from malaria for certain, and only considering a timeframe of 1 year). (Also, for ease of discussion, I’m ignoring all the value lost in future lives un-lived if humans go extinct.) So now you might say your $2000 is estimated to be 8 times more effective if it goes to existential risk reduction than to malaria reduction. But how sure are you of the 1E-7%/year number? If the “real” number is 1E-8%/year, you’re only saving 0.8 lives in expectation. The point is, if you assigned a probability distribution to your estimate of existential risk reduction (or even increase), you’d find that some finite fraction of cases in that distribution favor malaria reduction over existential risk reduction (a rough numerical sketch of this appears after this list). So the intellectual math of fanatical expected value maximizing, when considered more fully, still supports sending some fraction of the money to malaria reduction rather than sending it all to existential risk reduction. (Of course, there’s also uncertainty about whether to apply expected value theory fanatically at all, so you could hedge against that as well if a different methodology gave different prioritization answers.)
  2. To appear more reasonable to people who mostly follow their gut - “What?! You gave your entire $2000 to some pie-in-the-sky project on supposedly reducing existential risk that might not even be real, when you could’ve saved a real person’s life from malaria?!” If you give some fraction of your money to a cause other people are more likely to believe, in their gut, is valuable, such as malaria reduction, you may be better able to persuade them to see existential risk reduction as a reasonable cause for them to donate to as well. Note: I don’t know how much this would pay off in dividends for existential risk reduction, but I wouldn’t rule it out.
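To make the hedging argument in point 1 concrete, here is a minimal Python sketch. It is only an illustration under my own assumptions: the uncertainty over how much existential risk $2000 buys down is modeled as a lognormal distribution centered on the 1E-7%/year best guess, with roughly an order of magnitude of spread; the distribution choice and spread are not from the original comment.

```python
import numpy as np

# Minimal sketch of the hedging argument (all parameter and distribution
# choices below are illustrative assumptions, not established estimates).

rng = np.random.default_rng(0)

population = 8e9          # people who would die in the catastrophe
lives_saved_amf = 1.0     # assume $2000 to AMF saves exactly 1 life

# Best guess: $2000 reduces annual extinction probability by 1e-9
# (i.e., 1e-7 percentage points), but with ~1 order of magnitude of
# uncertainty in either direction, modeled as a lognormal (assumption).
best_guess_reduction = 1e-9
uncertainty_sigma = np.log(10)

draws = rng.lognormal(mean=np.log(best_guess_reduction),
                      sigma=uncertainty_sigma,
                      size=1_000_000)

expected_lives_xrisk = draws * population  # lives saved in expectation per draw

print(f"Median expected lives saved (x-risk): {np.median(expected_lives_xrisk):.1f}")
print(f"Mean expected lives saved (x-risk):   {expected_lives_xrisk.mean():.1f}")

frac_favoring_amf = np.mean(expected_lives_xrisk < lives_saved_amf)
print(f"Fraction of draws where AMF wins:     {frac_favoring_amf:.1%}")
```

With these assumed numbers, the median draw still saves about 8 lives in expectation, yet a meaningful minority of draws (on the order of 15-20%) fall below the 1 life AMF saves, which is the sense in which the full distribution, rather than the point estimate alone, can justify splitting the donation.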

I don’t know if this is exactly what you were looking for, but these seem like things to think about that could move your intellectual reasoning closer to your gut, meaning you could be intellectually justified in putting some of your effort into following your gut (how much exactly is open to argument, of course).

As for how to make working on existential risk more “gut-wrenching,” I tend to think in terms of responsibility. If I think I have some ability to help save humanity from extinction or near-extinction, and I don’t act on that, and then the world does end, imagining that situation makes me feel like I really dropped the ball on my share of responsibility for the world ending. If I don’t help people avoid dying from malaria, I still feel a responsibility that I haven’t fully taken up, but it doesn’t hit me as hard as the chance of the world ending, especially if I think I have special skills that might help prevent it. That said, if I felt I could personally make the most difference, with my particular skill set and passions, in helping reduce malaria deaths, and other people were much more qualified in the area of existential risk, I’d probably feel more responsibility to apply my talents where I thought they could have the most impact - in that case, malaria death reduction.

Thanks for the comment and the link to the review paper! 

I think most people, including researchers, don't have a good handle on what self-esteem is, or at least on what truly raises or lowers it - I would expect the effect of praise to be weak, but the effect of promoting responsibility for one's emotions and actions to be strong. The closest to my views on self-esteem that I've found so far are those in Nathaniel Branden's "The Six Pillars of Self-Esteem" - the six pillars are living consciously, self-acceptance, self-responsibility, self-assertiveness, living purposefully, and personal integrity.

Unfortunately, because many researchers don't follow this conception of self-esteem, I tend not to trust much research on the real-world effects of self-esteem. Honestly, though, I haven't done a hard search for any research that uses something close to my conception of self-esteem, and your comment has basically pointed out that I should get on that, so thank you! 

Thank you for sharing some of your struggle. I’ve done a fair amount of personal development work in my life, and it greatly helped me get over work-related stress. Perhaps some of the references in this post could help you with your situation.

In particular, I’d recommend Guttormsen’s Udemy course, which you can often get on sale for under $20. I hope that helps some.

Thanks for your post. I’m not exactly part of the EA “community” - I’ve never met an EA in person - but, at least from people’s online presences, it seems like EA leaders are generally thoughtful, earnest, and open to feedback. I hope your post will be feedback they’ll consider.

From what I’ve seen of this community so far, I suspect that some EAs’ reluctance to support your work could stem from a couple of things:

  1. People don’t like to feel “duped” - I’m not saying you’re trying to pull one over on them, but there’s “safety” in just going with what GiveWell or some other EA vetting organization recommends. I know I wouldn’t feel good if I thought my donations were going to support corruption. So maybe think about how you could better establish credibility - perhaps ask some EAs if this is a factor and what you could do to allay their fears. (Honestly, this may be tough, since the prevalence of online scams has made many people pretty skeptical of anything they only interact with online, although Remmelt’s comment seems like an example of something that could give one more confidence.)
  2. People don’t want to be wrong - this is related to #1, but instead of worrying about whether your organization is legit, it’s a question of whether supporting it is the “best” option compared to other organizations. Again, it probably feels easier to trust that EA vetters know what they’re doing in their analyses of how to do the most good. I like a lot of what I’ve seen of GiveWell, the EA organization I’ve looked into most, but I personally feel it’s missing a couple of big chunks of the puzzle (and the real world is a hard puzzle, in my opinion). One big chunk is something your organization might bring to the table, but which can be difficult to quantify: promoting personal responsibility and, in turn, building self-esteem through taking more responsibility for oneself. I’m not sure what else you could do to help people see that your organization could be “better than its EA numbers” due to this effect, but from my end, I’ll keep trying to convince people in this community (as here) that it’s a real and significant effect, and hopefully it’ll start to catch on at some point (or someone will convince me I’m wrong, which is always a possibility).


Note that these are just my own “outsider” impressions of EA, and I could very well be mistaken, but I hope this comment might be helpful to both you and EAs in your efforts to do more good.

Thanks. I don't know the answer to that, although a quick search didn't yield anything too promising. I don't believe the concept of self-esteem as primarily a matter of personal responsibility has really caught on, so perhaps it would be better to look for studies on interventions to raise personal responsibility.

Thanks for the interesting post - definitely gives one some things to think about! 

Here’s another point to consider: depending on the costs, one of the most effective interventions may be to raise the self-esteem levels of people in high-income countries. Note: I’m operating under a model in which self-esteem is determined primarily by one’s level of personal responsibility, first for one’s emotions and then for one’s actions (see “The Six Pillars of Self-Esteem” by N. Branden). Raising people’s self-esteem should raise their self-reported life satisfaction, and likely also have secondary effects on those they interact with. In addition, people who have higher self-esteem due to being more responsible are more likely to take responsibility for reducing human and animal suffering (such as by donating to causes and/or going vegetarian/vegan), and people from high-income countries have more potential impact here. Of course, raising self-esteem levels in low-income countries could also be quite beneficial - for instance, people with higher self-esteem may be more of a driving force for raising health standards in their countries.

Thanks for the interesting post. I'm working on something like what you describe in your "Going Forward" section: https://www.lesswrong.com/posts/gz7eSELFcRYJLdnkE/towards-an-ethics-calculator-for-use-by-an-agi

I'm planning to post an update about my progress on LessWrong soon.