
ScienceForSeekers

10 karma · Joined Sep 2015

Comments (4)

Your question raises a methodological issue for me about existential risk vs. helping people who are alive today. Has anyone incorporated a measure of risk -- in the sense of uncertainty -- into comparisons of present and future good?

In the language of investment, investors are typically willing to accept lower returns in exchange for less risk. As an investor, I'd rather have a very high probability of a low return than a slim chance of a high return. You pay for uncertainty.

It seems to me that the more speculative our causes, the higher the benefit-cost ratio we should demand. Put another way, it's hard to believe that my actions, unless I'm very lucky, will really affect humans 200 years from now, but it's a virtual certainty that my actions can help someone now.
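To make the point concrete, here is a minimal sketch of one standard way finance handles this: a mean-variance "certainty equivalent." The payoffs, probabilities, and risk-aversion parameter below are all made-up illustrations, not figures from any actual x-risk analysis:

```python
# A minimal sketch, not a published methodology: the mean-variance
# certainty equivalent CE = E[X] - (lambda / 2) * Var(X).
# All payoffs, probabilities, and the risk_aversion value are invented
# purely for illustration.

def certainty_equivalent(benefit, probability, risk_aversion=1e-5):
    """Risk-adjusted value of an all-or-nothing intervention.

    risk_aversion = 0 recovers plain expected value; larger values
    penalize uncertainty more heavily, mirroring an investor who
    accepts lower returns in exchange for less risk.
    """
    expected = probability * benefit
    variance = probability * (1 - probability) * benefit ** 2
    return expected - (risk_aversion / 2) * variance

# Near-certain, modest benefit (helping someone alive today).
print(certainty_equivalent(benefit=100, probability=0.95))       # ~95.0

# Speculative, enormous benefit (trying to affect people 200 years out).
print(certainty_equivalent(benefit=100_000, probability=0.001))  # ~50.0
```

Under plain expected value the two causes look nearly identical (95 vs. 100), but even mild risk aversion cuts the speculative cause's value roughly in half, which is one way of formalizing the higher benefit-cost ratio I'd want to demand.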

I'm interested in whether this thinking has been incorporated into analyses of existential risk.

Hi all, I'm new to the forum, so thanks for having me! I've just launched a blog called Science for Seekers (www.scienceforseekers.com), with an emphasis on effective giving (in the context of a broader focus on uniting rationality and meaning/purpose). It's a fledgling project, and I'd love any comments or feedback from all of you deep thinkers.

  • Andrew

Very interesting ... with respect to the distinction between being a good person and doing good, I tend to think we underestimate the value of doing good. The archetypal example is Bill Gates, who built a $100 million house but is still (in Peter Singer's view, at least) the greatest effective altruist of all time.

I do think the wealthy have a greater moral imperative to give money, but I also think we tend to undervalue people's practical impact in favor of their level of martyrdom. If I'm at risk of dying of malaria, I'd much rather have Gates come to my rescue than someone making $50,000 a year and giving half to charity. I certainly don't think that makes Gates morally better in any way, but he has made life decisions that increased his capacity to give (not to mention that he was exceptionally fortunate to be born into an affluent family at the dawn of the personal computer age, of course).

I generally think we (EAs, but everyone else, too) could use a dose of humility in acknowledging that no one really knows the best way to change the world. We're all guessing, and there is value in other approaches as well: making zillions, buying a yacht, and giving some to charity; running for office or supporting a political campaign; spending your time bringing food to your elderly neighbor across the street; or building a socially responsible company that hires thousands of people.

I'm one of those people who has trouble connecting with EA emotionally, even though I fully "get" it rationally. My field is cost-benefit analysis for public programs, so I understand the moral and statistical case for giving to the mathematically "correct" charity. But I don't feel any particular personal connection to, say, Deworm the World, so I'm more apt to donate to something I feel connected to.

In EA thinking, emotions and "warm fuzzy" feelings tend to be looked upon disparagingly. However, our emotions and passions are powerful and essential to our humanity, and I think that accomplishing what we want (driving more resources to the needy in the most effective way possible) requires understanding that we are humans, not GiveBots.

To me, one solution is to use the tools of behavioral psychology to encourage people to give more, and to give where it does the most good. I'm talking about touching heartstrings, helping donors see the actual people they're helping, and telling stories instead of just citing numbers.

Thanks for the post!