
i.e. surprising, interesting, engaging, mind-changing etc...

Can be specific to cause areas or other subsections of effective altruism.

If it costs $4,000 to prevent a death from malaria, malaria deaths happen at age 20 on average, and life expectancy in Africa is 62 years, then the cost per hour of life saved is about $0.0109.

If you make the average US income of $15.35/hour, this means that every marginal hour you work to donate can be expected to save 1,412 hours of life, if you take the very thoroughly researched, very scalable, low-risk baseline option. If you can only donate 10% of your income, then your leverage is reduced to a mere 141.2. Just by virtue of having been born in a developed country, every hour of your time can be converted to days or weeks of additional life for someone else.
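
As a sanity check, here is a minimal sketch of that arithmetic, using only the assumptions stated above ($4,000 per death averted, death at age 20, life expectancy of 62, $15.35/hour income):

```python
# Back-of-the-envelope check of the figures above; every input is the
# answer's own assumption, not an authoritative estimate.
cost_per_death_averted = 4000   # USD per malaria death prevented (assumed)
average_age_at_death = 20       # years (assumed)
life_expectancy = 62            # years (assumed African life expectancy)
hourly_income = 15.35           # USD/hour (assumed average US income)

hours_of_life_saved = (life_expectancy - average_age_at_death) * 365 * 24
cost_per_hour_of_life = cost_per_death_averted / hours_of_life_saved
hours_saved_per_hour_worked = hourly_income / cost_per_hour_of_life

print(f"Cost per hour of life saved:   ${cost_per_hour_of_life:.4f}")        # ~$0.0109
print(f"Hours of life per hour worked: {hours_saved_per_hour_worked:.0f}")   # ~1,412
print(f"...if donating 10% of income:  {0.1 * hours_saved_per_hour_worked:.0f}")  # ~141
```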

While not as insanely huge as some of the figures making the argument for longtermism, I find this figure more shocking on a psychological level because it's so simple to calculate and yields such an unexpected result. This type of calculation is what first got me interested in Singer-style EA.

The scope neglect examples in "On Caring":

  1. We are always in an emergency.
    1. We cannot understand or see the extent of suffering that is going on, and have basically no intuition for dealing with this scale of suffering. Our natural response to it becomes indifference.
  2. We are implicitly making prioritization decisions, whether or not we make them consciously.
    1. Choosing to donate to one charity is choosing to donate to it instead of any other particular charity, whether or not you considered these other options. 

Open Phil has given a total of $140 million to "Potential Risks from Advanced Artificial Intelligence" over all time.

By comparison, some estimates from Nature put "climate-related financing" at around $500 billion annually. That's roughly 3,500x higher, and it compares a single year of climate spending against Open Phil's all-time AI total.
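
A quick sketch of that ratio, using only the two figures cited above:

```python
# Rough neglectedness comparison using the figures cited in this answer.
open_phil_ai_total = 140e6       # USD, all-time "Potential Risks from Advanced AI" grants
climate_finance_annual = 500e9   # USD per year, cited Nature estimate

ratio = climate_finance_annual / open_phil_ai_total
print(f"Annual climate finance is ~{ratio:,.0f}x Open Phil's all-time AI funding")  # ~3,571x
```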

So even if you think that Climate Change is much more pressing than AI Safety, you might agree that the latter is much more neglected.

Also note that the majority of that Open Phil funding went to either CSET or OpenAI. CSET is more focused on short-term arms races and international power struggles, and OpenAI has only a small safety team. So even of the $140 million, only a fraction is going to technical AI Safety research.

That 11,000 children died yesterday, will die today and are going to die tomorrow from preventable causes. (I'm not sure if that number is correct, but it's the one that comes to mind most readily.)

I find the emphasis on just how much good we can do, and how historically unique this is, engaging. There is an extremely high level of wealth concentrated in rich countries, and in the increasingly connected world we live in, it is possible to have a remarkable impact with well-thought-out donations. This goes well with Will MacAskill's argument asking you to think of how great you would feel if you ran into a burning building and saved a child.

I also find appeals to psychology that reference how EA is not intuitive (e.g. scope neglect, compassion fade, pseudo-inefficacy) convincing.
