WilliamKiely


My Cause Selection: Michael Dickens

I read this post today, after first reading a significant portion of it around December 2nd, 2019. I'm not sure what my main takeaways are, but I wanted to comment to say that it's the best example I currently know of someone explaining their cause prioritization reasoning when deciding where to donate. Can anyone point me to more or better examples of people explaining their cause prioritization reasoning?

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

I vaguely recall hearing something like 'the skill of developing the right questions to pose in forecasting tournaments is more important than the skill of making accurate forecasts on those questions.' What are your thoughts on this and the value of developing questions to pose to forecasters?

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

Is forecasting plausibly a high-value use of one's time if one is a top-5% or top-1% forecaster?

What are the most important/valuable questions or forecasting tournaments for top forecasters to forecast or participate in? Are they likely questions/tournaments that will happen at a later time (e.g. during a future pandemic)? If so, how valuable is it to become a top forecaster and establish a track record of being a top forecaster ahead of time?

I'm Linch Zhang, an amateur COVID-19 forecaster and generalist EA. AMA

What were your reasons for getting more involved in forecasting?

Study results: The most convincing argument for effective donations

My entry (475 words):

Morally it is a very good thing to donate to highly-effective charities such as Against Malaria Foundation because the money will go very far to do a lot of good.

To elaborate:

Consider that in relatively-rich developed countries, many governments and people are willing to spend large amounts of money, in the range of $1,000,000-$10,000,000, to avert (prevent) a death. For example, the United States Department of Transportation put the value of a life at $9.2 million in 2014.

In comparison, according to estimates of researchers at the nonprofit GiveWell, which is dedicated to finding outstanding giving opportunities and publishing the full details of their analysis to help donors decide where to give, it only costs about $2,300 to save a life if that money is given to Against Malaria Foundation, one of GiveWell's top charities.

Specifically, consider these four cost-effectiveness estimate results:

GiveWell's 2019 median staff estimate of the "Cost per under-5 death averted" for Against Malaria Foundation is $3,710.

GiveWell's 2019 median staff estimate of the "Cost per age 5+ death averted" for Against Malaria Foundation is $6,269.

GiveWell's 2019 median staff estimate of the "Cost per death averted at any age" for Against Malaria Foundation is $2,331.

GiveWell's 2019 median staff estimate of the "Cost per outcome as good as: averting the death of an individual under 5" for Against Malaria Foundation is $1,690.

These are bargain prices enabling people like you to make your money go very far to do a lot of good, regardless of how much money you give.

If these sound like unbelievably low prices given the hundreds of thousands or millions of dollars it can cost to save a life in developed countries such as the United States, consider that millions of people die of preventable diseases every year in very poor countries in Africa and elsewhere. As such, these inexpensive, highly cost-effective ways of saving lives do in fact exist.

Since money you give to Against Malaria Foundation will go very far to do a lot of good to save lives, you should strongly consider donating to Against Malaria Foundation or another highly-effective charity if given the opportunity. Even a donation of just $10 to Against Malaria Foundation or another highly-effective charity will do a lot of good.

Based on GiveWell's cost-effectiveness estimates above, and assuming that averting a death saves about 30 years of life on average, your decision to donate even just $10 to the Against Malaria Foundation will prevent approximately 47 days of life from being lost in expectation.
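The back-of-the-envelope arithmetic above can be sketched as a short calculation (the $2,331 figure and the 30-year assumption are the ones stated in the text):

```python
# Expected days of life saved by a $10 donation to Against Malaria Foundation,
# using GiveWell's 2019 median staff estimate of "cost per death averted at
# any age" ($2,331) and assuming ~30 years of life saved per death averted.
cost_per_death_averted = 2331   # USD, GiveWell 2019 median staff estimate
donation = 10                   # USD
years_per_death_averted = 30    # assumption stated above

deaths_averted = donation / cost_per_death_averted
days_saved = deaths_averted * years_per_death_averted * 365.25
print(f"{days_saved:.0f} days of life saved in expectation")  # ~47
```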

In summary, it is a very morally good and morally praiseworthy thing to donate to highly-effective charities such as Against Malaria Foundation because the money will go very far to do a lot of good.

My entry differs from all five of the Top 5 entries in that it is the only one that does not engage with the objection "but what about the value of $10 for myself?"

Presumably the primary reason people don't give is that they'd rather keep the money for their own use.

All five of the Top 5 arguments engage with this idea by implying in one way or another that taking the money for your own use would make you a selfish or bad person.

My entry seems mediocre (in part) because it only highlights the benefits of effective giving to others. It does not attempt to make the reader feel guilty about turning down these bargain opportunities and taking the $10 for oneself.

Study results: The most convincing argument for effective donations

I'm impressed by the Top 5 entries, roughly in order of the mean donation amount each elicited.

I submitted an entry to this contest which I thought was decent when I wrote it, but now seems really mediocre upon re-reading it (see my reply to this comment for my entry).

One thing I noticed about all five of the Top 5 arguments (though not my entry) is that they all can be interpreted as guilting the reader into donating. That is, there is an unstated implication the reader could draw that the reader would be a bad person if they chose not to donate:

  • Argument #9: After reading this winning argument, the reader might think: "Now if I don't donate the $10 I'd be admitting that I don't value the suffering of children in poor countries even one-thousandth as much as my own child (or the child of someone I know). What a terrible person I'd be. I don't want to feel like a bad person so I'll donate."

  • Argument #3: Someone might think: "Practically everyone agrees that giving to charity is good, so if I don't donate the $10 that would make me bad. I don't want to feel like a bad person so I'll donate."

  • Argument #5: "If I take the $10 rather than donate it, I'd be putting my own interest in receiving $10 above the interests of four children who don't want malaria, which would make me a bad person. I don't want to feel like a bad person so I'll donate."

  • Argument #12: "I just read that I should feel good about whether I decide to 'take' or 'give' the $10. And also that I should prioritize helping a large number of people over the value of $10 for myself. So now I'm not sure that I could feel good about 'taking' the money for myself. I don't want to feel guilty over $10 so I'll donate."

  • Argument #14: "'Every single day you have the opportunity to spare a small amount of money to provide a fellow human with the same basic access to food or drinking water – how often have you done this?' Clearly I'd be a bad person if I decided to take $10 that is offered to me rather than give the $10 to provide a fellow human with basic access to food or drinking water. I don't want to feel like a bad person, so I'll donate."

Can the EA community copy Teach for America? (Looking for Task Y)

Helpful post for thinking about how to scale the EA community to make productive use of more people.

New article from Oren Etzioni

It feels like Etzioni is misunderstanding Bostrom in this article, but I'm not sure. His point about Pascal's Wager confuses me:

"Some theorists, like Bostrom, argue that we must nonetheless plan for very low-probability but high-consequence events as though they were inevitable."

Etzioni seems to be saying that Bostrom argues that we must prepare for short AI timelines even though developing HLMI on a short timeline is (in Etzioni's view) a very low-probability event?

I don't know whether Bostrom thinks this, but isn't Bostrom's main point that even if AI systems powerful enough to cause an existential catastrophe are at least a few decades away (or even a century or longer), we should still think now about what we can do today to prepare for their eventual development, if we believe there are good reasons to think they may cause an x-catastrophe when they are eventually developed and deployed?

It doesn't seem that Etzioni addresses this, except to imply that he disagrees with the view: he says it's unreasonable to worry about AI risk now, and that we'll (definitely?) have time to adequately address any existential risk that future AI systems may pose, even if we wait to start addressing those risks until after the canaries start collapsing.

New article from Oren Etzioni

Etzioni's implicit argument against AI posing a nontrivial existential risk seems to be the following:

(a) The probability of human-level AI being developed on a short timeline (less than a couple decades) is trivial.

(b) Before human-level AI is developed, there will be 'canaries collapsing' warning us that human-level AI is potentially coming soon or at least is no longer a "very low probability" on the timescale of a couple decades.

(c) "If and when a canary 'collapses,' we will have ample time before the emergence of human-level AI to design robust 'off-switches' and to identify red lines we don't want AI to cross."

(d) Therefore, AI does not pose a nontrivial existential risk.

It seems to me that if there is a nontrivial probability that he is wrong about (c), then in fact it is meaningful to say that AI does pose a nontrivial existential risk that we should start preparing for before the canaries he mentions start collapsing.
