Gloria Monday
-22 karma · Joined

Comments (8)

I'd appreciate a reply to my comment, even if it's just to tell me that intrinsic value is inherent to the EA philosophy.

Ok, then you've caused me to update my priors in the direction of "EA is an intellectually shallow pseudo-religious irrational cult". My comment is 100% sincere and, I think, well-posed. If your only response to it is to attack my motives, then I think that reflects very poorly on both you and your ideology.

Sure, but first-world capital markets grow faster than third-world economies, so deferred donation looks even better once you take this into account.

>Secondly, there are consequences beyond economic productivity

Agreed, but if you consider these types of effects, then it's obvious to me that donating to a third-world country is worse. I mean, just look at the two cultures: the US is objectively better than e.g. Uganda. The average Ugandan is much more likely to engage in behaviors far worse than eating factory-farmed meat, and it's virtually impossible that they would ever contribute to scientific or technological development. When a reasonable person looks at the US and at Uganda and asks "which of these two do I want more of?", the answer is the US. This is the kind of analysis I would expect the EA community to embrace, since their whole purpose is "making the world better through rational analysis." In what possible way are you making the world better by diverting resources from a good culture to a bad one? Seriously, how do you justify that? Without positing some quasi-religious intrinsic value (which, for the record, I reject), I just don't see how you can get there.

If spending $X today saves Y lives, then why isn't it better to invest that money so that in 10 years you have $2X, which could save 2Y lives? Capital grows exponentially but the value of a human life does not, unless you have a principled reason to think that future lives are less valuable than current lives.
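To make the arithmetic concrete, here's a minimal sketch. All the numbers are illustrative assumptions on my part: a fixed annual return of about 7.2% (which doubles capital in roughly 10 years) and a cost per life saved that stays constant over the decade.

```python
# Illustrative comparison: donate $X now vs. invest for t years, then donate.
# Assumes a fixed annual return r and a constant cost per life saved c —
# both assumptions, not established figures.

def lives_saved(amount: float, cost_per_life: float) -> float:
    return amount / cost_per_life

X = 100_000   # donation budget in dollars (illustrative)
c = 5_000     # assumed constant cost to save one life (illustrative)
r = 0.072     # assumed annual return; ~7.2%/yr doubles capital in ~10 years
t = 10        # years to defer the donation

donate_now = lives_saved(X, c)
donate_later = lives_saved(X * (1 + r) ** t, c)

print(f"Donate now:        {donate_now:.1f} lives")
print(f"Invest {t}y first: {donate_later:.1f} lives")
# Under these assumptions, deferral roughly doubles the lives saved; the
# whole question is whether the cost per life really stays constant.
```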

>They're not "objections", because you've misunderstood your target

Then please, explain what I've misunderstood.

Thanks for the link, but most of the links included therein were either broken or argued for exactly my point. For example, the linked SSC essay concluded with "unless you think the world is more than 70% certain to end before you die, saving like Robin suggests is the best option", meaning that it's smarter to invest than to donate. Do you have a better source or argument to present?

Also I'd appreciate it if you could respond to my previous question about the dependence of the EA position on the notion of intrinsic value.

I only included that to discourage low-effort "you can't value human life in dollars, you inhuman monster" responses, which is what I got when I posted this on the SSC subreddit. If you're confident that you can make intelligent, dispassionate arguments and won't get offended when I argue that life has zero intrinsic value, then go right ahead.

Thanks for the reply! 

Are you saying that the notion of intrinsic value is central to the philosophical underpinnings of EA? And would you say that, absent it, my objections are correct?

I think that my objections are sound even if you accept that life has intrinsic value. Robin Hanson and Tyler Cowen have made similar arguments, so I'm not claiming to be original, but if you can save 1 life for $X today, or invest that money and save 3 lives with the resulting $3X in 10 years, isn't investing the utility-maximizing thing to do? Future lives don't have any less intrinsic value than present lives (cf. Singer's argument that lives 8,000 miles away don't have any less value than lives next door). The point is that if you care about human flourishing, however defined, then you have to grapple with the fact that the single greatest contributor to human flourishing is economic growth. Any charitable intervention must be judged against the value of directing those resources toward economic growth. Since economic growth is exponential, any life saved today comes at the expense of an exponentially larger number of lives later. It seems to me that any philosophically rigorous EA advocate needs a robust response to that issue. Has this been addressed by anyone?
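To state the trade-off in general terms (my notation, not anything official): let $r$ be the annual return on invested capital and $c_t$ the cost of saving a life in year $t$. Deferring the donation for $t$ years saves more lives exactly when

$$\frac{X(1+r)^t}{c_t} > \frac{X}{c_0} \quad\Longleftrightarrow\quad (1+r)^t > \frac{c_t}{c_0},$$

i.e., whenever capital compounds faster than the cost per life rises. That is the inequality I'd want a rigorous EA response to engage with.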