Luke Eure


Don’t wait – there’s plenty more need and opportunity today

"I assume that the crux here is that GiveDirectly believes that spending more money now would have a good publicity effect, that would promote philanthropy and raise the total amount of donations overall.
I would change my mind if this was the case, but I don't see this as obvious."


I'm not entirely sure what the answer is here either, but one thought I had today was "I should make a Facebook post for Thanksgiving/Christmas telling my friends why I think it's so important to donate to GiveWell - your marginal donation can save a life for $3-5k! Ah, but actually GiveWell won't disburse the marginal dollar I donate this year, so I can't really make that argument this year."


I do think from an optics perspective, when the draw of GiveWell is that your marginal dollar will actually help save someone's life, it's discouraging to see "your marginal dollar will help save someone's life - in 3 years, when we no longer need to roll over funds". It pushes me in the direction of "well, I'll donate somewhere else this year and then donate to GiveWell in 3 years". And I know that's not the right calculation from a utility perspective - I should donate to the most cost-effective charity with little-to-no time discounting. But most people outside EA who might be attracted to effective giving have a yearly giving budget that they want to see deployed effectively in the near term.

Don’t wait – there’s plenty more need and opportunity today

I'm not GiveDirectly, but in my view it does make sense for GiveWell to deprioritise doing a more in-depth evaluation of GiveDirectly given resource constraints. However, when GiveWell repeatedly says in current research that certain interventions are "5-8x cash", I think it would be helpful for them to make it clearer that it might be only "2-4x cash" - they just haven't had the time to re-evaluate the cash benchmark.

A Red-Team Against the Impact of Small Donations

That's helpful, thank you! I think the model is more "I'm going to give OpenPhil more money". It only becomes "I'm going to give Dustin more money" if it's true that Dustin adjusts his donations to OpenPhil every year based on how much OpenPhil disburses, such that funging OpenPhil = funging Dustin.

But in any case I’d say most EAs are probably optimistic that these organizations and individuals will continue to be altruistic and will continue to have values we agree with.

And in any case, I strongly agree that we should be more entrepreneurial.

A Red-Team Against the Impact of Small Donations

Thanks for the great post (and for your great writing in general)! It mostly makes a ton of sense to me, though I am a bit confused on this point:

"If Benjamin's view is that EA foundations are research bottlenecked rather than funding bottlenecked, small donations don't "free up" more funding in an impact-relevant way."

EA foundations might be research bottlenecked now, but funding bottlenecked in the future. So if I donate $1 that displaces a donation OpenPhil would have made, then OpenPhil has $1 more to donate to an effective cause in the future, when funding is the bottleneck.

So essentially, a $1 donation by me now is an exercise in patient philanthropy, with OpenPhil acting as the intermediary. 

Does this fit within your framework, or is there something I'm missing?

I don't think this "changes the answer" as far as your recommendation goes - we should fund more individuals, selves, and weirdos.

Make a $100 donation into $200 (or more)

Thank you so much for sharing! From the dashboard it looks like they've upped the matching fund to $350K (adding $100K to the original $250K).

The most important century and the representativeness of EA

No problem, thanks for doing the Q&A and for the suggestions! Happy if you want to share it with the Hispanic EA community.

The most important century and the representativeness of EA

Thank you so much for sharing! Agreed that it's important regardless of whether this is the most important century. If you're interested, see my response to Linch above on this.

I watched the Q&A and wrote up notes on it as I was watching - thought I would make them shareable in case anyone else involved in community organising wants to see the main points but doesn't have time to watch! Notes here.

The most important century and the representativeness of EA

Ah yes, that sounds super relevant!

Unfortunately the paper is behind a paywall and I'm not a student. And while paying for philosophy papers might be fine on an individual-morality basis, I object to the academic journal system that requires it, so I can't in good conscience shell out $45 to read it ;)

Thanks for sharing though!

[And thanks for the handy stats]

The most important century and the representativeness of EA

Ah, good distinction! Agreed, I was not clear on that in my post (and to be honest, my thinking on it wasn't very clear either before you pointed out this distinction).

In part I am arguing for proposition 2. If it is the most important century, all long-term causes become more important relative to near-term causes. So at the very least, if it is the most important century, raising the representativeness of EA increases in importance relative to, e.g., distributing bednets (1).

But what I'm really arguing for is that representativeness is more important for long-termism than most people in EA seem to think it is. And if you were underrating the importance of raising EA's representativeness (as I think the EA community does), additional action is demanded. I look through the lens of "if this is the most important century, representativeness is urgent" to illustrate the point.

I could as well, and maybe more accurately, have called this article "A long-termist argument for the importance of EA's representativeness based on value lock-in".


1.

I think it's a thornier question whether raising the representativeness of EA becomes more important relative to other long-term cause areas. The answer would depend on the timelines of different long-termist issues, and the degree of lock-in each of them has.

  • Lock-in: If lock-in is stronger in decisions driven by value judgments than in decisions driven by scientific understanding, then representativeness increases in importance relative to recruiting scientific talent. Or the converse.
  • Timelines: Imagine that in an "EA business as usual" approach (e.g., not the most important century) it takes 30 years to attract the best scientific talent and 300 years to make EA representative. But in a "most important century" approach it takes 10 years to attract the best talent and 10 years to make EA representative. Then "making EA representative" has likely increased in importance relative to "attracting the best scientific talent" as a result of it being the most important century. (My sense is that something like this is the case.)

I don't have a strong view on this, and it could make for some interesting analysis!