I currently lead EA funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.



Answer by calebp, Dec 13, 2022

Hi Markus,

For context I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel not me). We are still paying out grants to our grantees — though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual). 

We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:

  • the name of the application (from previous funds email subject lines),
  • the reason the request is urgent,
  • the latest decision and payout dates that would work for you, such that if we can't make these dates there is little reason to make the grant.

You can also apply to one of Open Phil's programs; in particular, Open Philanthropy's program for grantees affected by the collapse of the FTX Future Fund may be especially relevant to EA Funds applicants affected by the FTX crash.

Answer by calebp, Dec 09, 2023

I'd like to hear his advice for smart undergrads who want to build their own similarly deep models in important areas which haven't been thought about very much e.g. take-off speeds, the influence of pre-AGI systems on the economy, the moral value of insects, preparing for digital minds (ideally including specific exercises/topics/reading/etc.).

I'm particularly interested in how he formed good economic intuitions, as they seem to come up a lot in his thinking/writing.

Thanks! To be clear, this is a 'plan' rather than something we are 100% committed to delivering on in the way it's presented below. I think there are some updates to be made here, and I would feel bad if you made large irreversible decisions based on this post. We will almost certainly have a more official announcement if we do decide to commit to this plan.

I agree with the overall point, though I'm not sure I've seen much empirical evidence for the claim that GHD is a good starting point (or at least I think it's often overstated). I got into EA through GHD, but this may just have been because there were a lot more GHD/EA intro materials at the time. I think the ecosystem is now much more developed, and I wouldn't be surprised if GHD didn't have much of an edge over cause-first outreach (for AW or x-risk).

Maybe our analysis should be focused on EA principles, but the interventions themselves can be branded however they like? E.g. we're happy to fund GHD giving games because we believe they contribute to promoting caring about impartiality and cost-effectiveness in doing good, but they don't get much of a boost or penalty from being GHD giving games specifically (as opposed to games for some other suitable cause area).

I haven't come across any good non-EA GHD student groups. Remember that they would need to beat the bar of current university EA groups (which can get funding from Open Phil) from a GHD perspective, which I think is a fairly high bar.

If a GHW meta grantmaker provides startup funding to a new charity, and as a result that charity ends up diverting $1MM a year from ~ineffective charities to ~0.5X GiveWell work, the value is equivalent to donating ~$500K/year to a GiveWell top charity.

I don't think this reasoning checks out. GiveWell-recommended interventions also get lots of money from non-EA sources (e.g. AMF). It might be the case that top GiveWell charities are unusually hard to fundraise for from non-EA sources relative to 98th-percentile charities, though I'm not sure why that would be the case, and a 98th-percentile intervention could end up being much less cost-effective in real terms.

(A few more responses to your comment)

There are wide grey areas when attempting to delineate principles-first EA from cause-specific EA and the effective giving examples in this post stand out to me as one thorny area. I think it may make sense not to fund an AI-specific or an animal-specific effective giving project through EAIF (and the LTTF and AWF are more appropriate), but an effective giving project that e.g. takes a longtermist approach or is focused on near-term human and nonhuman welfare seems different to me. Put differently: How do you think about projects that don't cover all of EA, but also aren't limited to one cause area?

I think it's fine for us to evaluate projects that don't cover all of EA. I think the thing we want to avoid is funding things that are clearly focused on a specific cause area. We can always transfer grants to other funds in EA Funds if it's a bit confusing for the applicant. In the examples that you gave, the LTFF would evaluate the AI-specific thing, but the EAIF is probably a better fit for the neartermist cross-cause fundraising.

Maybe Lightspeed? But I worry there isn't currently other coverage for funding needs of this sort.

I don't think this is open right now, and it's not clear when it will be open again.

I'm worried about people couching cause-specific projects as principles-first, but there is already a heavy tide pushing people to couch principles-first projects as x-risk-specific, so this might not be a concern.

Yes, I'm worried about this too.

I think this is pretty unclear; we'd mostly be looking for people who are using EA principles (scope sensitivity, impartiality, etc.) to guide their career decision-making, as opposed to thinking primarily about future cause areas. I agree it's fuzzy, though I don't want to share concrete criteria here out of worries about Goodharting.

Ultimately, we can transfer applications between funds, so it's not a huge deal. At 75:25 I think you should probably apply to the EAIF (my very off-the-cuff view).

I feel pretty good about surveying donors and allocating some proportion of funding based on that. Ultimately, I don't think it's low integrity or misleading for us to change directions towards meta work on the GHDF if we are still appealing to the values on our website - though I think the specifics of the arrangement matter a lot.

The main issue (imo) is that it's unclear that meta GHDF work is competitive with just donating to GiveWell charities. Conversations with Open Phil GHW have made me a bit less enthusiastic about this direction.

What is the ToC for meta Global Health work?

**Find excellent people who can work at existing direct orgs?** GHD doesn't seem particularly leveraged career-wise right now. Most career opportunities for people in high-income countries (where EA is most prevalent) seem fairly unexciting, particularly junior roles. I could imagine mid/late-career meta work being pretty exciting, but I haven't seen many fundable projects in this area. If you are excited about working on mid/late-career field building in any cause area, please apply to the EAIF!

**Find people who can start new fundraising orgs?** Open Phil is currently funding projects in this area; the EAIF also funds projects in this area (and will continue to do so if they work in multiple cause areas).

**Find people who can start new direct charities?** I am most compelled by meta work for animal welfare, where it seems like new initiatives could beat the best animal interventions we know of. To the best of my knowledge, new GHW charities haven't had much luck beating the best GiveWell charities (by a GiveWell-type worldview's lights). Of course, you could disagree with GiveWell's worldview; I have some disagreements, though I haven't seen well-reasoned improvements.

Thanks for your comment. I’m not able to respond to the whole comment right now but I think the bio career grant is squarely in the scope of the LTFF.

(I don't work on the Animal Welfare Fund directly)

I think hiring a fundraiser for EA Funds could make sense (particularly if we were able to make a quick hire for giving season before deciding about a longer commitment); feel free to refer fundraisers that you have a high opinion of.

I don't think we have the capacity to run a proper hiring round for a fundraiser right now.
