Bio

I currently lead EA Funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.

Comments

Answer by calebp · Dec 13, 2022

Hi Markus,

For context, I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel, not me). We are still paying out grants to our grantees — though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).

We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:

  • the name of the application (from previous funds email subject lines),
  • the reason the request is urgent, and
  • the latest decision and payout dates that would work for you, such that if we can’t make these dates there is little reason to make the grant.

You can also apply to one of Open Phil’s programs; in particular, Open Philanthropy’s program for grantees affected by the collapse of the FTX Future Fund may be of particular note to people applying to EA Funds in the wake of the FTX crash.

I could see people upvoting this post because they think it should be more like -10 than -21. I personally don't see it as concerning that it's "only" on -21.

Sorry, that wasn't clear. The reference class I had in mind was cause prio focused resources on the EA Forum.

I think people/orgs do some amount of this, but it's kind of a pain to share these publicly. I prefer to share this kind of thing with specific people in Google Docs, in in-person conversations, or on Slack.

I also worry somewhat about people deferring to random cause prio posts, and I'd guess that on the current margin, more cause prio posts that are around the current median in quality make the situation worse rather than better (though I could see it going either way).

Thanks for writing this.

I disagree with quite a few points in the total utilitarianism section, but zooming out slightly, I think that total utilitarians should generally still support alignment work (and potentially an AI pause/slowdown) to preserve option value. If it turns out that AIs are moral patients and that it would be good for them to spread into the universe optimising for values that don't look particularly human, we can still (in principle) do that. This is compatible with thinking that alignment is ~neutral from a total utilitarian perspective, but it's not clear from the post whether you agree with this.

I'm interested in examples of this if you have them.

Oh, I thought you might have suggested the live thing before; my mistake. Maybe I should have just given the 90-day figure above.

(That approach seems reasonable to me)

I answered the first questions above in an edit of the original comment. I’m pretty sure that when I re-ran the analysis with "decided in the last 30 days", it didn’t change the results significantly (though I’ll try to recheck this later this week; in our current setup it’s a bit more complicated to work out than the stats I gave above).

I also checked to make sure that only looking at resolved applications and only looking at open applications didn’t make a large difference to the numbers I gave above (in general, the differences were 0-10 days).
