calebp

1385 karma · Cambridge, UK · Joined Oct 2018

Bio

I currently lead EA Funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.

Comments: 144
Topic Contributions: 6

Answer by calebp · Dec 13, 2022

Hi Markus,

For context, I run EA Funds, which includes the EAIF (though the EAIF is chaired by Max Daniel, not me). We are still paying out grants to our grantees — though we have been slower than usual (particularly for large grants). We are also still evaluating applications and giving decisions to applicants (though this is also slower than usual).

We have communicated this to the majority of our grantees, but if you or anyone else reading this urgently needs a funding decision (in the next two weeks), please email caleb [at] effectivealtruismfunds [dot] org with URGENT in the subject line, and I will see what I can do. Please also include:

  • the name of the application (from previous funds email subject lines),
  • the reason the request is urgent,
  • the latest decision and payout dates that would work for you, such that if we can't make these dates there is little reason to make the grant.

You can also apply to one of Open Phil's programs; Open Philanthropy's program for grantees affected by the collapse of the FTX Future Fund may be of particular note to people applying to EA Funds in light of the FTX crash.

calebp · 1mo

Seems fine to ask this question here, but reaching out to the email below also seems reasonable.

Media requests, donor queries, and any other queries: fhiadminassistant@philosophy.ox.ac.uk

calebp · 1mo

Sure, but then you need to make a case for why you would prioritise this over anything else that you think has good consequences. I think the community health statement tries to make that argument (though it's not fully specified), whereas a statement like "we want to do x because x is bad" doesn't really help me understand why they want to prioritise x.

calebp · 2mo

If someone's actions are truth-seeking, they are trying to actually work out what is true, as opposed to trying to defend their current beliefs or 'win' an argument. It's closely linked to the scout mindset. It's plausible that others use this term differently, but afaik this isn't an unusual way of using it.

I think that you didn't exhibit this quality well in your post (e.g. you open by claiming that you are trying to answer a narrow question whilst writing a critique of the Atlas program), and this can get in the way of good discourse. I do think there were good things about the post, and there's a version of it, making most of the same main points, that I would have really liked.

calebp · 2mo

(1) I am pretty into people criticising orgs/people when the author really cares about 'truth-seeking'.

(2) I wrote my comment from the perspective that you weren't trying to make a point about Atlas with your post and were instead trying to ask a 'genuine' question. I am pretty into people being able to work out why people run their projects the way they do - and I think a good way of doing this is just asking them directly.

It seems like you are trying to sound like you are doing (2) but are actually doing something like (1), which means you don't really fulfil the 'truth-seeking' criterion (for me) that would have made me excited about your post.

calebp · 2mo

Actually, I do think it is reasonable to think of me as a funder. I have input on various grants and spend some time doing grant evaluation, though, as you pointed out, I also spend time on non-grant-evaluation tasks as part of my work.

calebp · 2mo

I think that you could have emailed the Atlas Fellowship at the email listed here with this question. I suspect this would have better achieved your intention of not causing controversy.

calebp · 2mo

If the money for EA Funds comes from donors who have the impression that the fund is allocated in a technocratic way, do you still think it is a reasonable compromise for EA Funds to become more democratic? It seems low-integrity for an entity to raise funding after communicating a fairly specific model for how the funding will be used and then change its mind and spend it on a different program (unless we have made it pretty clear upfront that we might run other programs).

If the suggestion is to start a new fund that does not use existing donations, that seems more reasonable to me, but then I don't think that EA Funds has a substantial advantage in doing this over other organisations with similarly competent staff.

calebp · 2mo

I think my issues with this response and with linking to that paper are better explained by this post from SSC ("Beware the Man of One Study"). To be clear, I think we can learn things from the sources you linked; my issue is with the (imo) overconfidence and the claims about what "the science" says.

calebp · 2mo

I find writing pretty hard and I imagine it was quite a task to compile all of these thoughts, so thanks for doing that.

I only read the very first section (on epistemic health), but I found it pretty confusing. I did try to find explanations in the rest of the epistemics section.

EA’s focus on epistemics is almost exclusively directed towards individualistic issues like minimising the impact of cognitive biases and cultivating a Scout Mindset. The movement strongly emphasises intelligence, both in general and especially that of particular “thought-leaders”. An epistemically healthy community seems to be created by acquiring maximally-rational, intelligent, and knowledgeable individuals, with social considerations given second place. Unfortunately, the science does not bear this out. The quality of an epistemic community does not boil down to the de-biasing and training of individuals;[3] more important factors appear to be the community’s composition, its socio-economic structure, and its cultural norms.[4]

The footnotes and sources that you linked to don't give me much evidence to update towards your position, and saying "the science says x" (at least to me) implies that there is some kind of consensus view within the literature, which I think you should be able to point to. This reads more like your hot takes than something you have thought about deeply. Superforecasting (footnote 4) does talk a bit about prediction markets, but much more of the book is focused on how a few people with certain traits can beat most people at forecasting, which I think runs counter to the point you are making, so it seems misleading to link to it as if it supports your view.

I think it can be fine to give hot takes, but I feel like the general vibe of the post was trying to persuade me rather than explain your view. Things that might have helped are focusing on a smaller set of points and trying to make a more rigorous case for them, or communicating that you are not very confident in many of the key points, if that is the case. I also felt like you were trying to claim that the 'science' supports your view, which, based on the sources you linked to, is really hard to verify.

I don't think everything you wrote was clearly incorrect, but in my view you made strong claims without demonstrating appropriate epistemic rigour.
