calebp

1106 · Cambridge, UK · Joined Oct 2018

Bio

I currently lead EA Funds.

Before that, I worked on improving epistemics in the EA community at CEA (as a contractor), as a research assistant at the Global Priorities Institute, on community building, and on global health policy.

Unless explicitly stated otherwise, opinions are my own, not my employer's.

You can give me positive and negative feedback here.

Comments
114

Topic Contributions
6

A quickly written model of epistemic health in EA groups I sometimes refer to

I think that many well-intentioned EA groups do a bad job of cultivating good epistemics. By this I roughly mean that the culture of the group does not differentially advantage truth-seeking discussions or other behaviours that help us figure out what is actually true, as opposed to what is convenient or feels nice.

I think that one of the main reasons for this is poor gatekeeping of EA spaces. Groups do more and more gatekeeping, but they are often not selecting on epistemics as hard as I think they should be. I’m going to say why I think this is and then gesture at some things that might improve the situation. I’m not going to talk at this time about why I think it’s important, but I do think it’s really, really important.

EA group leaders often exercise a decent amount of control over who gets to be part of their group (which I think is great). Unfortunately, it’s much easier to evaluate what conclusion a person has come to than how good their reasoning process was. So what a person says they think becomes the filter for who gets to be in the group, as opposed to how they think. Intuitively you’d expect a positive feedback loop where groups become epistemically worse and worse: people are incentivised to think a certain way to be part of the group, and future group leaders are drawn from a pool of people with bad epistemics and then reinforce this.

If my model is right, there are a few practical takeaways:
• Be really careful about who you make a group leader or allow to start a group (you can easily miss a lot of upside in a way that’s hard to undo later).
• Make it very clear that your EA group is a place for truth-seeking discussion, potentially at the expense of being welcoming or inclusive.
• Make rationality/epistemics a more core part of what your group values. I don’t know exactly how to do that, but I think a lot of it is making it clear that this is part of what your group is about.

I’m hoping to have some better takes on this later. I would strongly encourage the CEA groups team to think about this, along with other EA group leaders. I don’t think many people are working in this area, though I’d also be sad if people filled up the space with low-quality content, so think really hard about it and try to be careful about what you post.

Answer by calebp · Nov 08, 2022 · 31

Hey, I run EA Funds (so I feel well placed to answer your question).

We currently don’t use donor money to cover our operational costs so in some sense we are very efficient.

Of course, making a donation does create some costs for us (as our team needs to work out where to send your money), but our team is selected to try and make this decision well, so outsourcing the cognitive labour of working out where your money is best spent is plausibly worthwhile (and I expect economies of scale when doing this). To be clear, we would be very excited to receive your donation (and that would be my recommendation).

I also think that the performance of the funds is higher than that of the median org we fund, as we are pretty hits-based in our approach and we have access to more information than I expect our donors to have.

Thanks for catching this! I'll ask the team to debug it.

Congratulations on launching your new organisation!

When I read your post I realised that I was confused by a few things:

(A) It seems like you think that there hasn't been enough optimisation pressure going into the causes that EA is currently focussed on (and possibly that 'systematic research' is the only/best way to get sufficient levels of optimisation).
 

> EA’s three big causes (i.e. global health, animal welfare and AI risk) were not chosen by systematic research, but by historical happenstance (e.g. Peter Singer being a strong supporter of animal rights, or the Future of Humanity Institute influencing the early EA movement in Oxford).

I think this is probably wrong for a few reasons:
1. There are quite a few examples of people switching between cause areas (e.g. Holden, Will M, and Toby Ord moving from GHD to Longtermism). Also, organisations seem to have historically done a decent amount of pivoting (GiveWell -> GiveWell Labs/Open Phil, 80k spinning out ACE, ...).
 

2. Finding cause X has been a meme for a pretty long time, and I think looking for new causes/projects etc. has been pretty baked into EA since the start. I think we just haven't found better things because the things we currently have are very good according to some worldview.

3. My impression is that many EAs (particularly highly involved EAs) have done cause prioritisation themselves. Maybe not to the rigour that you would like, but many community members doing this work themselves, plus some aggregation from looking at what people end up doing, gives some data (although I agree it's not perfect). To some degree, cause exploration happens by default in EA.
 


(B) I am also a bit confused about why the goal (or proxy goal) is to find a cause every 3 years. Is it 3 rather than 1 or 6 due to resource constraints, or is this number mostly determined by some a priori sense of how many causes there 'should' be?

(C) Minor: You said that EA's big three cause areas are global health, animal welfare and AI risk. I am not sure what the natural way of carving up the cause area space is, but I'd guess that biosecurity should also be on this list. Maybe also something pointing at meta EA, depending on what you think of as a 'cause'.

 

Added a link to some instructions in the original comment:
https://www.simongrimm.com/fermi-poker-instructions/

This is a great list! Thanks for making it, I will definitely be sending this to a few people.

Some things to consider adding:
* Poker
* Fermi estimate poker (although I haven't played this before)
* Estimathon (requires some setup and I don't know of a good public explainer)

I think that having a decent database of Fermi estimate questions would be pretty great; maybe you could make a bounty for this? I have also found it pretty difficult to come up with enough 'good' questions for events/personal training. I really like that you made your list of lists public!

This would be awesome!

I could imagine some people not liking this, as it might make the forum a more intimidating place to post to. I imagine that the kind of person who raises this concern would have less of an issue with:
* people opting in by making the post a non-personal post
* people opting in by adding a "check-me" tag

Another mechanism could be for the forum team to pay out $25 bounties when people falsify claims (as a way to incentivise this kind of checking), and maybe take away some of the author's karma.

Re optimism bias

Towards the top of the post, I think you made a claim that EAs are often very optimistic (particularly agentic ones doing ambitious things or in ‘elitist’ positions).

I just wanted to flag that this isn’t my impression of many EAs who I think are doing ambitious projects; a disproportionate number of agentic people I know in EA are pretty pessimistic in general.

I think the optimism thing and something like desire to try hard/motivation/enthusiasm for projects are getting a bit confused here, but this is a low-confidence take.
