Joey

Co-founder @ Charity Entrepreneurship
Working (6-15 years of experience)
4130 · Queen's Park, London, UK · Joined Sep 2014

Bio

I want to make the biggest positive difference in the world that I can. My mission is to cause more effective charities to exist by connecting talented individuals with high-impact intervention opportunities. To that end, I co-founded the organisation Charity Entrepreneurship, which pursues this goal through an extensive research process and incubation program.

Comments (176)

Currently: We have a backend CEA (cost-effectiveness analysis) that evaluates the possible scenarios and impact outcomes for each of the charities. It starts out with pretty wide confidence intervals, but these tend to narrow as the charities get older (e.g., into their 2nd or 3rd year). We also write up more narrative reviews that go to a set of external advisors.

Long-term plan: We want to hire an external evaluation organization to evaluate every charity we found two years after its founding, and to use those numbers instead of internal ones.

Compared to other movements, it seems pretty good; relative to the ideal, we of course could do better. In general, I think encouraging more critical thinking and debate is likely a step in the right direction. Right now I think disagreements are sometimes handled a bit indirectly (e.g., I would love to see even more open cause-area debates instead of just funding outreach in one area and not another).

Our policy regarding salaries has not changed as much as those of other meta charities; leanness tends to attract a different sort of applicant. We have a range ($40k–$60k) but would consider applications from candidates who need more than that. In practice, we have often found that the most talented candidates are less concerned with salary and more concerned with other factors (impact of the role, culture, flexibility, etc.). We are a bit skeptical of the perception that higher salaries bring in more talent; instead of attracting new talent, we typically see the same EA people filling the roles, just at a higher cost.

This is in many ways the default path by which many NGOs grow. I think there are quite a few reasons why CE outperforms relative to it. Decentralization broadens the risk profile each charity is able to take, and smaller organizations move far, far quicker. I suspect the biggest factor, though, is not structural but social. The calibre of founders who apply to us is really strong relative to whom an organization like CE could hire as program directors, and due to the psychology of ownership, they work far more effectively on their own project than they would as employees of a larger organization.

I think something on the concept of Cause X, or on an area we think is a top contender that many EAs have not yet considered deeply (e.g., family planning). Even with the recent challenge prize on this, I think EA is way over-indexed on exploit vs. explore when it comes to cause areas.

I think there are a few things that fit into this category; how much deference there is in the EA space would be one. Another would be the relative importance of high-absorbency career paths. Some things we have not written about but that also fit would be how EA deals with spaces that have a low evidence base and weak feedback loops, or how little skepticism is applied to EA meta charities.

Answer by Joey · Sep 30, 2022

We try to keep a page with information (including room-for-funding numbers) for the organisations that get founded through Charity Entrepreneurship. Many of them are in a situation where marginal, small donors could make an impact.

Right now the door is pretty open. The projects we would consider are ones that can make a case for being highly impactful relative to other options in the space. I suspect projects with large funding gaps (e.g., people seeking over $500k) would be a less good fit.

So I think this conversation might be more productive if we clarified some terminology and dove into the specifics. There are a lot of different ways to set salaries in general:

  • Needs of the employee
  • Resources the organization has
  • Market rate including benefits (adjusted for how desirable the job is - e.g. hedge funds are stressful, so they need to pay more to make up for that)
  • Amount for the employee to be psychologically content
  • Amount that creates the best incentives for the organization/EA movement
  • Market rate replacement (if someone left, what you’d have to pay to get someone equally talented)
  • Pure market-rate earnings (the highest-salary job available, not taking into account non-salary benefits - e.g. a hedge fund salary)
  • Value in impact to the organization

These varying methods yield a dramatically wide spectrum of possible salaries, and there is a case for using basically any of them. Ballpark numbers might range from $40k to $400k depending on which system you use.

I think a lot of people are conflating things a bit; there seem to be two central questions: 1) which of the systems (or which index of them) is best to use, and 2) pragmatically, what do these systems look like when cashed out?

For example, Josh’s comment is getting at number 1: maybe we should be using “pure market-rate earnings” or “value in impact to the organization” instead of “amount that creates the best incentives”.

Ryan’s comment, on the other hand, is basically that “the ideal incentives” might in fact correlate quite a lot with the resources the organization has.

I think splitting these out can make it easier to discuss each possibility.

Hey Stefan,

Thanks for the comment; I think this describes a pretty common view in EA that I want to push back against.

Let's start with the question of how valuable you have found practical criticism of EA. When I see posts like this or this, I see them as significantly higher value than if those individuals had deferred to large EA orgs. Moving to a more practical example: older, more experienced organizations and people actually recommended against founding many organizations (CE being one of them and FTX being another). Those organizations’ actions and projects seem insanely high value relative to the alternatives, for example, a chapter leader who basically follows the same script (a pattern I personally could easily have fallen into). I think something that is often forgotten is the extremely high upside value of doing something outside the Overton window, even if it has a higher chance of failure. You could also take a hypothetical, historical perspective on this; e.g., if EA had deferred only to GiveWell, or only to more traditional philanthropic actors, how impactful would that have been?

Moving a bit more to the philosophical side, I do think you should put the same weight on your own views as on those of other epistemic peers. However, I think there are some pretty huge ethical and meta-epistemic assumptions that a lot of people do not realize they are deferring to when going with what a large organization or experienced EA thinks. Most people feel pretty comfortable deferring on the basis of expertise (e.g., “this doctor knows what a CAT scan looks like better than I do”, or “GiveWell has considered the impact effects of malaria much more than I have”), and I think those sorts of situations do lend themselves to higher deference. But questions like “how much ethical value do I ascribe to animals” or “what is my tradeoff of income to health” are 1) way less considered, and 2) much harder to gain clarity on through deeper research. I see a lot of deferral on exactly this sort of thing, e.g., the assumption that GiveWell or GPI do not have pretty strong baseline ethical and epistemic assumptions built in.

I think the number of hours spent thinking about an issue is a somewhat useful factor to consider (among many others), but it is often used as a pretty strong proxy without regard to other factors, e.g., selection effects (GPI is going to hire people who come in with a specific set of viewpoints) or communication effects (e.g., I engaged considerably less in EA when I thought direct work was the most impactful thing than when I thought meta was the most important thing). I have also seen many cases where people make big assumptions about how much consideration has in fact been put into a given topic relative to its hours (e.g., many people assume more careful, broad-based cause consideration has been done than really has been). When you have a more detailed view of what different EA organizations are working on, you see a different picture.
