Joey

Co-founder @ Charity Entrepreneurship
Working (6-15 years of experience)
3837 karma · Queen's Park, London, UK · Joined Sep 2014

Bio

I want to make the biggest positive difference in the world that I can. My mission is to cause more effective charities to exist by connecting talented individuals with high-impact intervention opportunities. To achieve this, I co-founded Charity Entrepreneurship, which runs an extensive research process and incubation program.

Comments (169)

Launching a donor circle for mental health

Right now the door is pretty open. The projects we would consider are ones that can make a case for being highly impactful relative to other options in the space. I suspect projects with large funding gaps (e.g., people seeking over $500k) would be a worse fit.

Jobs at EA-organizations are overpaid, here is why

So I think this conversation might be more productive if we clarified some terminology and dove into the specifics. There are many different ways to set salaries:

  • Needs of the employee
  • Resources the organization has
  • Market rate including benefits (how desirable the job is; e.g., hedge funds are stressful, so they have to pay more to make up for that)
  • Amount for the employee to be psychologically content
  • Amount that creates the best incentives for the organization/EA movement
  • Market rate replacement (if someone left, what you’d have to pay to get someone equally talented)
  • Pure market-rate earnings (the highest salary available, not taking into account non-salary benefits; e.g., a hedge fund salary)
  • Value in impact to the organization

These different systems produce a dramatically wide spectrum of possible salaries, and there is a case for using basically any of them. Ballpark numbers might range from $40k to $400k depending on which system you use.

I think a lot of people are conflating parts of this conversation. There seem to be two central questions: 1) which of these systems (or weighted combination of systems) is best to use, and 2) pragmatically, what do these systems look like when cashed out?

For example, Josh’s comment is getting at question 1: maybe we should be using “pure market-rate earnings” or “value in impact to the organization” instead of “amount that creates the best incentives”.

Ryan’s comment, on the other hand, is basically that “the ideal incentives” might in fact correlate quite strongly with the resources the organization has.

I think splitting these out can make it easier to discuss each possibility.

Deference Culture in EA

Hey Stefan,

Thanks for the comment. I think this describes a pretty common view in EA that I want to push back against.

Let's start with the question of how valuable you have found practical criticism of EA. When I see posts like this or this, I see them as significantly higher value than those individuals deferring to large EA orgs. Moving to a more practical example: older, more experienced organizations and people actually recommended against starting many organizations (CE being one of them and FTX being another). These organizations’ actions and projects seem insanely high value relative to, for example, a chapter leader who basically follows the same script (a pattern I personally could have fallen into). Something often forgotten is the extremely high upside value of doing something outside the Overton window, even if it has a higher chance of failure. You could also take a hypothetical, historical perspective on this: if EA had deferred only to GiveWell, or only to more traditional philanthropic actors, how impactful would that have been?

Moving a bit more to the philosophical side, I do think you should put the same weight on your views as on those of other epistemic peers. However, there are some pretty huge ethical and meta-epistemic assumptions that a lot of people do not realize they are deferring to when going with what a large organization or experienced EA thinks. Most people feel pretty comfortable deferring based on expertise (e.g., “this doctor knows what a CAT scan looks like better than me”, or “GiveWell has considered the impact of malaria much more than me”), and those sorts of situations do lend themselves to higher deference. But questions like “how much ethical value do I ascribe to animals” or “what is my tradeoff of income to health” are 1) far less considered, and 2) much harder to resolve through deeper research. I see a lot of deference on this sort of thing, e.g., the assumption that GiveWell or GPI do not themselves carry pretty strong baseline ethical and epistemic assumptions.

I think the number of hours spent thinking about an issue is a somewhat useful factor to consider (among many others), but it is often used as a pretty strong proxy without regard to other factors, e.g., selection effects (GPI is going to hire people who come in with a specific set of viewpoints) or communication effects (e.g., I engaged considerably less with EA when I thought direct work was the most impactful thing than when I thought meta work was the most important thing). I have also seen many cases where people make big assumptions about how much consideration has in fact been put into a given topic relative to the hours spent on it; e.g., many people assume more careful, broad-based cause consideration has been done than really has been. When you have a more detailed view of what different EA organizations are working on, you see a different picture.

Why should I care about insects?

As someone who has been concerned about insects as an area for years, I think what stops the animal-focused people I speak to from engaging with insects as a cause area is not really scale or neglectedness. Many vegans do not eat honey, suggesting a concern for the bees creating it, and SWP (https://www.shrimpwelfareproject.org/) has gotten quite a lot of support from the animal movement. The issue is pretty directly tied to tractability and the concrete actions that can be taken. If the current interventions focused on insects are research-oriented, with unclear pathways for how insects do in fact get helped, that will be a blocking factor for many EA animal advocates. I think in many cases right now, people see insect welfare much like wild animal suffering: an interesting, high-scale area with no clear, significant actions that can be taken.

Demandingness and Time/Money Tradeoffs are Orthogonal

I quite like this idea, and many of the most frugal people I know also do a ton of these things. I think a bunch of them pretty clearly signal altruism. Interestingly, the things that make EA soft and cushy financially seem to cross-apply to non-financial areas as well; e.g., I am not sure the average EA is working more hours than they did 5 years ago, even with the increases in salary, PAs, and time-to-money tradeoffs.

I also agree there is a lot more that could be listed. I think "leave a fun and/or high-status job for an unpleasant and/or low-status one" hints at decisions that involve competing values. This is maybe the biggest way more dedicated EAs end up with really different impacts from less dedicated ones: whether someone works 5% more or takes 5% less salary may not be the biggest part of their impact, but it correlates (due to selection effects) with what they do when hard choices come up with impact on one side and personal benefit on the other. Such a person is more likely to pick impact, and this can lead to huge differences. E.g., the charity research I find most fun to do might have ~0 impact, whereas the research I think is highest impact might be considerably less fun but significantly more valuable.

We need more nuance regarding funding gaps

Indeed, this only considers nonprofit funding sources. I think the data would look quite different if for-profit options were also included.

We need more nuance regarding funding gaps

Keen to hear about any data on this topic. James is right that what matters is the number of ~EA funders with unique perspectives.

Should EA be explicitly long-termist or uncommitted?

"Organisations should be open about where they stand in relation to long-termism."

I agree strongly with this. One of the most common reasons I hear for people reacting negatively to EA is feeling tricked by self-described "cause-open" organizations that really just focus on a single issue (normally AI).

Democratising Risk - or how EA deals with critics

"Please don't criticize central figures in EA because it may lead to an inability to secure EA funding?" I have heard this multiple times from different sources in EA. 
