All of Joey's Comments + Replies

Joey · 2d

First a meta note less directly connected to the response:

Our funding circles fund a lot of different groups, and there is no joint pot, so it's closer to a moderated discussion about a given cause area than CE/AIM making granting calls. We are not looking for people to donate to us or our charities, and as far as I understand, OpenPhil and AWF do not have a participatory way to get involved other than just donating to their joint pot directly. This request is more aimed at people who want to put in significant personal time to making decisions independent... (read more)

Jason · 15h
Do you think there are additional steps you could/should take to make this philosophy / these limitations clearer to would-be secondary users who come across your reports? I strongly support more transparency and more release of materials (including less polished work product), but I think it is essential that the would-be secondary user is well aware of the limitations. This could include (e.g.) noting the amount of time spent on the report, the intended audience and use case for the report, the amount of reliance you intend that audience to place on the report, any additional research you expect that intended audience to undertake before relying on the report, and the presence of any significant issues / weaknesses that may be of particular concern to either the intended audience or anticipated secondary users. If you specifically do not intend to correct any errors discovered after a certain time (e.g., after the idea was used or removed from recommended options), it would probably be good to state that as well.
  1. It would be helpful if you engaged with the plagiarism claims, because it is concerning that CE is running researcher training programs while failing to handle that well. I agree with the rest of what you say here as being tricky, but I think it is pretty bad that you publish the low-confidence research publicly, and it's led to confusion in the animal space.
  2. + 2.5 - I think if your ordering is significantly different, it's probably fairly different from most people in the space, so that's somewhat surprising/an indicator that lots of feedback isn't reac
... (read more)

Hey EA Sweden team, really interesting post. Quick question: is there a link to your full budget somewhere? I am a bit unsure if $65k is like 50% or 5% of your current spend, and it's pretty hard to get a sense of cost effectiveness without knowing what the total expected spend is.

Emil Wasteson · 4d
Hi Joey! Our total budget for the year is $355k, so the mentioned funding gap constitutes 18% of the total budget. A rough breakdown of the expected costs:
* Staff: 60%
* Office & co-working space: 15%
* Projects and community events: 10%
* Operational and administrative costs: 12.5%
* Other costs: 2.5%
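As a quick sanity check on these figures (a sketch; the $355k total, the $65k gap, and the percentage breakdown are taken from the reply above):

```python
# Sanity-check the quoted figures: a $65k funding gap against a
# $355k total budget, and the cost breakdown summing to 100%.
total_budget = 355_000
funding_gap = 65_000

gap_share = funding_gap / total_budget * 100  # share of budget, in percent

breakdown = {
    "Staff": 60.0,
    "Office & co-working space": 15.0,
    "Projects and community events": 10.0,
    "Operational and administrative costs": 12.5,
    "Other costs": 2.5,
}

print(f"Funding gap: {gap_share:.1f}% of total budget")
print(f"Breakdown total: {sum(breakdown.values()):.1f}%")
```

65/355 comes out at roughly 18.3%, matching the stated 18%, and the listed categories sum to 100%.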

I have a couple of guesses:

Every year, we generally become more well-known, particularly within the communities we consistently reach out to (e.g., EA, animal welfare, etc.). This creates natural momentum and credibility within those communities.

Our previous outreach efforts build up the applicant pool for the current year (e.g., someone who heard about us from a talk two years ago might only apply today).

We have done a lot more active outreach to non-EA communities. I think these communities are particularly affected by the visible success of our graduate... (read more)

Thanks! 

To me, one of the major stories here is that you have managed to break free of an important limiting factor, at least for the present. That is worth celebrating and learning from.

To quickly chip in with some data I have: people were pretty happy with the program, scoring it 4.45/5. About half of them received a job offer, placement, or internship following the program, most of which were facilitated or recommended by us. We have not done a full counterfactual estimate yet, though, as I do think talented people often get offers anyway, even without the extra skills/credentials that the program provides. So, it might be counterfactually closer to 33%.

Joey · 3mo

Hey Silas, really glad you wrote this up. I also recently donated bone marrow (after donating blood many times and being a bit torn on kidney donations). My experience was equally positive and probably even easier logistically (from London, UK).

Some hard-nosed calculations for those who might be interested (that I will write up in a full post one day): I lost about 1 full day of work and would expect the average person to lose between 1-3 days of work if they wanted to lose as few workdays as possible. My best estimate is this saved between 4-12 years of l... (read more)

Silas Strawn · 3mo
Hey, glad to hear you had a good experience with it as well. Your post really resonated with me - I certainly feel more inclined to donate in other ways since I donated PBSC. Most notably, it pushed me more towards kidney donation, although I'm still on the fence about it too. 
Joey · 3mo

Just wanted to chip in that I am quite positive about this choice and the direction that CEA could go in under Zach's leadership. I have found Zach to be thoughtful about a range of EA topics and to hold to core principles in a way that is both rare and highly valuable for the leader of a meta organization.

Do you know anything about the strategic vision that Zach has for CEA? Or is this just meant to be a positive endorsement of Zach's character/judgment? 

(Both are useful; just want to make sure that the distinction between them is clear). 

Joey · 3mo

I think the model I would suggest is indeed close to what Joel is saying - a try-it-out system, as opposed to guessing a priori how you will be affected by things. More specifically, track your work hours/productivity (if you think that is where the bulk of your impact is coming from) and see if, for example, donating blood on the weekend affects them negatively, positively, or not at all. I think that my output has gotten higher over time, in part due to pretty active testing and higher amounts of measurement. - Related post

Joey · 3mo

I do tend to think that most people's limiting factor is energy instead of time. E.g. it is rare to see someone work until they literally run out of hours on a project vs. needing a break due to feeling tired. Even for people working 12 hours a day, I still expect they run out of energy before time, at least long term. I would not typically see emotional energy as my limiting factor, but I do think it's basically always energy (a variable typically positively affected by altruism in other areas) vs. time or money (typically negatively affected).

"energy (a variable typically positively affected by altruism in other areas)"

This assumption seems totally out of left field to me. I agree altruism can increase energy, but in many other cases it uses it up. 

In most cases, the same or a minor decrease.

I echo this view and think it's really exciting. I expect many people in the meta-funding space will be positive about this idea. However, I also anticipate that many of the donors will need to see a round or two of this idea executed and observe the resulting grants before donating to the fund.

Sjir Hoeijmakers · 4mo
Agreed (though personally I might be willing to make a bet if e.g. fund manager selection is done well)

As shown in this table, 0% of CE staff (including me) identify AI as their top cause area. I think across the team people's reasons are varied but cluster around something close to epistemic scepticism. My personal perspective is also in line with that.

ElliotJDavies · 6mo
I really want to get to the bottom of this, because it seems like the dominant consideration here (i.e. the crux). Not a top cause area ≠ not important. At the risk of being too direct: do you, as an individual, believe AI safety is an important cause area for EAs to be working on?
Answer by Joey · Oct 04, 2023

Hey Yanni!

Quick response from CE here as we have some insight on this: 

a) CE is not funding-limited and does not think AI is an area we will work on in the future, regardless of available funding in the space (we have been offered funding for this many times in the past). You can see a little bit about our cause prioritization here and here.

b) There are tons of organizations that aim or have aimed to do this, including Rethink Priorities, Impact Academy, the Center for Effective Altruism, and the Longtermist Entrepreneur... (read more)

You've given lots of reasons here, and cited posts which also give several reasons. However, I feel like this hasn't stated the real & genuine crux - which is that you are sceptical that AI safety is an important area to work on. 

Would you agree this is a fair summary of your perspective? 

DC · 6mo
I'm reminded that I'm two years late on leaving an excoriating comment on the Longtermist Entrepreneurship Project postmortem. I have never been as angry at a post on here as I was at that one. I don't even know where to begin.
yanni · 6mo
Hey Joey - this is an extremely helpful response. Thanks for making the effort! 

About 75% of seed project proposals get funded at the amount they ask for. That part is not known until after the incubation process. The typical seed grants are between $100k-$200k. I do not expect a great proposal to be stopped by a $25k higher budget. I think entrepreneurship is a higher-risk career path, one that is probably not suited for the majority of people. CE is already extremely de-risked relative to equivalents in the for-profit and incubated nonprofit space, to the point where I think the founding step is not the highest-risk part of founding a charity (having an impact 3 years down the line is).

It might be helpful to add some useful reference classes here, as I think it's often forgotten how unusual EA salaries are relative to other fields.

World GDP per capita: £11,000
London living wage: £21,800
Median UK full-time salary: £26,800
Average UK nonprofit salary: £31,700
Average London salary: £39,000
Average London nonprofit salary: £39,600
Average CE employee salary: £39,300
Entry-level EA job: £48,000
Average EA job: £80,000

Stan Pinsent · 6mo
Your source on median UK salary says "median annual pay for full-time employees was £33,000 for the tax year ending on 5 April 2022". Since then we've seen record wage increases: in the 18 months since April 2022, annual wage growth has been between 5% and 8.5%. I'm not sure if we can simply apply average wage growth to the national median wage, but it seems likely that the UK median full-time wage is now in the mid-30,000s. This still supports the broader point about EA salaries inhabiting a different world.
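A rough sketch of that projection (an illustration only; it assumes the 5% and 8.5% annual growth rates compound over the 18-month gap, and that median-wage growth matched average wage growth, which the comment itself flags as uncertain):

```python
# Project the April-2022 UK median full-time salary (£33,000) forward
# 18 months (1.5 years) under annualised wage growth of 5% and 8.5%.
base_median = 33_000
years = 1.5

low = base_median * 1.05 ** years    # 5% annual growth scenario
high = base_median * 1.085 ** years  # 8.5% annual growth scenario

print(f"Projected median: £{low:,.0f} to £{high:,.0f}")
```

Both scenarios land in the mid-to-high £30,000s, consistent with the comment's estimate.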
Jason · 8mo
Those need to be adjusted for COL though (including that someone in the UK benefits from a modern social welfare state in the way that someone in the US does not). That is not saying that the US candidate should expect a better standard of living, only that the UK standard of living simply costs more in the US. I don't think the US is an edge case given that IIRC 30 to 35 percent of EAs on the survey live here. I'm OK with the possibility that being a CE incubatee may not be realistic for certain people in the US, but that would still be sad and should be openly discussed.

For a candidate based in the US, it's not clear why mostly UK-based reference classes are the best choice. On mobile right now, but a quick look suggests the median salary in the US is about 55K USD and 80K in Washington DC (somewhat analogous to London). I do not think US candidates need as much of an uplift from UK as those numbers might suggest, but they are a sanity check for my view that UK reference ranges don't apply well to US candidates.

I agree that EA jobs aren't really an appropriate reference class for my concern. The reference class I had in the back of my head for my comment was the minimum wage for the non-Bay city in which I live, augmented for the near-universal understanding that the legal minimum wage isn't a livable wage. That put me at 30K legal minimum plus 10K uplift = 40K livable wage for most, with recognition for special needs. It will also need an uplift for health insurance for many candidates. So I would judge a range of 40-60K generally viable for basic living expenses for a candidate where I live. Not so much 25-45K.

Thank you for posting this, Joey. I think people too often talk about things like this in the abstract, not knowing the realities of the market. Two considerations I have when considering salary:

  1. Counterfactuals: Money, at the end of the day, is a limited resource. When considering the counterfactuals, I personally would feel unethical accepting a super high salary from CE, as I think the counterfactual of this money would be, e.g., one less high-impact organization being founded. In the early days, some of our charities started with just $25K-$50K grants;
... (read more)

"My default interpretation is that someone doesn't value the role or my work very much." 

I think this is a pretty unfortunate norm that some EAs have. In practice, it results in EAs by far prioritizing the best-funded areas instead of the most impactful ones. I think the reality is that offered salaries have far more to do with funding availability and perceived counterfactuals of funding. At the end of the day, AMF can absorb more money, and thus there is a higher bar for spending in global health than there is in areas without clear benchmarks.

Elizabeth · 8mo
Can I ask how you settled on the salary range you did? I realize that applicants have some choice in the matter, but CE clearly has a lot of power with norm-setting such that I think it's appropriate to ask.  Because I agree that it's bad to overweight funders' opinions (which is why I self-fund projects I care about), or to punish work with more measurable results. But CE has an opportunity, perhaps unique within EA, to set norms that value this work. Especially when highly-vetted people are using interventions you also vetted, which removes a lot of reasons to expect people to self-fund an experimental stage.

Salary questions and discussions always happen well before someone goes through the program (typically during the interviews or soon after an invitation is offered). Ultimately, the co-founders select how much they ask for, and many have asked for considerably higher amounts.

Jason · 8mo
How reliably are those asks met? If someone needs (e.g.) a 25K uplift for childcare costs, when do they learn if that's actually in the cards? My understanding was that funding allocations were locked in fairly late in the program, but I could be mistaken. Even if the candidate exits prior to starting the program, they may have invested significant time, energy, and emotion into the process. I definitely understand the realities of reliance on seed funding, and the fact that some uncertainty and opacity is unavoidable as a result. It remains unfortunate in my view. [For reference, 25K is about what full-time childcare costs for a young child where I live. Some people are single parents, and many have more than one child, so I didn't think it an unreasonable test case.]

I think this is a pretty simple and incorrect model. Job desirability is considered based on many traits (salary being one, but far from the only one), with different individuals weighing those traits at different levels of importance. If salary is the most important factor for a job, CE will basically never compete given the talent requirements; if autonomy or impact is the most important trait, it will compete even at very low levels of salary.

I am pretty skeptical that this would be the best way to increase diversity in EA per dollar. I talk to quite a number of incubation programs, both inside and outside of EA, and for most forms of diversity, I do not think that a low salary is the top barrier. I think that for age diversity, there is a case for this, but for country-level diversity, it might pull in the opposite direction. My soft sense is that both CE staff and the CE cohort are unusually diverse relative to the EA movement as a whole and similar EA incubation programs, despite having low salaries.

Also, a factual update: I think most numbers right now for founders are more in the 40-60k range.

Jason · 8mo
Yeah, I think it depends on what aspect(s) of diversity you're looking to target. My theory of change was based on the view that within-country diversity of EA is rather important, especially in the US and UK which are likely to remain important hubs for intellectual leadership and financing for at least the medium run.

I think it holds up. I wrote a highly upvoted post on organisations being transparent about their scope one month ago due to similar concerns.

So far I have not seen this work well for many projects, but I'm open to the idea in theory.

Nathan Young · 9mo
I guess in theory RP, Founders Pledge, Schmidt Futures, Open Philanthropy, the CE Seed Network, and current donors would have to agree on how they split their impact. If that were possible, can you think of any bottlenecks to CE selling impact on, say, Manifund?

2021 figures are outdated when discussing budgets, and blog posts cover projects we are not currently working on. With regards to the budget, I believe this is the legal minimum that has to be made public, though much of the data is combined within EVF, making it harder to pull out specific details. I think the budget is inherently tied to the scope, as it's challenging to truly understand where an organisation is allocating its resources without this kind of basic information. For instance, if an organisation spends a large percentage of its resources on a certain area, any cause preference in that area would have a much greater impact on the overall scope of the organisation.

"I think beyond this level of transparency, CEA is probably hitting rapidly diminishing returns or taking attention away from more important topics." I would be surprised if this was the case as some pretty basic stuff is missing, e.g. I could not find a recent public budget for CEA. 

Oscar Howie · 4mo
Circling back to say we recently published CEA's 2023 budget. Note that we project spending to be substantially under budget.
[anonymous] · 9mo
In two minutes I found income/expenditure for the US and UK entities in 2021 and I think they've been a bit busy recently with other priorities. Edit: Also I thought we were discussing scope.

Hey Rob,

I wonder if filling out something like the template I laid out in this post could allow transparency without disclosing confidential details for the CEA group's team. 

Hey Anon, indeed, the categorisation is not aimed at the target audience. It’s more aimed at the number and requires specific ethical and epistemic assumptions. I think another way to dive into things would be to consider how broad vs. narrow a given suggested career trajectory is, as something like CE or Effective Altruism might be broad cause area-wise but narrow in terms of career category.

However, even in this sort of case, I think there is a way to frame things into a more answer vs. question-based framework. For example, one might ask something like:... (read more)

What concrete actions might this suggest?

I think the most salient two are connected to the other two posts I made. I think organizations should have a transparent scope, especially those whose current focus might surprise people, and they should not use polarizing techniques. I think there are tons of further steps that could be taken; a conference for EA global health and development seems like a pretty obvious example of something that is missing in EA.

Hey, I think this is a pretty tricky thing to contemplate, partly due to organizations not being as transparent about their scope as would be ideal. However, I will try to describe why I view this as a pretty large difference. I will keep using 80k as an example.

1) Tier-based vs. prioritized order

So indeed, although both organizations list a number of cause areas, I think the way CE does it is more in tiers, e.g., there is not a suggested ranking that would encourage someone to lean towards health and development policy over family planning. On the other han... (read more)

Indeed, I think those point in the right direction, and this post by 80k stands out as one of the clearest examples of things I would like to see more of. For example, you can gather from this 80k post that ~20% of effort goes to all areas outside of Xrisk/EA meta, and I think this would be quite surprising for many people in the EA community. However, I still think this information is not well-known or internalised by the broader community.

NickLaing · 9mo
Wow, only 20%! That really surprises me, and actually makes me a little nervous about directing public health-bent people there like I do at the moment - I should look into it more...

Your ordering of who has struggled with this in the past matches my sense, although I would add that I think it's particularly important for community building, meta, and EA leadership organizations. These are both the least clearly defined in what they do and have the most engagement with a counterfactually sensitive community.

Joey · 9mo

On CEA, I think a chart like the one I outlined in the post would be super useful, as I pretty constantly hear very different perspectives on what CEA does or does not do. I understand this might change with, e.g., a new ED, but even understanding 'what CEA was like in 2022' I think could provide a lot of value.

This is close to what I am saying, but I might phrase it stronger. For example, a large donor may consistently be a potential fit for your field, but I still believe it's important to be considerate about how far you push them. Similarly, a highly talented individual might require more than just signposting; they also should not be perceived as second-class or unintelligent for having a different viewpoint.

Short responses here:

Why are more people a fit for for-profit? I think for-profits require many of the same skills but far less focus on impact (even if the founder of that for-profit aims to donate their earnings). I think the M/E and fundraising requirements of an NGO are harder than the equivalent in the for-profit space.

Why 20%? It's not a deeply considered number. I can easily imagine it being 10%, although the stats I saw suggested the current population was somewhere between those two numbers. I think one could argue both that too many people are do... (read more)

Vincent van der Holst · 10mo
I agree M/E and fundraising are harder for NGOs, or even impact startups with concessionary returns; I've seen that a lot. I'm not completely sure how that translates to more (difficult to get) skills being needed by NGO entrepreneurs than for-profit entrepreneurs. There are certainly skills an NGO entrepreneur needs that for-profit entrepreneurs don't, but I think that goes the other way too.

Thank you for the view on the 20%. I guess it depends on what you think "suits" means. To me that's people who enjoy being a founder and have higher than average odds of being a successful founder, and with that in mind I think the number is closer to 5% than 20%.

I'm very happy CE exists, because it enables people who care to start NGOs that solve problems they think are important. I think there are a lot of potential founders who want to start NGOs or impact-first businesses, but don't, because the organizations, people, information, and infrastructure out there for them lacks a lot compared to what exists for those who want to start a for-profit.

Indeed, I think having a separate non-GiveWell-run global health fund would be really great. Most of the members of our seed network donate between $10k and $100k, so that is also an option for folks in that range. But I do wish there were more cause-specific ones.

We have a talk on the key traits of great charity founders here; right now, our vetting system is about 0.7 correlated with charity outcomes two years later.

Hey Spencer, thanks! I'm glad the post was helpful. I think these sorts of questions are both common and pretty universal, so I'm happy to be encouraged to write up a bit more description of how CE thinks about it.

Not a ton of writing I love on the topic, but this book is one of the better ones I have read on it: https://www.amazon.com/Talent-Identify-Energizers-Creatives-Winners/dp/1250275814. We will also be publishing our foundation handbook in approximately 3 months, and that has a pretty large section on vetting.

SebastianSchmidt · 9mo
Thanks a lot for this. I eagerly read it last year and found several valuable takeaways. Looking forward to reading the foundation handbook! Just inserting a high-level description for other readers: I expected that their perspective would be too rigid (e.g., overly reliant on rigorous research on average effects and generalizing too strongly), cynical (as opposed to humanistic and altruistic), and overly focused on intelligence. Fortunately, my expectations were off. In fact, they were highly nuanced (emphasizing the importance of judgment and context), considerate (e.g., devoting a full chapter to women and minorities), and deemphasized intelligence (taking a multiplicative model of success - although, to be clear, they still claim that intelligence is very important). That said, their theory of change is quite distinct from ours (e.g., innovation and creativity are emphasized substantially more than morality and doing the most good). I also appreciated their discussion of the evidence around intelligence, role models, and talent search in sports.

Yep, increasing this pool is a top priority, particularly outreach outside of the EA movement.

I think the entire ecosystem is important, but my sense is that mid-stage funding is lacking most right now. I feel quite confident about seed funding, and have mixed confidence on late-stage funding, depending on the cause area.

Hey Vasco,

Love the post; I think it is super valuable to have these sorts of important conversations, directly thinking about cross-cause comparison. It’s worth noting that CE does consider cross-cause effects in all the interventions we consider/recommend, including possible animal effects and WAS effects. Despite this, CE does not come to the same conclusion as this post; here are a couple of notes on why:

Strength of evidence discounting: CEAs are not all equal when they are based on very different strengths of evidence, and I think we weight this factor... (read more)

Vasco Grilo · 10mo
Hi Joey,

Thank you so much for taking the time to explain your reasons in great detail! I broadly agree with all the points you make.

Could you elaborate on how CE does this? Among the 9 CE health reports of 2023, I only found 3 instances of the word "animal": here (emphasis mine), here (emphasis mine), and here (emphasis mine). Only the 1st of these refers to animal welfare, and it has very little detail.

Saulius commented that (emphasis mine): So cost-effectiveness used to be higher, but Saulius' updated estimate of 65 years of chicken life per dollar is 4.33 (= 65/15) times as high as the one I used in my BOTEC. If the 2019-2020 average cost-effectiveness is also about 4.33 times as high as the current marginal cost-effectiveness, my BOTEC will not be too far off. I did not easily find estimates for the marginal cost-effectiveness. Kieran Greig (from RP) surveyed groups working on corporate campaigns globally, and told me roughly 1 year ago that: Are there any quantitative analyses of the marginal cost-effectiveness?

Great point! It crossed my mind, but I ended up not including it. I agree this tends to be the case, but I am not sure how much. For example, I have the impression RP's median welfare ranges are higher than what most people expected a priori. In general, it seems hard to know how much to adjust estimates, and I guess it would be better to invest more resources (at the margin) into decreasing our uncertainty.

1. ^ Further details are confidential: "I apologize that I can't share too much specifically as I promised organizations that those results would be confidential".
Joey · 1y

We have thought about this, but we are not confident that weaker charities would not crowd out stronger ones with funders and thus lead to less overall impact.

Joey · 1y

I think tautological measurement is a real concern for basically every meta charity, although I'm not sure I agree with your solution. I think the better solution is external evaluation, someone like GiveWell or Founders Pledge who does not have any reason to value CE charities. Typically, these organizations do their own independent research and compare it across their current portfolio of projects. If CE can, for example, fairly consistently incubate charities that GW/FP/etc. rank as best in the world, I think that is at least not organizationally tautol... (read more)

Grayden · 1y
Thanks, Joey. Really appreciate you taking the time to engage on these questions. To be clear, I'm not seriously suggesting ignoring all research from before the decision. I'm just saying that mathematically, an independent test needs its backtest data to exclude all calibration data. It strikes me that there are broadly 3 buckets of risk / potential failure:
  1. Execution risk - this is significant and you can only find out by trying, but you only really know if you're being successful with the left-hand side of the theory of change
  2. Logic risk - having an external organisation take a completely fresh view should solve most of this
  3. Evidence risk - even with an external organisation marking your homework, they are still probably drawing on the same pool of research, and that might suffer from survivorship bias

Hey Nescio,

Sadly, my circumstances have changed such that this was no longer possible without significant work-productivity trade-offs. Specifically, I moved to London, UK (due to work) and have only intermittently been living with a partner. I now live on between £20k and £30k, depending on the year. I still have the view that a higher salary would not significantly increase my productivity beyond that and have, if anything, more concerns about the current spending habits of EA for reasons described pretty well here.

nescio · 1y
Thanks for your response! I'm glad I found your post – I had been thinking about this topic in very similar ways for a while but hadn't seen anyone else discuss it this way (or attempt it). I do think that living on world GDP per capita is potentially an underestimate of how much one's share should be when living in a very expensive place (e.g. London). Clearly everyone should have a right to housing, and housing in London isn't that much more expensive to build than anywhere else (?), but world GDP per capita can't cover that. I think that adjusting one's yearly budget from that ideal point to compensate for one's area's cost of living seems reasonable. Another, more lax, approach is to aim to live on the 30th-50th percentile income of the city/country one lives in: plenty of people do it, thus it must be possible.

Hey Vlad,

I would definitely expect some of those 1000 ideas to have been researched by Open Philanthropy or Rethink; a long list like that would include both researched and un-researched areas. I think new nonprofits often come at things with a different angle, e.g., ways of weighting evidence, or tweaks in ethical views or baseline assumptions. For example, GiveWell is both highly well-run and huge, but they would not come to the same considerations that HLI has come to by looking at subjective well-being. I think the same thing will happen with CEARCH; there are lots of areas that might be missed by other actors but that would be picked up by a more systematic search done at a lower level of depth per area.

Joey · 1y

Currently: We have a backend CEA that evaluates the possible scenarios and impact outcomes for each of the charities. It starts out with pretty wide confidence intervals but tends to narrow as the charities get older (e.g., 2nd or 3rd year). We also write up more narrative reviews that go to a set of external advisors.

Long-term plan: We want to hire an external evaluation organization to evaluate every charity we found two years after founding, and use those numbers instead of internal ones.

Compared to other movements it seems pretty good; relative to the ideal, we of course could do better. In general, I think encouraging more critical thinking and debate is likely a step in the right direction.  Right now I think disagreements can be handled a bit indirectly (e.g., I would love to see even more open cause area debates instead of just funding of outreach in one area and not another).

Joey · 1y

Our policy regarding salaries has not changed as much as other meta charities; leanness tends to attract a different sort of applicant. We have a range ($40-$60k) but would consider applications from candidates who need higher than that range. In practice, we have often found the most talented candidates are less concerned with salary and more concerned about other factors (impact of the role, culture, flexibility, etc.). We are a bit skeptical about the perception that talent increases from offering higher salaries (instead of attracting new talent, we typically see the same EA people getting job roles but just for a higher cost). 

Joey · 1y

This in many ways is the default path for how many NGOs grow. I think there are quite a few reasons why CE overperforms relative to this. Decentralization broadens the risk profile that each charity is able to take, and smaller organizations move far, far quicker. I suspect the biggest factor, though, is not structural but social. The calibre of founders we get applying is really strong relative to what an organization like CE would get hiring program directors. Due to the psychology of ownership, they work far more effectively on their project than they would as employees of a larger organization.

Joey · 1y

I think something talking about the concept of cause X, or an area we think is a top contender that many EAs have not yet considered deeply (e.g., family planning). Even with the recent challenge prize on this, I think EA is way over-indexed on exploit vs. explore when it comes to cause areas.

Nathan Young · 1y
Reading this, I was surprised by the size of your causes. I've always thought of cause X as being something the size of X-risks or global dev. Maybe I was wrong there.
Joey · 1y

I think there are a few things that fit into this category; how much deference there is in the EA space would be one. Another would be the relative importance of high-absorbency career paths. Some things we have not written about but that also fit would be how EA deals with low evidence base/feedback loop spaces, or how little skepticism is applied to EA meta charities.

Answer by Joey · Sep 30, 2022

We try to keep a page with information (including room for funding numbers) for the organisations that get founded through Charity Entrepreneurship. Many of them are in a situation where marginal, small donors could make an impact.
