I'd estimate around 10%.
Thank you for the detailed reply. That seems surprisingly few; I hope more apply.
Also really glad to hear that OP may fund some of the MATS scholars, as the original post mentioned that "some of [the unusual funding constraint] is caused by a large number of participants of the SERI MATS program applying for funding to continue the research they started during the program, and those applications are both highly time-sensitive and of higher-than-usual quality".
Thank you again for taking the time to reply given the extreme capacity constraints.
Hi Miranda, and apologies for writing semi-anonymously.
That was very helpful and wonderful to hear!
I am also very glad to hear that you keep your grant decisions independent of your outreach and fundraising plans, despite potential strong misaligned incentives, and I am relieved and immensely grateful that I was wrong to suspect otherwise.
I'm confused as to why GiveWell is not filling the current funding gap of FEM's program, given that you estimate it to be more cost-effective than the programs you are currently funding, which are 13x cash. I imagine that's because you're less uncertain about the estimates for those programs?
[I'm also curious as to why someone disagree-voted the above, if the voter is reading this I would find an explanation helpful.]
I strongly agree that the reasoning "top-of-funnel community building is not worth the effort, as EA roles often get 100+ applicants" is misguided. But I think the argument that many applicants are "poor applicants" because they get rejected more often matters much less than other reasons.
Here are 3 reasons that I think are much more relevant:
Less relevant to your main point, but I strongly want to urge readers against "poor applicants get rejected often"-style reasoning. I see it very often in this community and I think it's greatly overrated. Some relevant links and thoughts:
This is already super long, but I also want to quickly note that 7.3% of EA survey respondents are from one city, which suggests there might be lots of opportunities for top-of-funnel community building.
Other examples in this great comment of yours.
What fraction of the best projects that you currently can't fund has applied for funding from Open Philanthropy directly? Reading this, it seems that many would qualify.
Why doesn't Open Philanthropy fund these hyper-promising projects if, as one grantmaker writes, they are "among the best historical grant opportunities in the time that I have been active as a grantmaker"? Open Philanthropy writes that LTFF "supported projects we often thought seemed valuable but didn’t encounter ourselves." But since the chair of the LTFF is now a Senior Program Associate at Open Philanthropy, I assume that this does not apply to existing funding opportunities.
I think some of these numbers are way off, or at the very least misleading. For example, in your sources you use the budget of Effective Ventures to estimate the budget of CEA, but Effective Ventures includes ~10 public projects (https://ev.org/organisations/) and some less public ones, like Wytham Abbey (https://www.wythamabbey.org/), that they don't mention on the website.
I think it's pretty bad to publish unreliable numbers about organizations without checking with them first.
Edit: the post has been edited to remove references to Effective Ventures, but it still cites https://time.com/6204627/effective-altruism-longtermism-william-macaskill-interview/ as the source for the $30M claim, and that article dates from when "CEA" was still the name of the umbrella organization for all the projects.
The shaded area spans the gap between the two data points, meaning one data point was measured before and one after the intervention period.
I think you're reading the graph wrong. There are two data points, one with data collection "Dec 2020 - Feb 2021" (before the intervention) and one with data collection "Dec 2021 - Jan 2022" (after the intervention).[1] There are no in-between measurements that would give you a "slope" from Feb '21 to the start of the intervention.
See page 9 of the report: "Tables: Contraceptive prevalence and unmet need"
Family Empowerment Media seems more cost-effective than GiveWell's top charities. They look especially promising from a cluster thinking approach.
If you agree, you can donate.
I share your concern, but I think it would be more than offset by the competition with EAGx. My understanding from the dashboard, and from personal experience, is that EAGx has roughly the same value as EAG for first-timers (and many/most non-first-timers) while being ~3x more cost-efficient. (If you went to both, would you rather pay $2k to go to an EAG or $600 to go to an EAGx as a first-timer?)
I think this is also good because EAGx events are mostly organized by different community members and national-group staff, so different groups could try different strategies and we could see what works best (e.g. minimize printing, have the many bored, idle volunteers record talks with their phones instead of paying thousands per recorded talk, ...)
It might also help move EA towards being more of a do-ocracy and less of an "EV-ocracy".
Another great advantage, imho, is that people and community builders would be more mindful of other opportunities, or would even try to start alternatives to EAG, which have been crowded out by the current subsidies.
Do you have approximate statistics on the percentage distribution of paths you most commonly recommend during your 1-1 calls? In particular, AI safety related vs anything else, and within AI safety, working at top labs vs policy vs theoretical research. For example: "we recommend that 1% of people in our calls consider work in something climate-related, 50% consider work in AI safety at OpenAI or other top labs, 50% consider work in AI policy, 20% consider work in biosecurity, 30% in EA meta, 5% in earning to give, ..." (where the percentages can overlap, since one call can recommend several paths).
I ask because I heard the meme that "80,000 Hours calls are not worth the time, they just tell everyone to go into AI safety". I think it's not true, but I would like to have some data to refute it.