TLDR: If you're an EA-minded animal funder donating $200K/year or more, we'd love to connect with you about several exciting initiatives that AIM is launching over the next several months.
AIM (formerly Charity Entrepreneurship) has a history of incubating and supporting highly effective charities across various cause areas. We have also launched a variety of additional programs aimed at other impactful sectors, from philanthropy to research to local effective giving. Through engaging at these different levels of impact, we have noticed that animal welfare seems particularly impactful and particularly neglected, even amongst a crop of already impactful and neglected cause areas.

We believe that there are several opportunities to meaningfully impact animal welfare through donor collaboration and programming. To that end, we’re launching a few exciting initiatives over the coming months. Specifically, we are excited about two projects that are launching soon:
- An animal-focused Foundation Program round, where we'll be supporting a cohort of ambitious founders as they develop their philanthropic strategy. This cohort begins April 15.
- An animal-focused funding circle, bringing funders together to strategically deploy capital to the most promising animal charities. This will likely launch mid-summer.
We believe these initiatives will offer ambitious funders unique opportunities for increased impact. If you're an EA-minded animal funder who donates $200K or more per year, please don’t hesitate to reach out.
Hi, I am the Director of Research at Charity Entrepreneurship (CE, now AIM). I wanted to quickly respond to this point.
– –
Quality of our reports
I would like to push back a bit on Joey's response here. I agree that our research is quicker, scrappier, and goes into less depth than that of other orgs, but I am not convinced that our reports have more errors or worse reasoning than the reports of other organisations (thinking of non-peer-reviewed global health and animal welfare organisations like GiveWell, OpenPhil, Animal Charity Evaluators, Rethink Priorities, Founders Pledge).
I don’t have strong evidence for thinking this. Mostly I am going off the number of errors that incubatees find in the reports. In each cohort we have ~10 potential founders digging into ~4-5 reports for a few weeks. I estimate the potential founders highlight on average roughly 0.8 non-trivial, non-major errors (i.e. something that would change a CEA by ~20%) and 0 major errors per cohort. This seems to be in the same order of magnitude as the number of errors GiveWell gets under scrutiny (e.g. here).
And ultimately all our reports are tested in the real world by people putting the ideas into practice. If our reports do not line up with reality in any major way, we expect to find out when founders do their own research or when a charity pivots or shuts down, as MHI has done recently.
One caveat to this is that I am more confident about the reports on the ideas we do recommend than about the reports on non-recommended ideas, which receive less oversight internally (as they are less decision-relevant for founders) and less scrutiny from incubatees and from being put into action.
I note also that, in this entire critique and in the threads here (which I have skimmed), no one appears to have pointed out any actual errors in any CE report. So I find it hard to update on anything written here. (The possible exception is me, in this post, pointing to MHI, which does unfortunately seem to have shut down in part due to an error in the initial research.)
So I think the quality of our research is comparable to that of other orgs, but my evidence for this is weak and I have not done a thorough benchmarking. I would be interested in ways to test this. It could be a good idea for CE to run a Change Our Mind contest, like GiveWell did, in order to test the robustness of our research. Something for me to consider. It could also be useful (although I doubt it would be worth the effort) to have an external research evaluator review our work and benchmark us against other organisations.
[EDIT: To be clear, I am talking here about quality in terms of the number of mistakes/errors. I agree our research is often shorter and as such more willing to take shortcuts to reach conclusions.]
– –
That said, I do agree that we should make it very clear in all our reports who each report is written for, why it was written, and what the reader should take from it. We do this in the introduction section of all our reports, and I will review the introductions of future reports to make sure this is absolutely clear.