This post comments on concerns people have about the FTX Foundation and the Future Fund, and on worries that we're contributing to a free-spending culture in EA.
I think there are a lot of important and reasonable concerns about spending money too quickly and without adequate oversight. We're making decisions that directly affect how many people spend a majority of their working hours, those decisions are hard to make, and there can be community-wide consequences to making them badly. It's also possible that some of our grantees are spending more than is optimal, and that our funding is contributing to that. (If there are particular FTX Foundation grants that you think were a significant mistake, we would love feedback about them! You can share the feedback, anonymously if you prefer, via this form.)
Our processes are of course imperfect, and we'll need to keep improving them over time. Below, I explain more about those processes and how we're managing downside risk.
Some people seem to think that our procedure for approving grants is roughly "YOLO #sendit." That impression is inaccurate. In reality, before a typical grant goes out, it is:
- Recommended by a staff member or regrantor,
- Screened for red flags by a staff member, and then, when needed, reviewed (in specialized Slack channels created for this purpose) to address legal risks, public communications risks, interference with the work of other EA grantmakers, community health risks, and other potential harms (we usually find a way to make good grants, but this process often improves them and reduces their risks),
- (If relevant) Reviewed by technical expert(s),
- Endorsed by another staff member, and
- Independently reviewed for final signoff.
(For grants recommended by regrantors, the process focuses primarily on avoiding downsides and offering optional suggestions for improving the grants.) Often, this process can move swiftly because we can quickly tell that an idea has significant upside and minimal downside, or just isn't a fit for our interests. For more complex decisions that require more input and discussion, it can take as long as it needs to.
In addition, I've heard some people express confusion about how we can hit our funding targets with such a small team. A big part of the answer is that we're relying on a large number of regrantors and over a dozen external advisors whom we frequently consult. (We've gotten a lot of help from folks at Open Phil in particular, which we really appreciate!) For example, in our recent open funding call, every application was reviewed by two people, and most of the grants we ultimately funded were also reviewed by two or more additional domain experts.
Relatedly, I get the sense that people are particularly worried about community-building expenditure, and that they tie that expenditure to the FTX Foundation. But we haven't actually done much community-building funding; what we have done is very recent; and we're not the primary funder of most of the activity discussed in this post (insofar as I can tell which grants are being discussed). (Note: although I do want to clarify that the FTX Foundation is not actually the primary funder of this activity, I don't mean to take a stand on whether the spending is in fact excessive. I'm not totally sure which grants are being discussed, and it isn't clear to me that CEA in particular is overspending.)
Finally, I wanted to emphasize that we're giving a lot of consideration to downside risks and community effects across our work. As one example, the core of the regranting program was designed in early January, but it wasn't fully launched until early April. Much of the working time in the interval went to talking with potential stakeholders and adjusting the program to mitigate downside risk while preserving its value. We introduced things like detailed guidance for regrantors, a beta period to test the program before full rollout, a randomization element to increase fairness and decrease expressive stakes, an adjusted compensation structure, and a screening system (as described above) so that every grant could be assessed for downside risks, effects on the community, and conflicts of interest. And we're continuing to update the program in light of people's concerns. For example, over time we've started to treat our community-building grants as more sensitive in response to feedback.
I think I can understand where the confusion is coming from. We haven't yet given a progress update, explained the above processes, or spent a lot of time answering community questions about what we're doing. In addition, our initial announcement emphasized ambition, fast grant decisions, regranting programs, and massively scalable projects. I do think it was important to encourage our community along those dimensions, because we do need to up our action-orientation and level of ambition. But I think the combination has led to concern and (sometimes inaccurate) speculation about our work.
One takeaway for me is that we have been under-communicating. We're planning to publish a review of our work so far in the next month or so, and I think it'll be much easier to have a grounded conversation about our work at that point.
Thanks for your comment! I wanted to try to clarify a few things regarding the two claims you see us as making. I agree there are major benefits to providing feedback to applicants. But there are significant costs too, and I want to explain why it's at least non-obvious what the right choice is here.
On (1), I agree with Sam that giving detailed feedback on the >1600 applications we rejected wouldn't be the right prioritization for our team right now; it would cut significantly into our total output for the year. I think it could be done if need be, but it would be really hard and would require an innovative approach. So I don't think we should be doing this now, but I'm not saying that we won't try to find ways to give more feedback in the future (see below).
On (2), although we want to effectively allocate at least $100M this year, we don't plan to do 100% of this via this particular process without growing our team. In our announcement post, we said we would try four different processes and see what works best; we could continue all, some, or none of them. We have given out considerably less than $100M via the open call (more details in our progress update in a month or so), and, as I mentioned in another comment, the investigation process for larger and/or more complex grants often takes longer than two weeks.
On hiring someone to do this: I think there are good reasons for us not to hire an extra person whose job is to give feedback to everyone. Most importantly, there are lots of roles we could hire for; I take early hiring decisions very seriously because they affect the culture and long-term trajectory of the organization; and we want to take those decisions slowly and deliberately. I also think it's important to maintain a certain quality bar for this kind of feedback, which would likely require significant oversight from the existing team.
Will we provide feedback to rejected applicants in the future? Possibly, but I think this involves complex tradeoffs and isn't a no-brainer, even at scale. I'll try to explain some of the reasons I see it this way. A simple and unfortunate reason is that there are a lot of opportunities for angry rejected applicants (most of whom we don't know at all and who aren't part of the effective altruism community) to play "gotcha" on Twitter, or make lawsuit threats, in response to badly worded feedback. Even if the chance of this happening is small for any single rejected application, the cumulative chance of it happening at least once is substantial if you're giving feedback to thousands of people. (I think this may be why even many public-spirited employers and major funders don't provide such feedback.) I could imagine a semi-standardized process that gave more feedback to people who wanted it and very nearly got funded. (A model that I heard TripleByte used sounds interesting to me.) We'll have to revisit these questions the next time we have an open call, and we'll take the conversation here into account. We really appreciate your feedback!