This post addresses concerns people have about the FTX Foundation and the Future Fund, and about our contribution to worries that EA is spending too freely.
I think there are a lot of important and reasonable concerns about spending money too quickly and without adequate oversight. We're making decisions that directly affect how many people spend a majority of their working hours, those decisions are hard to make, and there can be community-wide consequences to making them badly. It's also possible that some of our grantees are spending more than is optimal, and our funding is contributing to that. (If there are particular FTX Foundation grants that you think were a significant mistake, we would love feedback about those grants! You can share the feedback (anonymously if you prefer) via this form.)
Our processes are of course imperfect and we'll need to continue to improve over time. Below, I explain some more about our processes and how we're managing downside risk.
Some people seem to think that our procedure for approving grants is roughly "YOLO #sendit." This impression isn’t accurate. In reality, before a typical grant goes out it is:
- Recommended by a staff member or regrantor,
- Screened for red flags by a staff member, and then, when needed, reviewed (in specialized Slack channels created for this purpose) to address legal risks, public-communications risks, interference with the work of other EA grantmakers, community-health risks, and other potential harms (we usually find a way to make good grants, but this process often improves them and reduces their risks),
- (If relevant) Reviewed by technical expert(s),
- Endorsed by another staff member, and
- Independently reviewed for final signoff.
(For regrantors, the process is primarily focused on avoiding downsides, or giving optional suggestions on how to improve the grants.) Often, this process can move swiftly because we can quickly tell that some idea has significant upside and minimal downside, or just isn't a fit for our interests. For more complex decisions that require more input and discussion, it can take as long as it needs to.
In addition, I heard some people express confusion about how we can hit our funding targets with such a small team. A big part of the answer is that we're relying on a large number of regrantors and over a dozen external advisors whom we frequently consult. (We've gotten a lot of help from folks at Open Phil in particular, which we really appreciate!) For example, for our recent open funding call, every application was reviewed by two people. For most of the grants that we ultimately funded, we had the applications reviewed by two or more further domain experts.
Relatedly, I get the sense that people are particularly worried by community-building expenditure, and tie that expenditure to the FTX Foundation. But we've not actually done much community-building funding; what we have done is very recent, and we're not the primary funder of most of the activity discussed in this post (insofar as I can tell which grants are being discussed). (To be clear: although I do want to note that the FTX Foundation is not actually the primary funder of this activity, I don't mean to take a stand on whether the spending is in fact excessive. I'm not totally sure which grants are being discussed, and it isn't clear to me that CEA in particular is overspending.)
Finally, I wanted to emphasize that we’re generally giving a lot of consideration to downside risks and community effects as part of our work. As one example, the core of the regranting program was designed in early January, and it was fully launched in early April. Much of the working time in the interval was spent talking with potential stakeholders and adjusting the program to mitigate downside risk while maintaining the value of the program. We introduced things like detailed guidance for regrantors, a beta period to test the program before full rollout, a randomization element to increase fairness and decrease expressive stakes, an adjusted compensation structure, and a screening system (as described above) so that every grant could be assessed for downside risks, effects on the community, and conflicts of interest. And we’re continuing to update this, too, in light of people’s concerns. For example, over time we've started to treat our community-building grants as more sensitive in response to feedback.
I think I can understand where the confusion is coming from. We haven't yet given a progress update, explained the above processes, or spent a lot of time answering community questions about what we're doing. In addition, our initial announcement emphasized ambition, fast grant decisions, regranting programs, and massively scalable projects. I do think it was important to encourage our community along those dimensions because I think we do need to up our action-orientation and level of ambition. But I think the combination has led to concern and (sometimes inaccurate) speculation about our work.
One takeaway for me is that we have been under-communicating. We're planning to publish a review of our work so far in the next month or so, and I think it'll be much easier to have a grounded conversation about our work at that point.
I wrote a comment about TripleByte's feedback process here; this blog post is great too. In our experience, the fear of lawsuits and PR disasters from giving feedback to rejected candidates turned out to be greatly overblown, even at massive scale. (We gave every candidate feedback regardless of how well they performed on our interview.)
Something I didn't mention in my comment is that much of TripleByte's feedback email was composed of prewritten text blocks carefully optimized to be helpful and non-offensive. While interviewing a candidate, I would check boxes for things like "this candidate used their debugger poorly", and then their feedback email would automatically include a prewritten spiel with links on how to use a debugger well (or whatever). I think this model could make a lot of sense for the fund:
- It makes giving feedback way more scalable. There's a one-time setup cost of prewriting some text blocks, and probably a minor ongoing cost of gradually improving your blocks over time, but the marginal cost of giving a candidate feedback is just 30 seconds of checking some boxes. (IIRC our approach was to tell candidates "here are some things we think it might be helpful for you to read" and then, when in doubt, err on the side of checking more boxes. For funding, I'd probably take it a step further, and rank or score the text blocks according to their importance to your decision. At TripleByte, we would score the candidate on different facets of their interview performance and send them their scores -- if you're already scoring applications according to different facets, this could be a cheap way to provide feedback.)
- Minimize lawsuit risk. It's not that costly to have a lawyer vet a few pages of prewritten text that will get reused over and over. (We didn't have a lawyer look over our feedback emails, and it turned out fine, so this is a conservative recommendation.)
- Minimize PR risk. Someone who posts their email to Twitter can expect bored replies like "yeah, they wrote the exact same thing in my email." (Again, PR risk didn't seem to be an issue in practice despite giving lots of freeform feedback along with the prewritten blocks, so this seems like a conservative approach to me.)
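To make the checkbox-to-text-block model concrete, here is a minimal sketch in Python. All flag names, block text, and scoring facets are hypothetical illustrations, not TripleByte's actual system; the point is just that feedback composition reduces to a lookup table of prevetted blocks plus optional facet scores.

```python
# Each prewritten block is vetted once (e.g., by a lawyer) and reused
# across every rejection email. Keys are hypothetical reviewer checkboxes.
FEEDBACK_BLOCKS = {
    "debugger_use": (
        "We noticed some difficulty using a debugger. Here are resources "
        "on stepping through code effectively: ..."
    ),
    "unclear_writing": (
        "The written portions of the application were hard to follow. "
        "A guide on structuring proposals: ..."
    ),
    "weak_theory_of_change": (
        "The proposal's path to impact was unclear to us. Some reading "
        "on building a theory of change: ..."
    ),
}

def compose_feedback(checked_flags, scores=None):
    """Build a feedback email body from reviewer checkboxes.

    checked_flags: flags the reviewer ticked (keys of FEEDBACK_BLOCKS).
    scores: optional dict mapping facet name -> score to report verbatim.
    """
    parts = ["Here are some things we think it might be helpful for you to read:"]
    for flag in checked_flags:
        if flag in FEEDBACK_BLOCKS:  # unknown flags are silently skipped
            parts.append("- " + FEEDBACK_BLOCKS[flag])
    if scores:
        parts.append("Your scores by facet:")
        for facet, score in sorted(scores.items()):
            parts.append(f"  {facet}: {score}/5")
    return "\n".join(parts)

email = compose_feedback(
    ["debugger_use", "weak_theory_of_change"],
    scores={"technical depth": 4, "clarity": 2},
)
print(email)
```

The marginal cost per applicant is exactly what's described above: ticking boxes. Because the blocks themselves never vary, legal review and PR exposure are bounded by a few pages of static text rather than by every email sent.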
If I were you, I think I'd experiment with hiring one of the writers of the TripleByte feedback emails as a contractor or consultant. Happy to make an intro.
A few final thoughts:
- Without feedback, a rejectee is likely to come up with their own theory of why they were rejected. You have no way to observe this theory or vet its quality. So I think it's a mistake to hold yourself to a high bar. You just have to beat the rejectee's theory. (BTW, most of the EA rejectee theories I've heard have been very cynical.)
- You might look into liability insurance if you don't have it already; it probably makes sense to get it for other reasons anyway. I'd be curious how the cost of insurance changes depending on the feedback you're giving.