This post addresses concerns people have raised about the FTX Foundation and the Future Fund, and our contribution to worries about free-spending EA.

I think there are a lot of important and reasonable concerns about spending money too quickly and without adequate oversight. We're making decisions that directly affect how many people spend a majority of their working hours, those decisions are hard to make, and there can be community-wide consequences to making them badly. It's also possible that some of our grantees are spending more than is optimal, and our funding is contributing to that. (If there are particular FTX Foundation grants that you think were a significant mistake, we would love feedback about those grants! You can share the feedback (anonymously if you prefer) via this form.)

Our processes are of course imperfect and we'll need to continue to improve over time. Below, I explain some more about our processes and how we're managing downside risk.

Some people seem to think that our procedure for approving grants is roughly "YOLO #sendit." This impression isn’t accurate. In reality, before a typical grant goes out it is:

  • Recommended by a staff member or regrantor,
  • Screened for red flags by a staff member, and then when needed reviewed (in specialized Slack channels created for this purpose) to address legal risks, public communications risks, interference with the work of other EA grantmakers, community health risks, and other potential harms (we usually find a way to make good grants, but this process often improves them and reduces their risks),
  • (If relevant) Reviewed by technical expert(s),
  • Endorsed by another staff member, and
  • Independently reviewed for final signoff.

(For regrantors, the process is primarily focused on avoiding downsides, or giving optional suggestions on how to improve the grants.) Often, this process can move swiftly because we can quickly tell that some idea has significant upside and minimal downside, or just isn't a fit for our interests. For more complex decisions that require more input and discussion, it can take as long as it needs to.

In addition, I heard some people express confusion about how we can hit our funding targets with such a small team. A big part of the answer is that we're relying on a large number of regrantors and over a dozen external advisors whom we frequently consult. (We've gotten a lot of help from folks at Open Phil in particular, which we really appreciate!) For example, for our recent open funding call, every application was reviewed by two people. For most of the grants that we ultimately funded, we had the applications reviewed by two or more further domain experts.

Relatedly, I get the sense that people are particularly worried by community-building expenditure, and tie that expenditure to the FTX Foundation. But we've not actually done much community-building funding; what we have done is very recent, and we're not the primary funder of most of the activities discussed in this post (insofar as I can tell which grants are being discussed). (Note: although I do want to clarify that the FTX Foundation is not actually the primary funder of this activity, I don't mean to take a stand on whether the spending is in fact excessive. I'm not totally sure which grants are being discussed, and it isn't clear to me that CEA in particular is overspending.)

Finally, I wanted to emphasize that we're generally giving a lot of consideration to downside risks and community effects as part of our work. As one example, the core of the regranting program was designed in early January, and it was fully launched in early April. Much of the working time in the interval was spent talking with potential stakeholders and adjusting the program to mitigate downside risk while maintaining its value. We introduced things like detailed guidance for regrantors, a beta period to test the program before full rollout, a randomization element to increase fairness and decrease expressive stakes, an adjusted compensation structure, and a screening system (as described above) so that every grant could be assessed for downside risks, effects on the community, and conflicts of interest. And we're continuing to update this, too, in light of people's concerns. For example, over time we've started to treat our community-building grants as more sensitive in response to feedback.

I think I can understand where the confusion is coming from. We haven't yet given a progress update, explained the above processes, or spent a lot of time answering community questions about what we're doing. In addition, our initial announcement emphasized ambition, fast grant decisions, regranting programs, and massively scalable projects. I do think it was important to encourage our community along those dimensions because I think we do need to up our action-orientation and level of ambition. But I think the combination has led to concern and (sometimes inaccurate) speculation about our work.

One takeaway for me is that we have been under-communicating. We're planning to publish a review of our work so far in the next month or so, and I think it'll be much easier to have a grounded conversation about our work at that point.

Comments

FWIW:

1) agree with everything Nick said
2) I am really proud of what the team has done on net, although obviously nothing's perfect!
3) We really do love feedback!  If you have some on a specific grant we made you can submit here, or feel free to separately ping me/Nick/etc. :)


 

Do you think it was a mistake to put "FTX" in the "FTX Future Fund" so prominently? My thinking is that you likely want the goodness of EA and philanthropy to make people feel more positively about FTX, which seems fine to me, but in doing so you also run the risk that if FTX has any big scandal or other issue, it could cause blowback on EA, whether merited or not.

I understand the Future Fund has tried to distance itself from effective altruism somewhat, though I'm skeptical this has worked in practice.

To be clear, I do like FTX personally, am very grateful for what the FTX Future Fund does, and could see reasons why putting FTX in the name is also a positive.

Note: On 2022-09-22 (prior to the "FTX debacle"), the parent comment had 21 karma points. Out of 16 top-level comments on this page, it appeared as the 8th comment (karma-wise and position-wise).

This was a very valid concern after all Peter :(

so sad it went unanswered

Note that it may be hard to give criticism (even if anonymous) about FTX's grantmaking because a lot of FTX's grantmaking is (currently) not disclosed. This is definitely understandable and likely avoids certain important downsides, but it also does amplify other downsides (e.g., public misunderstanding of FTX's goals and outputs) - I'm not sure how to navigate that trade-off, but it is important to acknowledge that it exists!

Totally agreed!

although, to be frank, it does make me a bit confused where some of the consternation about specific, unspecified grants has come from...

This seems slightly cryptic. Have you considered following the style and norms of other comments on the forum?

> although, to be frank, it does make me a bit confused where some of the consternation about specific, unspecified grants has come from...

If your comment is about public sentiment around FTX grant decisions, there doesn't seem to be public knowledge of grants actually made. So it doesn't seem like there could be informed public opinion of the actual results. 

(If you are signaling/referencing some private discussion, this point doesn’t apply.)

Weak downvote because "Have you considered following the style and norms of other comments on the forum?" is needlessly rude

Yes, this is fair. The current vote score seems a little harsh though.

Anyways, I just got off a call with a collaborator who was also very excited about my comment—something about “billionaire” and “great doom”.

Yes, strong funding for x-risk is important, but in my opinion there could be greater focus on high-quality work more broadly.

oh wow, when I made the comment we were at -1 and +2 respectively, I agree this was a bigger reaction than I was expecting lol

Completely agree! Although I imagine that the situation will change soon due to 1) last funding decisions being finalized, 2) funded projects coming out of stealth mode, 3) more rejected applicants posting their applications publicly (when there are few downsides to doing so), and 4) the Future Fund publishing a progress report in the next few months.

So I expect the non-disclosure issue to be significantly reduced in the next few months.

Just a quick thank you to the FTX team for working so hard at such a difficult task!

Thanks for writing this up, Nick. It seems like a pretty good first step in communicating about what I imagine is a hugely complex project to deploy that much funding in a responsible manner. Something for FTX to consider within the context of community health and the responsibilities that you can choose to acknowledge as a major funding player: 

– How could a grant making process have significant effects on community health? What responsibilities would be virtuous for a major funding player to acknowledge and address? – 

I've picked up on lots of (concerning) widespread psychological fallout from people, especially project leaders, struggling to make sense of decision-making surrounding all this money pouring into EA (primarily from FTX). I wouldn't want to dichotomize this discussion by weighing it against the good that can be done with the increased funding, but there's value in offering constructive thoughts on how things could be done better.

What seems to have happened at FTX is some mixture of deputizing several individuals as funders + an application process (from what I've been hearing) that offers zero feedback. For those involved over there, is this roughly correct?

If indeed there are no other plans to handle the fundamentals of grantmaking beyond deployment of funds (fundamentals that I believe dramatically affect community health), then unless someone can persuade me otherwise, I'd predict a lot more disoriented and short-circuited (key) EAs, especially because many people in this community orient themselves in the world of the legible and explicit.

In particular, people are having trouble getting a sense of how merit is supposed to work in this space. One of the core things I try to get them to consider, which is perhaps more pronounced now than ever, is that merit is only one of many currencies upon which your social standing and the evaluation of your project rest. This is hard for people to look at.

I hope FTX plans to take more responsibility for community health by following up with investment in legible M&E and application feedback. Echoing what I said a month ago about funders in general: https://bit.ly/3N1q3To

"Funders could do more to prioritize fostering relationships – greater familiarity with project leaders reduces inefficiencies of all sorts, including performative and preparation overhead, miscommunication, missed opportunities, etc.

In my opinion, this should also apply to unsuccessful projects. A common theme that I've seen from funders, partly due to bandwidth issues though not entirely, is aversion to giving constructive feedback to unsuccessful projects that nonetheless endure within the community. Given my firsthand experience with many clients who are fairly averse to interpersonal conflict, it wouldn't surprise me if aversion to conflict + public relations considerations + legal issues (and other things) precluded funders from giving constructive feedback to failed applications. Funders would likely need to hold the belief that this feedback would meaningfully improve these projects' prospects, and therefore the community overall, in order to put in the requisite effort to get through these blocks to this type of action. They'd also likely need to feel reassured that the feedback wouldn't be excessively damaging reputationally (for both themselves and others), destabilize the community, or undermine the integrity of community norms.

...

EA leaders are often at least partially in the dark regarding expectations from funders. This could be the case for many reasons, but common reasons among leaders included the following:

• Reputational fears – Reticence to reach out due to some (un)justifiable fear of reputational harm

• Value system clash/lack of familiarity – not wanting to waste the time of funders, usually due to lack of familiarity and fears of how they would be received, but also sometimes a principled decision about not wanting to bother important decision-makers

• Not having considered reaching out to funders regarding expectations at a meaningful enough grain of detail

• (Likely not always misplaced) concerns about arbitrariness of the evaluation process

• Preparation overhead – not being ‘ready’ in various ways. In some cases, my outside view of the situation led me to believe that quite a bit of preparation overhead and perfunctory correspondence could be avoided if funders made it clearer that they care less about certain aspects of performative presentation."

Thanks for sharing your thoughts and concerns, Tee. I'd like to comment on application feedback in particular. It's true that we are not providing feedback on the vast majority of applications, and I can see how it would be frustrating and confusing to be rejected without understanding the reasons, especially when funders have such large resources at their disposal.

We decided not to give feedback on applications because we didn't see how to do it well and stay focused on our current commitments and priorities. We think it would require a large time investment to give feedback to everyone who wanted it on the 1700 applications we received, and we wanted to keep our focus on making our process timely, staying on top of our regranting program, dealing with other outstanding grants outside of these programs, hiring, getting started on reviewing our progress to date, and moving on to future priorities. I don't want to say it's impossible to find a process to give high-quality feedback at scale that we could do at acceptable time costs right now, but I do want to say that it would not be easy and would require an innovative approach. I hope that helps explain why we chose to prioritize as we did.

Agree with this--it's impossible to give constructive feedback on thousands of applications.  The decision is between not giving grants, or accepting that most grant applications won't get much feedback from us.  We chose the latter.

I'd like to challenge this. There are simultaneous claims that:

  1. it's impossible to give constructive feedback on thousands of applications
  2. It is possible to effectively (in an expected-value sense) allocate $100m - $1b a year using this process, which evaluates thousands of applications from a broad range of applicants related to a broad spectrum of ideas, over just a two-week period

I don't think both can be true in the long run. As others in the comments suggested, both may be a question of further investment in and improvement of the process. There is a lot of room for improvement: any feedback is better than no feedback, and it doesn't have to be super constructive; just knowing if anyone even spent more than a minute looking at your application is useful info that applicants currently don't have.

Wanting to be constructive: would there be arguments against hiring an extra person whose job is to observe the decision making process (I assume there is a kind of internal log of decisions/opinions), and formulate non-zero feedback on applications?

It would be very surprising if there weren't an opportunity cost to providing feedback. Those might include:

  1. Senior management time to oversee the project, bottlenecking other plans
  2. PR firefighting and morale counselling when 1 in ~100 people get angry at what you say and cause you grief (this will absolutely happen)
  3. Any hires capable of thinking up and communicating helpful feedback (this is difficult!) could otherwise use that time to read and make decisions on more grant proposals in more areas — or just improve the decision-making among the same pool of applicants.

That there's an opportunity cost doesn't show it's not worth it, but my guess is that right now it would be a huge mistake for the Future Fund to provide substantial feedback except in rare cases.

That could change in future if their other streams of successful applicants dry up and improving the projects of people who were previously rejected becomes the best way to find new things they want to fund.

> an opportunity cost to providing feedback
>
> huge mistake for Future Fund to provide substantial feedback except in rare cases.

Yep, I'd imagine what makes sense is between 'highly involved and coordinated attempt to provide feedback at scale' and 'zero'. I think it's tempting to look away from how harmful 'zero' can be at scale.

> That could change in future if their other streams of successful applicants dry up and improving the projects of people who were previously rejected becomes the best way to find new things they want to fund.


Agreed – this seems like a way to pick up easy wins and should be a good go-to for grant makers to circle back. However, banking on this as handling the concerns that were raised doesn't account for all the things that come with unqualified rejection and people deciding to do other things, leave EA, incur critical stakeholder instability etc. as a result. 

In other words, for the consequentialist-driven among us, I don't think community health is merely a nice-to-have if we're serious about having a community of highly effective people working urgently on hard/complex things.

"However, banking on this as handling the concerns that were raised doesn't account for all the things that come with unqualified rejection and people deciding to do other things, leave EA, incur critical stakeholder instability etc. as a result. "

I mean I think people are radically underestimating the opportunity cost of doing feedback properly at the moment. If I'm right, then getting feedback might reduce people's chances of getting funded by, say, 30% or 50%, because the throughput for grants will be much reduced.

I would probably rather have a 20% chance of getting funding for my project without feedback than a 10% chance with feedback, though people's preferences may vary.

(Alternatively all the time spent explaining and writing and corresponding will mean worse projects get funded as there's not much time left to actually think through which projects are most impactful.)

Rob, I think you're consistently arguing against a point few people are making. You talk about ongoing correspondence with projects, or writing (potentially paragraphs of) feedback. Several people in this thread have suggested that pre-written categories of feedback would be a huge improvement from the status quo, and I can't see anything you've said that actually argues against that.

Also, as someone who semi-regularly gives feedback to 80+ people, I've never found it to make my thinking worse, but I've sometimes found it makes my thinking better.

I'm not saying there's no cost to feedback. Of course there's a cost! But these exaggerations are really frustrating to read, because I actually do this kind of work and the cost of what I'm proposing is a lot lower than you keep suggesting.

If it's just a form where the main reason for rejection is chosen from a list then that's probably fine/good.

I've seen people try to do written feedback before and find it a nightmare so I guess people's mileage varies a fair bit.

I've got a similar feeling to Khorton. Happy to have been pre-empted there. 

It could be helpful to consider what it is that legibility in the grant application process (of which post-application feedback is only one form) is meant to achieve. Depending on the grant maker's aims, this can non-exhaustively include developing and nurturing talent, helping future applicants self-select, orienting projects on whether they are doing a good job, being a beacon and marketing instrument, clarifying and staking out an epistemic position, serving an orientation function for the community, etc.

And depending on the basket of things the grant maker is trying to achieve, different pieces of legibility affect 'efficiency' in the process. For example, case studies and transparent reasoning about accepted and rejected projects, published evaluations, criteria for projects to consider before applying, hazard disclaimers, risk profile declarations, published work on the grant maker's theory of change, etc. can give grant makers 'published' content to invoke during the post-application process that allows for the scaling of feedback (e.g. our website states that we don't invest in projects that rapidly accelerate 'x'). There are other forms of proactive communication and stratifying applicant journeys that would make things even more efficient.

FTX did what they did, and there is definitely a strong case for why they did it that way. Moving forward, I'd be curious to see if they acknowledge and make adjustments in light of the fact that different forms and degrees of legibility can affect the community.


 

Okay, upon review, that was a little bit too much of a rhetorical flourish at the end. Basically, I think there's something seriously important to consider here about how process can negatively affect community health and alignment, which I believe to be important for this community in achieving the plurality of ambitious goals we're shooting for. I believe FTX could definitely affect this in a very positive way if they wanted to.

Thanks for your comment! I wanted to try to clarify a few things regarding the two claims you see us as making. I agree there are major benefits to providing feedback to applicants. But there are significant costs, too, and I want to explain why it’s at least a non-obvious decision what the right choice is here.

On (1), I agree with Sam that it wouldn't be the right prioritization for our team right now to give detailed feedback to the >1,600 applications we rejected; it would cut into our total output for the year significantly. I think it could be done if need be, but it would be really hard and require an innovative approach. So I don’t think we should be doing this now, but I’m not saying that we won’t try to find ways to give more feedback in the future (see below).

On (2), although we want to effectively allocate at least $100M this year, we don't plan to do 100% of this using this particular process without growing our team. In our announcement post, we said we would try four different processes and see what works best. We could continue all, some, or none of them. We have given out considerably less than $100M via the open call (more in our progress update in a month or so); and, as I mentioned in another comment, for larger and/or more complex grants the investigation process often takes longer than two weeks.

On hiring someone to do this: I think there are good reasons for us not to hire an extra person whose job is to give feedback to everyone. Most importantly: there are lots of things we could hire for, I take early hiring decisions very seriously because they affect the culture and long-term trajectory of the organization, and we want to take those decisions slowly and deliberately. I also think it's important to maintain a certain quality bar for this kind of feedback, and this would likely require significant oversight from the existing team.

Will we provide feedback to rejected applicants in the future? Possibly, but I think this involves complex tradeoffs and isn't a no-brainer. I'll try to explain some of the reasons I see it this way, even at scale. A simple and unfortunate reason is that there are a lot of opportunities for angry rejected applicants - most of whom we do not know at all and aren't part of the effective altruism community - to play "gotcha" on Twitter (or with lawsuit threats) in response to badly worded feedback, and even if the chances of this happening are small for any single rejected application, the cumulative chance of it happening at least once is substantial if you're giving feedback to thousands of people. (I think this may be why even many public-spirited employers and major funders don't provide such feedback.) I could imagine a semi-standardized process that gave more feedback to people who wanted it and very nearly got funded. (A model that I heard TripleByte used sounds interesting to me.) We'll have to revisit these questions the next time we have an open call, and we'll take the conversation here into account. We really appreciate your feedback!

> A model that I heard TripleByte used sounds interesting to me.

I wrote a comment about TripleByte's feedback process here; this blog post is great too. In our experience, the fear of lawsuits and PR disasters from giving feedback to rejected candidates was much overblown, even at a massive scale. (We gave every candidate feedback regardless of how well they performed on our interview.)

Something I didn't mention in my comment is that much of TripleByte's feedback email was composed of prewritten text blocks carefully optimized to be helpful and non-offensive. While interviewing a candidate, I would check boxes for things like "this candidate used their debugger poorly", and then their feedback email would automatically include a prewritten spiel with links on how to use a debugger well (or whatever). I think this model could make a lot of sense for the fund:

  • It makes giving feedback way more scalable. There's a one-time setup cost of prewriting some text blocks, and probably a minor ongoing cost of gradually improving your blocks over time, but the marginal cost of giving a candidate feedback is just 30 seconds of checking some boxes. (IIRC our approach was to tell candidates "here are some things we think it might be helpful for you to read" and then when in doubt, err on the side of checking more boxes. For funding, I'd probably take it a step further, and rank or score the text blocks according to their importance to your decision. At TripleByte, we would score the candidate on different facets of their interview performance and send them their scores -- if you're already scoring applications according to different facets, this could be a cheap way to provide feedback.)

  • Minimize lawsuit risk. It's not that costly to have a lawyer vet a few pages of prewritten text that will get reused over and over. (We didn't have a lawyer look over our feedback emails, and it turned out fine, so this is a conservative recommendation.)

  • Minimize PR risk. Someone who posts their email to Twitter can expect bored replies like "yeah, they wrote the exact same thing in my email." (Again, PR risk didn't seem to be an issue in practice despite giving lots of freeform feedback along with the prewritten blocks, so this seems like a conservative approach to me.)
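To make the mechanics concrete, here's a minimal sketch of what a checkbox-to-prewritten-block pipeline could look like. Everything in it is invented for illustration (the block names, text, and function are hypothetical); it's not TripleByte's or the Future Fund's actual system:

```python
# Hypothetical sketch of checkbox-driven feedback, loosely modeled on the
# approach described above. All category names and text blocks are invented.

# Prewritten, reusable text blocks, keyed by checkbox ID. Writing and vetting
# these is a one-time cost; reusing them per applicant costs almost nothing.
FEEDBACK_BLOCKS = {
    "out_of_scope": (
        "Based on the information provided, the proposal did not appear "
        "to be in scope for what we fund."
    ),
    "unclear_theory_of_change": (
        "We were not confident in the theory of change as described; "
        "you may find it useful to spell out intermediate outcomes."
    ),
    "team_fit_uncertain": (
        "We were uncertain that the founding team was well positioned "
        "to execute this particular project."
    ),
}

def compose_feedback(checked_boxes: list[str]) -> str:
    """Assemble a feedback email body from the boxes a reviewer checked."""
    blocks = [FEEDBACK_BLOCKS[box] for box in checked_boxes if box in FEEDBACK_BLOCKS]
    if not blocks:
        return "Thank you for applying. We were unable to offer funding this round."
    return "\n\n".join(blocks)

# Example: a reviewer ticks two boxes; the marginal cost is a few seconds.
print(compose_feedback(["out_of_scope", "unclear_theory_of_change"]))
```

The design choice that makes this scale is that the expensive step (writing careful, non-offensive prose and having it vetted) happens once per block rather than once per applicant.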

If I were you, I think I'd experiment with hiring one of the writers of the TripleByte feedback emails as a contractor or consultant. Happy to make an intro.

A few final thoughts:

  • Without feedback, a rejectee is likely to come up with their own theory of why they were rejected. You have no way to observe this theory or vet its quality. So I think it's a mistake to hold yourself to a high bar. You just have to beat the rejectee's theory. (BTW, most of the EA rejectee theories I've heard have been very cynical.)

  • You might look into liability insurance if you don't have it already; it probably makes sense to get it for other reasons anyway. I'd be curious how the cost of insurance changes depending on the feedback you're giving.

> why it’s at least a non-obvious decision
>
> Will we provide feedback to rejected applicants in the future? Possibly, but I think this involves complex tradeoffs and isn't a no-brainer
>
> So I don’t think we should be doing this now, but I’m not saying that we won’t try to find ways to give more feedback in the future (see below).


Very much appreciate the considerate engagement with this. Wanted to flag that my primary response to your initial comment can be found here.

All this makes a lot of sense to me. I suspect some people got value out of the presentation of this reasoning. My goal here was to bring this set of considerations to your and Sam's attention and upvote its importance; hopefully it's factored into what is definitely a non-obvious and complex decision moving forward. Great to see how thoughtful you all have been, and thanks again!

Thanks for the response, and for being open to improving your process; I agree with many of your points about the importance of scaling teams cautiously.

I disagree that it's impossible to give constructive feedback on 1700 applications.

I could imagine FTX Future Fund having a couple of standardized responses, rather than just one. For example:

  1. Your application was rejected because, based on the information provided, it did not appear to be in scope for what we fund (link to the page that sets out what you fund)
  2. Your application appears to be in scope for what we fund. We weren't currently confident in the information provided about [theory of change / founding team / etc]. It might still be a good fit for another grantmaker. If you do decide to update that section, feel free to re-apply to a future round of funding.
  3. potentially a response for applications you think are an especially bad idea?

It seems many of the downsides of giving feedback would also apply to this.

I think lower resolution feedback introduces new issues too. For example, people might become aware of the schema and over-index on getting a "1. Reject" versus getting a "2. Revise and resubmit".

 

A major consideration is that I think some models of very strong projects and founders say that these people wouldn't be harmed by rejections.

Further considerations related to this (that are a little sensitive) are that there are other ways of getting feedback, and that extremely impactful granting and funding is relationship-based, not based on a single proposal or project. This makes sense once you consider that grantees are EAs and should have very high knowledge of their domains in EA cause areas.

Thanks to Sam and Nick for getting to this. I think it's very cool that you two are taking the time to engage. In light of the high esteem in which I hold both of you and the value of your time, I'll try to close the loop on this interaction by leaving you with one main idea.

I was pointing at something different from what I think was addressed. To distill what I was saying: >> Were FTX to encounter a strong case for non-negligible harms/externalities to community health that could result from the grantmaking process, what would your response to that evidence be? <<

The response would likely depend on a hard-to-answer question about how FTX conceives of its responsibilities within the community given that it is now the largest funder by far. 

Personally, I was hoping for a response more along the lines of "Oh, we hadn't thought about it that way. Can you tell us more? How do you think we get more information about how this could be important?" 

I was grateful for Nick's thoughtful answer about what's happening over there. I think we all hear what you're saying about chosen priorities, complexity of project, and bandwidth issues. Also the future is hard to predict. I get all that and can feel how authentically you feel proud about how hard the team has been working and the great work that's been done already. I'm sure that's an amazing place to be. 

My question marks are around how you conceive of responsibility and choose to take responsibility moving forward in light of new information about the reality on the ground. Given the resources at your disposal, I'd be inclined to view your answer through the lens of prioritization of options, rather than simply making the best of constraints.

As the largest funder in the space by far, it's a choice to be open to discovering and uncovering risk and harms that they didn't account for previously. It's a choice to devote time and resources to investigate them. It's a choice to think through how context shifts and your relationship to responsibility evolves. It's a choice to not do all those things. 

A few things that seem hard to wave away:

1) 1,600-1,650 (?) rejected applications from the largest and most exciting new funder, with no feedback, could be disruptive to community health

Live example: Established organization(s) got rejected and/or received far less than they asked for, with no feedback. Stakeholders asked the project leaders "What does it mean that you got rejected/got less than you asked for from FTX? What does that say about the impact potential of your project, quality of your project, fitness to lead it, etc.?" This can cause great instability. Did FTX foresee this? Probably not, for understandable reasons. Is this the effect that FTX wants to have? Probably not. Is it FTX's responsibility to address this? Uncertain.

2) Opaque reasoning for where large amounts of money goes and why could be disruptive to community health

3) (less certain regarding your M&E plans) Little visibility on M&E for applicants puts them in a place of not only not knowing what is good, but also not knowing how to tell whether they're doing well. Also potentially disruptive

In regards to the approach moving forward for FTX, I wouldn't be surprised if more reflection among the staff yielded more than 'we're trying hard + it's complex + bandwidth issues, so what do you want us to do?' My hope with this comment is to nudge internal discussions to be more expansive and reflective. Maybe you can let me know if that happened or not. I hope I've delivered this in a way that doesn't feel like an attack; if you feel including me in a discussion would be helpful, I'd love to be a part of it.

And finally, I'm not sure where the 'we couldn't possibly give feedback on 1700 applications' response came from. I mentioned feedback, but there are innumerable ways to construct a feedback apparatus that doesn't involve (what seemed to be assumed) the same level of care and complexity for each application. A quick example: 'stratified feedback', where FTX considers who the applicant is and gives varying levels of feedback depth depending on who they are. This could be important for established EA entities (as I mentioned above), where for various reasons you think leaving them completely in the dark would be actively harmful for a subnetwork of stakeholders. My ideal version of this would also include promising individuals who you don't want to discourage, but whose application wasn't successful for whatever reason.

Thanks for taking the time. I hope this is received well. 



 

I thought this was very well put, and what I particularly like about it is that it puts the focus on quality of process and communication rather than vague concerns about the availability of more money per se. For my part, I think it's awesome that FTX is thinking so ambitiously and committing to get money out the door fast, which is a good corrective to EA standard operating procedure to date and an even better corrective to more mainstream funding processes. And I think the initial rollout was really quite good considering this was the first time y'all were doing this and the goals mentioned above.

With that said, I think Tee's comments about attention to process are spot-on. Longer-term, I just don't see how this operation is effective in reaching its goals without a lot more investment in process and communication than has been made to date. The obvious comparison early on was to Fast Grants, but the huge difference there is that Fast Grants is a tiny drop in the bucket compared to the $40+ billion a year provided by the NIH, whereas FTX Foundation is well on its way to becoming the largest funder in EA. It was much more okay for Fast Grants to operate loosely because they were intervening on the margins of a very established system; FTX by contrast is very much establishing that system in real time.

I really don't think the answer here is to spend less or move more slowly. FTX has the resources and the smarts to build one of the very best grantmaking operations in the world, with wide-ranging and diverse sourcing; efficient due diligence focused like a laser on expected value; professional, open, timely and friendly communications with all stakeholders; targeted and creative post-hoc evaluation of key grants; and an elite knowledge management team to convert internal and external insights to decisions about strategic direction. This is totally in your reach! You have all the money and all the access to talent you need to make that happen. You just need to commit to the same level of ambition around the process as you have around the outcomes.

Also, I'm not trying to lay this all at FTX's doorstep. Hoping that raising this will fold into some of the discussions about community effects happening behind closed doors over there.

My understanding is that people were mostly speculating on the EAF about the rejection rate and distribution of $ per grantee. What might have caused the propagation of "free-spending" EA stories:

  • the selection bias at EAG(X) conferences, where there was a high % of grantees
  • the fact that FTX did not (afaik) release their rejection rate publicly
  • other grants made by other orgs happening concurrently (e.g. CEA)

I found this sentence in Will's recent post useful for shedding light on the rejection rate situation: "For example, Future Fund is trying to scale up its giving rapidly, but in the recent open call it rejected over 95% of applications."

I'm surprised that this is helpful, fwiw. My impression is that the denominator of who applies for funding varies a lot across funding agencies, and it's pretty easy to (sometimes artificially) inflate or deflate the rejection rate through, e.g., improper advertising/marketing to less suitable audiences, or insufficient advertising to marginal audiences.

Concretely, Walmart DC allegedly had a rejection rate of 97.4% in 2014, but overall we should not expect Walmart to be substantially more selective than Future Fund. 

Since your take-away is about undercommunication, please consider the tremendous value you could create by revising the "no feedback on rejected proposals" approach.

Rational case: You clearly create a lot of useful insight on projects in the review process you described here, and are in a superb position to guide applicants to value creation. You may identify weaknesses, red flags, strengths, alternative opportunities which the applicant might not realise. With a relatively small investment on your side you could share constructive feedback with rejected people, in turn creating a lot of downstream value at a low actual cost. A case can be made it would be rational to hire an additional full-time person (doesn't have to be an EA superstar) whose only job is to extract constructive feedback from the insights generated throughout the process.

Human, community-building case: You did say no feedback would be given, so one doesn't expect any. Even so, when one receives a response and finds that it really contains nothing they can use to improve, or even simply disagree with, it does very strongly, and unnecessarily, contribute to the feeling of resentment mentioned in Will MacAskill's recent post: https://forum.effectivealtruism.org/posts/cfdnJ3sDbCSkShiSZ/ea-and-the-current-funding-situation

I appreciate these clarifications! Thanks, Nick! 

Soliciting feedback on mistakes seems like a good idea. 

I would also be excited to see a progress update if that isn't super costly to produce. Though I might be more happy with granters prioritizing funding good projects over telling everybody what they're doing than the average EA forum reader.

Makes sense! We are aiming to post a progress update in the next month or so.

Do you (FTX grantmakers) do or reference a BOTEC (back-of-the-envelope calculation) for each grant? Would you publish the BOTECs you make, or comments on the BOTECs you reference?

Without this, it seems like EAs would often need to guess at and reconstruct your reasoning, or make their own models, in order to critique a grant, which is much more costly for individuals to do, is much less likely to happen at all, and risks strawmanning or low-quality critiques. I think this also gets at the heart of two concerns with "free-spending" EA: we don't know what impact the EA community is buying with some of our spending, and we don't have clear arguments to point to when justifying particular possibly suspicious expenses to others.

We tend to do BOTECs when we have internal disagreement about whether to move forward with a large grant, or when we have internal disagreement about whether to fund in a given area. But this is only how we make a minority of decisions.

There are certain standard numbers I think about in the background of many applications, e.g. how large I think different classes of existential risks are and modifiers for how tractable I think they are. My views are similar to Toby Ord's table of risks in The Precipice. We don't have standardized and carefully explained estimates for these numbers. We have thought about publishing some of these numbers and running prize competitions for analysis that updates our thinking, and that's something we may do in the future.

Considerations about how quickly it seems reasonable to scale a grantee's budget, whether I think the grantee is focused on a key problem, and how concrete and promising the plans are tend to loom large in these decisions.
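To illustrate what a simple BOTEC of this kind might look like, here's a minimal sketch with invented placeholder numbers; this is purely illustrative, not our actual model or estimates:

```python
# Illustrative back-of-the-envelope calculation (BOTEC) for an x-risk grant.
# Every number below is an invented placeholder, not an actual estimate.

grant_cost = 2_000_000            # dollars
p_project_succeeds = 0.30         # chance the funded work pans out at all
risk_reduction_if_success = 1e-6  # absolute reduction in existential risk

# Expected absolute x-risk reduction bought by the grant.
expected_risk_reduction = p_project_succeeds * risk_reduction_if_success

# Cost per "microdoom" averted (1 microdoom = 1e-6 of existential risk),
# a unit sometimes used informally in EA discussions.
microdooms_averted = expected_risk_reduction / 1e-6
cost_per_microdoom = grant_cost / microdooms_averted

print(f"Expected x-risk reduction: {expected_risk_reduction:.2e}")
print(f"Cost per microdoom averted: ${cost_per_microdoom:,.0f}")
```

Background numbers like the size and tractability of each risk class feed into the first two inputs; the comparison across grants then comes down to something like cost per unit of expected risk reduction.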

When I say that I'm looking for feedback about grants that were a significant mistake, I'm primarily interested in grants that caused a problem that someone could experience or notice without doing a fancy calculation. I think this is feedback that a larger range of people can provide, and that we are especially likely to miss on our own as funders.

Do you have standard numbers for net x-risk reduction (share or absolute) for classes of interventions you fund, too?

I did a lot of structured BOTECs for a different grant-making organization, but decided against sharing them with applicants in the feedback. The main problems were that one of the key inputs was a 'how competent are the applicants at executing on this', which felt awkward to share if someone got a very low number, and that the overall scores were approximately log-normally distributed, so almost everyone would have ended up looking pretty bad after normalization. 

I think that part of the model could be left out (left as a variable, or factored out of the BOTEC if possible), or only published for successful applicants.

Thanks for writing this – really appreciate it 😀 

Thanks Nick, interesting thoughts, great to see this discussion, and appreciated. Is there a timeline for when the initial (21 March deadline) applications will all be decided? As you say, it takes as long as it takes, but this has some implications for prioritising tasks (e.g. deciding whether to commit to less impactful, less scalable work being offered, and the opportunity costs of this). Is there a list of successful applications?

About 99% of applicants have received a decision at this point. The remaining 1% have received updates on when they should expect to hear from us next. Some of these require back-and-forth with the applicant, so we can't unilaterally conclude the process with all the info we need. And in some of these cases the ball is currently in our court.

We will be reporting on the open call more systematically in our progress update, which we'll publish in a month or so.

What I've heard from friends is that everyone's heard back now (either a decision, or an email update saying why a decision might take longer in their case). If you haven't heard anything I'd definitely recommend emailing the fund to follow up. I've known a couple of people who have needed to do this.

Please DM me if you submitted an application to our open call through our "Apply for Funding" form, but still haven't heard back from us (or are experiencing some other problem). Also, please note that if you filled out the "Expression of Interest" form, "Recommend a Prize" form, or "Recommend a Grant/Investment" form, we will get in touch with you only if we want to further explore your idea. 

As an applicant who was among those rejected, there are a few points I wanted to add:

  1. The fundamentals of this grant program are awesome - I can't stress this enough: a grant program of this size that is low red tape, rewards ambition, tolerates failure, and gives a quick decision is amazing. As you're processing critical feedback, please don't lose sight of the fact that the fundamentals of this program are awesome.
  2. One of the biggest impacts of this program might be how it influences other grant makers - I totally understand that you want to have a large impact with the money you're giving out, but I think your biggest positive impact could end up coming from inspiring other large grant makers. So many grant programs put up huge barriers to the exact types of people that are most likely to have a large impact. In the same way you were inspired by Fast Grants, if this program inspires others to streamline their application processes, that will be a huge win.
  3. This was your first attempt at this - Your team is moving quickly with bold experiments. Of course there will be lessons learned and course corrections, but your ambition combined with your humble posture and request for feedback is commendable.
  4. Fairness and prudence are important, but so is the emotional toll on the evaluators - It would be a major disappointment if your processes, while fair and thorough, turned out to be stressful and miserable for those evaluating the applications. In order for these calls to be sustainable, it's important that those doing the work actually enjoy it instead of dreading it. It would be a shame if this wasn't repeated because of how stressful it was for your team. This program is important! And it's important to consider the human side of the process. In an ideal world this would be a fun process for both the applicants and the evaluators rather than a stressful one.

This was a really important experiment, and I hope it was rewarding for your team.

Thanks for this post. The additional details about FTX's process, especially with regard to mitigating downside risks, are extremely reassuring to me.

Thanks, glad to hear it!

Small thing: the form has a typo you might want to correct: [screenshot not preserved]

My two cents about why people may be concerned about the decision-making process without having concrete details: take, for instance, the initially advertised decision timeline of two weeks. While I appreciate the fast pace and the benefits that come with it, a complex system of review and decision-making is almost impossible to achieve on that timeline, especially given the interest in the program.

Moreover, that deadline was not met for all projects, which is both good, because clearly more time was needed, and bad, because applicants' expectations were not met and they potentially needed to change their plans for their projects because of the delay. Additionally, it signals FTX's poor understanding of either its capacity or the complexity of the grantmaking process. Lack of either doesn't inspire a lot of confidence.

Thanks for the thoughts, Irena! It's true that there are some proposals that did not receive decisions in 14 days and perhaps we should have communicated more carefully.

That said, I think if you look at the text on the website and compare it with what's happening, it actually matches pretty closely.

We wrote:

"We aim to arrive at decisions on most proposals within 14 days (though in more complex cases, we might need more time).

  • If your grant request is under $1 million, we understand it, we like it, and we don’t see potential for major downsides, it’ll probably get approved within a week. 
  • Sometimes, we won’t see an easy path to finding a strong fit, and you’ll get a quick negative decision. 
  • Sometimes we’re just missing a little bit of information, and we’ll need to have a call with you to see if there’s a fit. 
  • Larger grants and grants that affect whole communities require more attention, and will have a customized process. 

We try to avoid processes that take months and leave grantees unclear on when they’re going to reach a decision."

It's true that we made decisions on the vast majority of proposals on roughly this timeline, and then some of the more complicated / expensive proposals took more time (and got indications from us about when they were supposed to hear back next).

> We try to avoid processes that take months and leave grantees unclear on when they’re going to reach a decision.
>
> It's true that we made decisions on the vast majority of proposals on roughly this timeline, and then some of the more complicated / expensive proposals took more time (and got indications from us about when they were supposed to hear back next).


The indication I got said that FTX would reach out "within two weeks", which meant by April 20. I haven't heard back since, though. I reached out eight days ago to make sure that my application or relevant e-mails hadn't been lost, but I haven't received an answer. :(

(I get that this is probably not on purpose, and that grant decisions take as long as they need to, but if I see an explicit policy of "we are going to reach out even if we haven't made a decision yet" then I'm left wondering if something has broken down somewhere and about what to do. It seems a good choice to try to reach out myself... and comment under this thread to provide a data point.)

Hi, thanks for raising this - could you send me a DM so I can try to figure out what's going on? 

I think I have figured out the situation with OP (and have responded to them), but if anyone else is in a similar situation, please DM me!

I confirm that this resolved. Thanks for the e-mail response!

I've heard this story from two of my friends as well, both of whom received answers a few weeks after they proactively reached out.

Team,

If this has already been answered, please let me know. I am interested in connecting to understand the next generation of requests. As I continue to read about the current process and some of the comments, I am led to believe that if there is a next round, we would like to offer input on a wholesale strategic approach. We found out about the FTX Fund after it closed, and when I met others in the segment we work in and compared what they submitted, everyone had proposed roughly the same thing, meaning that the FTX Fund could act as a coordinating agency to bring together multiple people thinking along the same lines. I can only assume you are doing this now, but we would love to participate in a more standardized approach. How do we connect with you to discuss this further?
