We (Open Phil) are seeking applications from grantees affected by the recent collapse of the FTX Future Fund (FTXFF) who fall within our longtermist focus areas (biosecurity, AI risk, and building the longtermist EA community). If you fit the following description, please fill out this application form.

We’re open to applications from:

  • Grantees who never received some or all of their committed funds from FTXFF.
  • Grantees who received funds, but want to set them aside to return to creditors or depositors.
    • We think there could be a number of complex considerations here, and we don’t yet have a clear picture of how we’ll treat requests like these. If in doubt, we’d encourage you to apply, but to avoid assuming that you’ll be funded (or assuming what we’ll ultimately conclude is the right thing to do in your case). (Additionally, we’re unsure whether there will be legal barriers to returning funds.) That said, we’ll do our best to respond to urgent requests quickly, so you have clarity as soon as possible.
  • Grantees whose funding was otherwise affected by recent events.[1]

Please note that this form does not guarantee funding. We intend to evaluate applications using the same standard as we would if they were coming through one of our other longtermist programs — we will evaluate whether they are a cost-effective way to positively influence the long-term future. As described in Holden’s post, we expect our cost-effectiveness “bar” to rise relative to what it has been in the past, so unfortunately we expect that some of the applications we receive (and possibly a sizeable fraction of them) will not be successful. That said, this is a sudden disruptive event and we plan to take into account the benefits of stability and continuity in our assessment.

We’ll prioritize getting back to applicants who indicate time sensitivity and whose work seems highly likely to fall above our updated bar. If we’re unsure whether an application is above our new bar, we’ll do our best to get back to you within the indicated deadline (or within 6 weeks, if the application isn’t time-sensitive), but we may take longer as we reevaluate where the bar should be.

We’re aware that others may want to help out financially. If you would like to identify yourself as a potential donor either to this effort or to a different one aimed at impacted FTXFF grantees, you can get in contact with us at inquiries@openphilanthropy.org. We sincerely appreciate anyone who wants to help, but for logistical reasons we can only respond to emails from people who think there’s a serious chance they’d be willing to contribute over $250k.

  1. ^

     We’ve left an option on the form to explain specific circumstances – we can imagine many ways that recent events could be disruptive. (For example, if FTXFF had committed to funding a grantee that planned to regrant some of those funds, anyone anticipating a regrant could be affected.)

Comments (20)



Really excited to see this initiative!

Should grantees with significant runway apply for this (i.e., they lost out on money, but this mostly cut into future runway and won't really affect things for the next few months), or would you like to reserve this for grantees with urgent need?

Also, has Open Phil considered guaranteeing to cover some/all grantees' clawbacks? This seems like it might be reasonably cheap in expectation, and could save a lot of people from stressful distractions and unnecessary conservatism (but I imagine it also has a bunch of legal consequences and possibly significantly increases the risk of clawbacks!)

(I work at Open Phil assisting with this effort.)

  1. Any grantee who is affected by the collapse of FTXFF and whose work falls within our focus areas (biosecurity, AI risk, and community-building) should feel free to apply, even if they have significant runway.

  2. For various reasons, we don’t anticipate offering any kind of program like this, and are taking the approach laid out in the post instead. Edit: We’re still working out a number of the details, and as the comment below states, people who are worried about this should still apply.

On the second paragraph, I don't think it has been established that insuring clawback risk would be cheap in expectation. Also, in many cases, insurance increases the risk of a lawsuit or reduces the other side's willingness to settle. 

For example, suppose I think someone negligently breaks my leg and I am considering whether it is worthwhile to litigate. If I find out the person is an uninsured second-year philosophy grad student, I probably won't bother -- collecting any judgment will be very difficult, and a rational judgment debtor will just file for bankruptcy and get the debt wiped anyway. If the person is a middle-class tourist from the UK, I will probably find it worthwhile to sue if the damages are big enough and if I think there is a good enough chance I could collect on any judgment. Now, if I know that either of these people were insured, I am much more likely to sue, and I am not going to be willing to reduce a settlement demand based on doubts about collectability.

However, I think there is a good idea here. I would suggest there is a legal clawback risk and a "moral clawback risk" -- that the grantee will be (and/or feel) ethically obliged to return some or all of the money even though not legally required to. Replacing some or all of the FTX-aligned funds implicitly weights the combined legal + moral clawback risk at 100%. Although offering insurance has some downsides, it does seem less expensive than unconditionally replacing the funds.

So there may be some conditions under which a funder would find it worthwhile to provide clawback insurance even though it would not fund the project outright.

That sounds relatively straightforward when it comes to legal clawback risk; it is interesting to consider how "insurance" might work if the "insurer" were willing to provide some coverage for moral clawback risk. For those with certain beliefs about moral clawback, the grantee's moral-clawback obligation or non-obligation can already be determined, so an "insurance" paradigm makes no sense. But for some of us, the nature and extent of a moral-clawback obligation depends on information that is not yet known.

Would the "insurer" decide the extent to which the ultimately determined facts create an ethical obligation to return funds? Or perhaps the "insurer" would offer 100% coverage for legal clawback risk and would allow the "insured" to decide moral clawback subject to a coinsurance requirement. For example, the "insurer" might only match the amount the "insured" was willing to voluntarily return out of its own pocket.  That would address concerns that grantees may be too quick to find a moral-clawback obligation if the cost is coming entirely out of someone else's pocket, and reduce the cost of providing "insurance."
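To make the coinsurance idea concrete, here is a minimal sketch of the matching arithmetic under the setup described above (the function name, the match_rate parameter, and all dollar figures are hypothetical, invented purely for illustration):

```python
def insurer_contribution(voluntary_return: float, legal_clawback: float,
                         match_rate: float = 1.0) -> float:
    """Hypothetical illustration of the coinsurance idea sketched above.

    The "insurer" covers the legal clawback in full, and matches the
    grantee's voluntary ("moral") return at match_rate (1.0 = dollar for
    dollar). All names and numbers are invented for illustration only.
    """
    return legal_clawback + match_rate * voluntary_return


# Example: a $50k legal clawback plus a grantee voluntarily returning $20k
# of its own money would draw $50k + $20k = $70k from the insurer,
# for $90k returned in total.
print(insurer_contribution(voluntary_return=20_000, legal_clawback=50_000))
```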

If a grant / grantee is doing work which aligns with Open Phil's work, but is more properly classified as global health or animal welfare, can they still apply here, should they apply in some other way, or is Open Phil not the correct vehicle?

(I work at Open Phil on Effective Altruism Community Building: Global Health and Wellbeing)

Our understanding is that only a small proportion of FTXFF’s grantees would be properly classified as global health or animal welfare. Among that subset, there are some grantees who we think might be a good fit for our current focus areas and strategies. We’ve reached out individually to grantees we know of who fit that description.

That being said, it’s possible we’ve missed potential grantees, or work that might contribute across multiple cause areas. If you think that might apply to your project, you can apply through the same form.

This is wonderful news. Thank you very much for getting that up and running.

You may want to also consider the situation where an organisation doesn't want to pay employees with funds that could potentially be clawed back or that could be seen as morally tainted (depending on what information we find out).

(I work at Open Phil assisting with this effort.)

We think that people in this situation should apply. The language was intended to include this case, but it may not have been clear.

I strongly suspect there are legal reasons that covering future clawbacks, especially if they say so explicitly, is not going to be workable, or at least is significantly more complex / dangerous legally.

This sort of falls under the second category, "Grantees who received funds, but want to set them aside to return to creditors or depositors." At least that's how I read it, though the more I think about it the more this category is kind of confusing and your wording seems more direct.

I think it'd be preferable to explicitly list as a reason for applying something along the lines of "Grantees who received funds, but want to set them aside to protect themselves from potential clawbacks". 

Less importantly, it'd possibly be better to make it separate from "to return to creditors or depositors". 

How quickly should grantees impacted by recent events apply to this call? Is there a hard or soft deadline for these applications? I have to decide how much time I should invest in adapting, updating, and improving the previous application. I assume you want applicants to attach a proposal detailing the planned projects, the project's pathway to impact, and evidence of its chances to succeed.

The application form is actually really restrictive once you open it -- when I filled it out, it explicitly instructed applicants not to write any new material and to attach only material that had already been sent to FTXFF, and it only had a <20 word box and a <150 word box for grant descriptions. Today when I open the form even those boxes have disappeared. I think it's meant to be a quite quick form, where they'll reach out for more details later.

Thank you so much for pointing that out, Vael! I had completely overlooked that information. That's really helpful to know.

Looking back five months later, can you say anything about whether this program ended up making grants, and if so how much/how many? Thanks!

I have some clarification questions about the form:

1. Does "total grant amount" refer to the amount we requested or the amount we were promised?

2. Does "amount that has been committed but not received yet" refer to a) the amount that the grantor promised but did not pay out or b) project-related financial obligations and expenditures of the grantee, such as the salaries of people working on the project, that would have been paid from the grant?

  1. This refers to the amount you were promised from FTXFF.
  2. This refers to the amount that was promised, but hasn’t been paid out.

Thank you very much for answering my questions. :)

I recently filled out the Airtable form, but was surprised to see when I got my e-mail receipt that many of the answers I provided did not appear.

How would you suggest that I and others affected by this proceed? Thanks!

[edit: extraneous information removed]

(I work at Open Phil assisting with this effort.)

Thanks for pointing this out; it looks like there was a technical error which excluded these from the email receipt, which we've now fixed. The information was still received on our end, so you don't need to take any extra actions.
