It appears that FTX, whose principals support the FTX Foundation, is in serious trouble. We’ve been getting a lot of questions related to these events.
Edited to add (Nov. 13): based on continuing to follow coverage of this situation, I now think it’s very likely that FTX engaged in outrageous, unacceptable fraud. I am furious at the behavior of FTX leadership. I’m going to take some time to reflect on what this means for effective altruism and the effective altruist community. I’m not sure whether or when I will write up more detailed thoughts, so for now I will just point to a few statements by others whose general sentiments I resonate with:
- Twitter threads by Rob Wiblin, Dustin Moskovitz, and Will MacAskill
- Evan Hubinger’s excellent post, We must be very clear: fraud in the service of effective altruism is unacceptable
I’ve made an attempt to get some basic points out quickly that might be helpful to people, but the situation appears to be developing rapidly and I have little understanding of what’s going on, so this post will necessarily be incomplete and non-authoritative.
One thing I’d like to say up front (more on how this relates to FTX below) is that Open Philanthropy remains committed to our longtermist focus areas and still expects to spend billions of dollars on them over the coming decades. We will raise the bar for our giving, and we don’t know how many existing projects that will affect, but we still expect longtermist projects to grow in terms of their impact and output.
Are the funds directed by Open Philanthropy invested in or otherwise exposed to FTX or related entities?
The FTX Foundation has quickly become a major funder of many longtermist and effective altruist organizations. If it stops (or greatly reduces) funding them, how might that affect Open Philanthropy’s funding practices?
If the FTX Foundation stops (or greatly reduces) funding such people and organizations, then Open Philanthropy will have to consider a substantially larger set of funding opportunities than we were considering before.
In this case, we will have to raise our bar for longtermist grantmaking: with more funding opportunities that we’re choosing between, we’ll have to fund a lower percentage of them. This means grants that we would’ve made before might no longer be made, and/or we might want to provide smaller amounts of money to projects we previously would have supported more generously.
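As a toy illustration of the mechanics here (the numbers and structure are invented for the example; as noted below, we don’t have a crisp formalization of the bar), the point is that with a fixed budget, a larger pool of opportunities raises the cost-effectiveness cutoff for the marginal grant:

```python
# Toy sketch: "the bar" as the cost-effectiveness of the marginal funded grant.
# All numbers are invented for illustration.
def effective_bar(opportunities, budget):
    """opportunities: (cost, impact_per_dollar) pairs; fund best-first."""
    bar = None
    for cost, impact in sorted(opportunities, key=lambda o: -o[1]):
        if budget >= cost:
            budget -= cost
            bar = impact  # cost-effectiveness of the last grant funded
    return bar

before = [(1, 10), (1, 8), (1, 6), (1, 4)]
after = before + [(1, 9), (1, 7), (1, 5)]  # new opportunities enter the pool

print(effective_bar(before, budget=3))  # -> 6
print(effective_bar(after, budget=3))   # -> 8: same budget, higher bar
```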
Does Open Philanthropy also need to raise its bar in light of general market movements (particularly the fall in META stock) and other factors?
- Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna’s net worth give a substantially understated picture of our available resources. That’s because, among other issues, they don’t include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume.) Dustin and Cari still expect to spend nearly all of their resources in their lifetimes on philanthropy that aims to accomplish as much good per dollar as possible.
- Additionally, the longtermist community has been growing; our rate of spending has been going up; and we expect both of these trends to continue. This further contributes to the need to raise our bar.
As stated above, we remain committed to our focus areas and still expect to spend billions of dollars on them over the coming decades.
So how much might Open Philanthropy raise its bar for longtermist grantmaking, and what does this mean for today’s potential grantees?
We don’t know yet — the news about FTX was sudden, and we’re working to figure things out.
It’s a priority for us to think through how much to raise the bar for longtermist grantmaking, and therefore what kinds of giving opportunities to fund. We hope to gain some clarity on this in the next month or so, but right now we’re dealing with major new information and don’t have a lot to say about what it means. It could mean reducing support for a lot of projects, or for relatively few.
(We don’t have a crisp formalization of “the bar”; instead we have general guidance to grantmakers on what sorts of requests should be generously funded vs. carefully considered vs. rejected. We need to rethink and revise this guidance.)
Because of this, we are pausing most new longtermist funding commitments (that is, commitments within Potential Risks from Advanced Artificial Intelligence, Biosecurity & Pandemic Preparedness, and Effective Altruism Community Growth) until we gain more clarity, which we hope will be within a month or so.
This is a temporary pause as we try to reorient our thinking. There are many potential grantees whom we expect to ask to wait a month or so, but whom we are likely to fund within the next three months. It’s not an absolute pause: we will continue to do some longtermist grantmaking, mostly when it is time-sensitive and seems highly likely to end up above our bar (this is especially likely for relatively small grants). Our existing calls for applications will remain open by default; we will just hold off on evaluating most incoming applications while the pause is in effect.
We’ll also be honoring existing commitments and providing funding that’s needed to avoid costly disruptions to core grantees’ work.
Will Open Phil support FTX Foundation grantees who have financial needs related to these events?
Open Phil will consider grantees whose work falls in one of our focus areas, and evaluate them alongside other opportunities. As mentioned above, we are temporarily pausing most longtermist funding, but continuing to evaluate time-sensitive asks.
How does this impact Open Philanthropy’s Global Health and Wellbeing work?
Given FTX Foundation’s focus on existential risk and longtermism, the most direct impacts are on our longtermist work. We don’t anticipate any immediate changes to our Global Health and Wellbeing work as a result of the recent news.
What do you think of allegations that FTX engaged in fraud and/or other unethical behavior?
I don’t understand the situation very well (and I have no special insight into it – I’ve read the same tweets and news stories as everyone else), and it doesn’t seem that all the facts are in. I will be following the situation as it develops.
Edited to add (Nov. 13): I now think it’s very likely that FTX engaged in outrageous, unacceptable fraud, as now noted at the top of this piece.
Separate from the details of the FTX situation, do you think that fraud could be justified if it raises huge amounts of money for good causes?
I dislike “ends justify the means”-type reasoning. The version of effective altruism I subscribe to is about being a good citizen, while ambitiously working toward a better world. As I wrote previously, I think effective altruism works best with a strong dose of pluralism and moderation.
I think this is a common approach to effective altruism, e.g. it is consistent with the effectivealtruism.org intro to effective altruism (where I got the language “being a good citizen, while ambitiously working toward a better world”) and the Centre for Effective Altruism’s Guiding Principles (see “Integrity”). (Also see this post by Eliezer Yudkowsky.)
Thank you for a good and swift response, and in particular, for stating so clearly that fraud cannot be justified on altruistic grounds.
I have only one quibble with the post: IMO you should probably increase your longtermist spending quite significantly over the next ~year or so, for the following reasons (which I'm sure you've already considered, but I'm stating them so others can also weigh in):
- IIRC Open Philanthropy has historically argued that a lack of high-quality, shovel-ready projects has been limiting the growth in your longtermist portfolio. This is not the case at the moment. There will be projects that 1) have significant funding gaps, 2) have been vetted by people you trust for both their value alignment and competence, and 3) are not only shovel-ready, but already started. Stepping in to help these projects bridge the gap until they can find new funding sources looks like an unusually cost-effective opportunity. It may also require somewhat less vetting on your end, which may matter more if you're unusually constrained by grantmaker capacity for a while.
- Temporarily ramping up funding can also be justified by considering likely flow-through effects of acting as an "insurer o...
I want to push back on this a tiny bit. Just because some projects got funding from FTX, that doesn't necessarily mean Open Phil should fund them. There are a few reasons for this:
- When FTX Future Fund was functioning, there was lots more money available in the ecosystem, hence (I think) the bar for receiving a longtermist grant was lower. This money is now gone, and lots of orgs that got FTX funding might not meet OP's bar / the new bar we should have, given fewer resources. So basically I don't think it's sufficient to say 1) they have significant funding gaps, 2) they exist, and 3) they've been vetted by people you trust. IMO you need to prove that they're also sufficiently high-quality, which might not be true, as FTX was vetting them with a different bar in mind.
I agree with all three people:
Agree with many of the considerations above - the bar should probably rise somewhat after such a funding shortfall. One way to solve it in practice could be to sit down in a room with the old FTX FF team and ask "which XX% of your grants are you most enthusiastic about and why", and then (at least as an initial hypothesis; possibly requiring some further vetting) plan to fund those. The generalized point I'm trying to make is twofold: 1) quite a bit of judgement already went into assessing these projects, and it should be possible to use that to decide how many of them are above the bar; and 2) because all the other input factors (talent, project idea, vetting) are unchanged, and assuming a standard shape of the EA production function, the marginal returns to funding should now be unusually high.
And David is right that (at least under some reasonable models) if you can predict that your bar will fall in the future, you should probably lower it already. I'm not exactly sure what the requirements would be for the funding bar to have a Martingale property (e.g., does it require some version of risk neutrality, or specific assumptions about the shape of the impact dist...
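For concreteness, one minimal way to state the martingale property being gestured at (a formalization I'm supplying, not something from the thread, with b_t as the funding bar at time t and F_t as the information available then):

```latex
% b_t: the funding bar at time t; \mathcal{F}_t: information available at time t.
\[
  \mathbb{E}\left[\, b_{t+1} \mid \mathcal{F}_t \,\right] = b_t
\]
```

On this reading, predicting that the expected future bar is below today's bar is exactly the situation where lowering the bar now looks better than waiting; whether that argument goes through presumably does depend on risk attitudes and the shape of the impact distribution, as the comment suggests.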
Thank you for this timely and transparent post, and for all the additional work I'm sure your team is shouldering in response to this situation.
With Giving Tuesday and general end-of-year giving on the horizon, I think any indication from OPP of new anticipated funding gaps would be useful to the EA community as a whole. It would also be helpful to get a sense as soon as the information is available of what the overall cause area funding distribution in EA is likely to look like after this week.
Thanks for the post, I appreciate the clarity it brings.
Would it not make sense for Open Phil to shift some of its neartermist/global health funds to longtermist causes?
Although any neartermist:longtermist funds ratio is, in my opinion, fairly arbitrary, this ratio has increased significantly following the FTX event. Thus, it seems to me that Open Phil should maybe consider acting to rebalance it.
(I'd be curious to hear a solid counterargument.)
Did Open Phil shift funds away from longtermist causes when FTX funds became available?
The FTX Future Fund launched in Feb 2022, and Open Phil was hiring for a program officer for their new Global Health and Wellbeing program in Feb 2022 (see here).
For context, all FTX funds go/went to longtermist causes. Open Phil currently has two grantmaking programs (see here): one in Longtermism, and the Global Health and Wellbeing program, which launched (I assume) around Feb '22.
So my guess, though I'm not certain, is that the launches of FTX Future Fund and Open Phil's Global Health and Wellbeing program were linked, and that Open Phil did increase its neartermist:longtermist funding ratio when FTX funds became available.
OPP was making grants in the Global Health and Wellbeing space (which includes animal welfare) long before this.
The data exist in their grants database — it doesn't look to me like there was any shift away from longtermism that coincided with SBF/FTX entering the space (if anything, it looks like the opposite could be true in 2022).
Credit to Tyler Muale for data collection
(It's interesting to note that, at present, my above comment is on -1 agreement karma after 50 votes. This suggests that the question of rebalancing the neartermist:longtermist funding ratio is genuinely controversial, as opposed to there being a community consensus either way.)
It looks like Open Phil's approach to this is to evaluate all programs (neartermist and longtermist) against cash transfers (they use an internal 'unit of impact', I think, to try to compare all programs). As I understand it, any program funded by Open Phil needs to beat this standardised bar - it isn't that they actually believe longtermist projects are much more impactful but still grant to neartermist causes for political reasons or similar. Or, to put it another way, the bar in neartermist funding isn't lower than it is in longtermist funding.
Accordingly, they wouldn't necessarily reallocate funds from one place to another in advance, but if the new longtermist applications seem to be more impactful in expectation than the neartermist ones, they might choose to fund more longtermist programs than neartermist ones going forward.
For what it's worth, things like the GiveWell charities actually perform extraordinarily well in this analysis, so my prior is that FTX-funded projects won't outperform them significantly in Open Phil's evaluation, and so won't lead to a reallocation of funding.
I've read this comment a few times, and I don't understand what it is saying.
Just to make it more concrete, suppose that the estimated cost-effectiveness figures for global health (G) and speculative longtermist (L) projects look like this:
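For instance, a toy version (the multiples below are hypothetical, expressed relative to direct cash transfers):

```python
# Hypothetical marginal cost-effectiveness, in multiples of cash transfers.
# Each list is ordered best-first; each entry is the next grant in that area.
G = [15, 12, 10, 8, 7]   # global health
L = [40, 20, 9, 4, 2]    # speculative longtermist

bar = 10  # fund anything estimated to beat 10x cash transfers

funded = [("G", x) for x in G if x >= bar] + [("L", x) for x in L if x >= bar]
print(funded)  # -> [('G', 15), ('G', 12), ('G', 10), ('L', 40), ('L', 20)]
```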
Then what are you saying applies to this list of marginal impacts?
Question for Holden Karnofsky:
What do EA and Holden Karnofsky think of the claim by Kerry Vaughan that Sam Bankman-Fried engaged in severely unethical behavior in the past, and that EA and FTX covered it up and laundered his reputation, effectively letting him get away with it?
I'm posting because, if true, this suggests big changes to EA norms are necessary to deal with bad actors like him, and that Sam Bankman-Fried should be outright banned from the forum and EA events.
Link to tweets here.
Comment on the phrasing but not the substance of what you're saying:
IMO, "malevolent" is a bad phrase for what I think you might mean. (To my ears, "malevolent" has the connotations of wanting to do something bad for consciously-selfish reasons or wishing bad things upon others. That's different from being very strategic about one's actions in interpersonal situations, being comfortable with lying, etc.)
(FWIW I found "dark triad" more jarring and skepticism-provoking than if you'd just said "malevolent", since I take it more seriously as a contentful attempt at psychological diagnosis, and therefore not the kind of thing I expect to be casually dropped into an otherwise-unrelated comment.
If you want a vaguer term, some common options include "bad actors" or "people acting in bad faith". "Dark-triad-ish people" would also have made more sense to me and made me way less skeptical on a first read.)
Importantly, in 2007 the OP engaged in "anonymous and deceptive online promotion" as part of their efforts to promote GiveWell, while being GiveWell's executive director (after being caught, they were demoted to program officer).
(If no one mentioned this here, I would consider it to be evidence for lack of integrity of the EA community.)
It's probably worth noting that Holden has been pretty open about this incident. Indeed, in a talk at a Leaders Forum around 2017, he mentioned it precisely as an example of "ends justify the means"-type reasoning.
It's also listed under GiveWell's Our Mistakes page.
I don't mean to endorse Holden's actions - they were obviously ill-judged - but this reads as pretty lightweight stuff. He posted a few anonymous comments boosting GiveWell? That is so far away from what it increasingly looks like SBF is responsible for - multi-billion dollar fraud, funneling customer funds to a separate trading entity against trumped-up collateral, and then running an insolvent business, presumably waiting for imminent Series C funding to cover the holes.
The fact that some FTX people did terrible stuff presumably doesn't mean that we should lower our standards; so I'm not sure what the point of the comparison here is.
We don't want to shrug our shoulders at all bad behavior that falls short of multibillion-dollar fraud, and I took ofer to be making a local point "be mindful that other EAs have screwed up on honesty, and don't treat us (or specifically Holden) like flawless authorities here even if we're community leaders giving confident moral advice", not drawing an equivalence between FTX and the GiveWell astroturfing.
It's a mistake, but I don't think an egregious one, and he's owned it ever since. I think you are being a bit prim. People make mistakes, and learn from them - that's life. This was 15 years ago, and he's done an awful lot of good since. I don't know why you think publicly dragging him through the mud is right or helpful.
Are you Holden? (sorry, couldn't resist)
Yes. To be clear, I agree with you re Holden's mistake not being egregious, and him learning from it and doing a lot of good after etc. Was aiming at a little comedic relief. [I feel like we need emoji reacts here.]
Do you think OP should have a disclaimer about this incident in perpetuity?
If not, it's been 15 years. When do you propose the cutoff would be?
I am curious about the impact on allocating funding between worldviews. The substantial reduction in longtermist funding should raise the value of the marginal longtermist grant, and thus change the optimal allocation between longtermism, global health, and animals. But does the worldview-diversification type approach preclude this sort of reallocation as the funding situation in a cause area changes?
Not the OP, but my sense is that the worldview diversification edifice is really not set up to deal with this kind of situation.
I feel a bit disturbed that there doesn't seem to be an apology here.
I had previously assumed that Open Philanthropy had responsibility for overseeing much of the SBF-EA connection and promotion.
Can you please make it clear if you feel like Open Philanthropy had any responsibility for the situation? Was Open Phil "owning" the responsibility? Was someone else?
Also, I just want to flag that I really like the vote/agreement system used here. Seems like people thought the question was useful (I assume), and generally think that Open Philanthropy didn't have responsibility here. That seems good to know!
If they were coupled, I would probably have felt more attacked.
Why did you assume this? Serious question. I was under the (perhaps incorrect) impression that Open Phil doesn't consider itself responsible for overseeing the EA community.
To me some of the actors who seem like they should have had relevant responsibility here are CEA, 80K, and senior staff at the FTX Future Fund before they joined it.
SBF was a board member of CEA, a previous employee/friend, and, I believe, a major funder. 80k was sponsored by CEA and really doesn't seem well placed to be making calls like this.
Also, generally, more of the "very senior and trusted EAs" seem to be at Open Philanthropy.
Open Philanthropy has been in charge of funding (including groups like CEA), so they generally seem like the most high-up and ultimately responsible org. The relationship with FTX was about as large a project as we had in EA, so I assumed the institution with the most power and authority was handling or overseeing it to some extent.
I wrote about the future fund in my other comment.
80K promoted SBF uncritically to a large audience and highlighted him as a positive example for years (while also being well placed to know about the 2018 Alameda blowup) so I think it's fair to say that they have some non-zero level of responsibility in the EA-SBF connection and promotion.
I see. Thanks for sharing. I think it's good to find out what expectations people had of different actors.
My expectations were that Open Phil is a family foundation with very large overlaps with the EA community and its interests including funding some parts of it, but it's not fundamentally an actor with responsibility over the EA community's decision making, especially nebulous and complex things like EA's connections with a different billionaire. A lot of people the EA community considers leaders are at Open Phil, but I consider that pretty different from Open Philanthropy as an organization having responsibility for EA decision making. I'm not sure what, if anything, it should have done differently in this case.
Since writing this, I've realized that there probably is a lot more legal consideration regarding these announcements than I initially realized.
"Responsibility" is easily a legal term, so seems potentially hazardous to write about online, in this sort of situation. One of the absolute last things I want to see now would be the other EA funders having to get involved in a prolonged legal conflict.
If anything like this is the case, it could be safe to delay this sort of discussion until much later.
I really wish some of the key questions about this situation could be publicly figured out sooner, but here other things might likely take higher precedence.
All that said, after the key immediate disasters are tied up, I would be very interested in some discussion of which orgs held responsibility for this situation, if any did. I think work here could really help make/secure trust in these organizations. (This might be very obvious)
Surely it's the people working for the FTX Foundation who were the connection between FTX and EA.
I think the Future Fund AI Worldview Prize is (was?) pretty critical for helping to determine resource allocation in EA. Can OpenPhil commit to seeing it through to completion? (Perhaps offering smaller prizes.)
It seemed to me like the way the prize was presented and constructed was aimed at specifically changing Nick Beckstead's views without giving much consideration for being universally convincing. Given that he's stepped down from the Future Fund, why do you think the prize is critical?
Because, to a first approximation, most of the leading EA grantmakers have the same views as Beckstead on this (indeed, Beckstead was in charge of longtermist grantmaking at OpenPhil before the Future Fund).
Maybe the majority of the top 5 grantmakers by size of pot they control? The mainstream view amongst the largest grantmakers seems to be that doom won't happen by default (following e.g. Carlsmith's report), whereas I share the opposite intuition (as do you I think).
Writing since I haven't seen this mentioned elsewhere: it seems like it might be a good idea to do (and announce that you are doing) a rapid evaluation of grantee organizations that received a majority of their funding from FF, in order to provide emergency funding to the most promising and avoid the loss of institutions. If this is something OP plans on doing, it should do so quickly and unambiguously.
I'm imagining something like: a potentially important org has lost its funding, and employees will soon begin looking for and accepting other opportunities. If they do leave, it could be very difficult to get them back or find suitable replacements. If whole organizations cease operations, it could set back work in their areas substantially: momentum will be lost, future organizations will have to deal with answering why this similar org didn't work out, the ability to make credible commitments in the org's given field will be at risk if they suddenly drop projects, and institutional knowledge will be lost. This would be similar to how other countries supplemented employee salaries during the pandemic, instead of the US's unemployment insurance approach.
Also, for disclosure: I haven't received any FF funding, nor do I work in an org that did.
I'm very curious about whether and which other smaller funders will fill in gaps from the FTX Future Fund in the next one to three months.
This is very helpful and transparent.
Thank you for sharing this with the community and emphasizing the role of integrity for effective altruists.
Are you able to clarify how many resources are already in foundations? (And would that be Open Phil and Good Ventures, or is the bulk of the money that Open Phil "has" technically the money that's in Good Ventures?)
Nathan Young et al. forecasted the following here on November 8th:
(I haven't read all the comments yet, so forgive me if this has already been asked.)
Brilliant post, and much needed. Thank you.
Effective Altruism will need a rebranding. I anticipate it will be challenging to discuss the topic without SBF/FTX coming up and I'm afraid it will discourage new participants.
I strongly disagree: first, because this is dishonest and dishonorable. And second, because I don't think EA should try to have an immaculate brand.
Indeed, I suspect that part of what went wrong in the FTX case is that EA was optimizing too hard for having an immaculate brand, at the expense of optimizing for honesty, integrity, open discussion of what we actually believe, etc. I don't think this is the only thing that was going on, but it would help explain why people with concerns about SBF/FTX kept quiet about those concerns: either they were worried about sullying EA's name, or they were worried about social punishment from others who didn't want EA's name sullied.
IMO, trying super hard to never have your brand's name sullied, at the expense of ordinary moral goals like "be honest", tends to sully one's brand far more than if you'd just ignored the brand and prioritized other concerns. Especially insofar as the people you're trying to appeal to are very smart, informed, careful thinkers; you might be able to trick the Median Voter that EA is cool via a shallow PR campaign and attempts to strategically manipulate the narrative, but you'll have a far harder time trickin...
Rob - I strongly agree with this.
Every Fortune 500 company, sooner or later, faces some massive PR crisis. Very few change the name of the company, their brands, or their products. It's worth thinking about why they don't.
Partly this is because of the recognition heuristic: much of the value of the company and brand is simply in the name recognition in the minds of consumers, investors, suppliers, and workers, even apart from the emotional valence (positive or negative) attached to the company/brand.
EA has built up a moderate amount of recognition worldwide as a 'brand' of ethical thinking and cause prioritization. If we abandon the EA name, we lose the recognition benefits in millions of brains.
Valences attached to a name (like EA) fluctuate a lot over time, but recognition tends to remain. Remember in the 1990s, Microsoft and Apple were widely vilified for anti-competitive practices, but they're still both leading tech companies with largely positive associations. Political parties can be tarnished by corrupt or incompetent leaders, but their name recognition remains.
Rebranding in response to a scandal suggests an attempt to brush the issue under the rug without dealing with the underlying problems. Surely you want to be able to respond “this is how we changed to prevent that happening again,” not “we were hoping you wouldn’t remember that”?
Does OpenPhil have proof of reserves? Seems like it would be good reassurance for the EA community to see that significant funds are under independent legal control from their source (which was not the case with the FTX Foundation!)
The assets of the Good Ventures Foundation (to which Open Phil recommends its grants) are a matter of public record (albeit delayed). They had more than $3bn in June 2020.
Very glad you're emphasizing that last question! I can easily see the narrative shift from 'SBF/FTX did unethical stuff' to 'EA people think the ends always justify the means', even though shallow utilitarian calculus that ignores all second-order effects rarely holds up (e.g. if it were normalized for doctors to kill patients whenever they could save more lives by harvesting their organs, the result would be a paranoid dystopia where everyone fears hospitals; even the purest of utilitarians shouldn't support this).
However, for someone less familiar with EA this overgeneralization...
Forgive me if there's a structural reason why this wouldn't work. But why weren't you saving a larger share of the money coming in, to provide a buffer in case funding dropped off for whatever reason? Seems like part of the underlying issue here was assuming that funding levels would remain constant in the future.
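One sketch of what such a buffer could look like, assuming a standard endowment-style smoothing rule (parameters invented for illustration; nothing here reflects Open Phil's actual policy):

```python
# Toy smoothing rule: grant a fixed fraction of current assets each year,
# so a sudden loss of inflows becomes a gradual decline, not a cliff.
def annual_grants(assets, payout_rate=0.08):
    return assets * payout_rate

assets = 1000.0
for year, inflow in enumerate([100, 100, 0, 0, 0]):  # inflows stop in year 2
    assets += inflow
    grants = annual_grants(assets)
    assets -= grants
    print(year, round(grants, 1))
# -> grants fall smoothly (88.0, 89.0, 81.8, 75.3, 69.3) rather than abruptly
```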