
It appears that FTX, whose principals support the FTX Foundation, is in serious trouble. We’ve been getting a lot of questions related to these events.

Edited to add (Nov. 13): based on continuing to follow coverage of this situation, I now think it’s very likely that FTX engaged in outrageous, unacceptable fraud. I am furious at the behavior of FTX leadership. I’m going to take some time to reflect on what this means for effective altruism and the effective altruist community. I’m not sure whether or when I will write up more detailed thoughts, so for now I will just point to a few statements by others whose general sentiments I resonate with.

I’ve made an attempt to get some basic points out quickly that might be helpful to people, but the situation appears to be developing rapidly and I have little understanding of what’s going on, so this post will necessarily be incomplete and non-authoritative.

One thing I’d like to say up front (more on how this relates to FTX below) is that Open Philanthropy remains committed to our longtermist focus areas and still expects to spend billions of dollars on them over the coming decades. We will raise the bar for our giving, and we don’t know how many existing projects that will affect, but we still expect longtermist projects to grow in terms of their impact and output.

Are the funds directed by Open Philanthropy invested in or otherwise exposed to FTX or related entities?

No.

The FTX Foundation has quickly become a major funder of many longtermist and effective altruist organizations. If it stops (or greatly reduces) funding them, how might that affect Open Philanthropy’s funding practices?

If the FTX Foundation stops (or greatly reduces) funding such people and organizations, then Open Philanthropy will have to consider a substantially larger set of funding opportunities than we were considering before.

In this case, we will have to raise our bar for longtermist grantmaking: with more funding opportunities that we’re choosing between, we’ll have to fund a lower percentage of them. This means grants that we would’ve made before might no longer be made, and/or we might want to provide smaller amounts of money to projects we previously would have supported more generously.

Does Open Philanthropy also need to raise its bar in light of general market movements (particularly the fall in META stock) and other factors?

Yes:

  • Our available capital has fallen over the last year for these reasons. That said, as of now, public reports of Dustin Moskovitz and Cari Tuna’s net worth give a substantially understated picture of our available resources. That’s because, among other issues, they don’t include resources that are already in foundations. (I also note that META stock is not as large a part of their portfolio as some seem to assume.) Dustin and Cari still expect to spend nearly all of their resources in their lifetimes on philanthropy that aims to accomplish as much good per dollar as possible.
  • Additionally, the longtermist community has been growing; our rate of spending has been going up; and we expect both of these trends to continue. This further contributes to the need to raise our bar.

As stated above, we remain committed to our focus areas and still expect to spend billions of dollars on them over the coming decades.

So how much might Open Philanthropy raise its bar for longtermist grantmaking, and what does this mean for today’s potential grantees?

We don’t know yet — the news about FTX was sudden, and we’re working to figure things out.

It’s a priority for us to think through how much to raise the bar for longtermist grantmaking, and therefore what kinds of giving opportunities to fund. We hope to gain some clarity on this in the next month or so, but right now we’re dealing with major new information and don’t have a lot to say about what it means. It could mean reducing support for a lot of projects, or for relatively few.

(We don’t have a crisp formalization of “the bar”; instead we have general guidance to grantmakers on what sorts of requests should be generously funded vs. carefully considered vs. rejected. We need to rethink and revise this guidance.)

Because of this, we are pausing most new longtermist funding commitments (that is, commitments within Potential Risks from Advanced Artificial Intelligence, Biosecurity & Pandemic Preparedness, and Effective Altruism Community Growth) until we gain more clarity, which we hope will be within a month or so.

This is a temporary pause as we try to reorient our thinking. There are many potential grantees we expect to ask to wait for a month or so, but are likely to fund in the next three months. It’s not an absolute pause: we will continue to do some longtermist grantmaking, mostly when it is time-sensitive and seems highly likely to end up above our bar (this is especially likely for relatively small grants). Our existing calls for applications will remain open by default; we just will hold off on evaluating incoming applications in most cases, while the pause is in effect.

We’ll also be honoring existing commitments and providing funding that’s needed to avoid costly disruptions to core grantees’ work.

Will Open Phil support FTX Foundation grantees who have financial needs related to these events?

Open Phil will consider grantees whose work falls in one of our focus areas, and evaluate them alongside other opportunities. As mentioned above, we are temporarily pausing most longtermist funding, but continuing to evaluate time-sensitive asks.

How does this impact Open Philanthropy’s Global Health and Wellbeing work?

Given FTX Foundation’s focus on existential risk and longtermism, the most direct impacts are on our longtermist work. We don’t anticipate any immediate changes to our Global Health and Wellbeing work as a result of the recent news.

What do you think of allegations that FTX engaged in fraud and/or other unethical behavior?

I don’t understand the situation very well (and I have no special insight into it – I’ve read the same tweets and news stories as everyone else), and it doesn’t seem that all the facts are in. I will be following the situation as it develops.

Edited to add (Nov. 13): I now think it’s very likely that FTX engaged in outrageous, unacceptable fraud, as now noted at the top of this piece.

Separate from the details of the FTX situation, do you think that fraud could be justified if it raises huge amounts of money for good causes?

No.

I dislike “ends justify the means”-type reasoning. The version of effective altruism I subscribe to is about being a good citizen, while ambitiously working toward a better world. As I wrote previously, I think effective altruism works best with a strong dose of pluralism and moderation.

I think this is a common approach to effective altruism, e.g. it is consistent with the effectivealtruism.org intro to effective altruism (where I got the language “being a good citizen, while ambitiously working toward a better world”) and the Centre for Effective Altruism’s Guiding Principles (see “Integrity”). (Also see this post by Eliezer Yudkowsky.)

Comments
Jakob

Thank you for a good and swift response, and in particular, for stating so clearly that fraud cannot be justified on altruistic grounds.

I have only one quibble with the post: IMO you should probably increase your longtermist spending quite significantly over the next ~year or so, for the following reasons (which I'm sure you've already considered, but I'm stating them so others can also weigh in):

  • IIRC Open Philanthropy has historically argued that a lack of high-quality, shovel-ready projects has been limiting the growth in your longtermist portfolio. This is not the case at the moment. There will be projects that 1) have significant funding gaps, 2) have been vetted by people you trust for both their value alignment and competence, 3) are not only shovel-ready, but already started. Stepping in to help these projects bridge the gap until they can find new funding sources looks like an unusually cost-effective opportunity. It may also require somewhat less vetting on your end, which may matter more if you're unusually constrained by grantmaker capacity for a while
  • Temporarily ramping up funding can also be justified by considering likely flow-through effects of acting as an "insurer o
…

I want to push back on this a tiny bit. Just because some projects got funding from FTX, that doesn't necessarily mean Open Phil should fund them. There are a few reasons for this:

There will be projects that 1) have significant funding gaps, 2) have been vetted by people you trust for both their value alignment and competence, 3) are not only shovel-ready, but already started.

  1. When FTX Future Fund was functioning, there was lots more money available in the ecosystem, hence (I think) the bar for receiving a longtermist grant was lower. This money is now gone, and lots of orgs who got FTX funding might not meet OP's bar / the new bar we should have, given fewer resources. So basically I don't think it's sufficient to say 1) they have significant funding gaps, 2) they exist and 3) they've been vetted by people you trust. IMO you need to prove that they're also sufficiently high-quality, which might not be true, as FTX was vetting them with a different bar in mind.

Stepping in to help these projects bridge the gap until they can find new funding sources looks like an unusually cost-effective opportunity. It may also require somewhat less vetting on your end, which may matter more if you

…

I agree with all three people:

  • Holden is right not to rush into funding FTX Foundation grantees
  • Jakob is right that there are now more shovel-ready projects for Open Phil to fund
  • James is right that Open Phil probably shouldn't fund all those projects, because the reduction in total funding demands a higher funding bar
Denkenberger
I think this assumes that the funding rate was appropriate given the presence of FTX. However, if one believes that EA will continue to recruit/produce billionaires, then EA could justify continuing the 2022 spend rate (or even more) despite the implosion of FTX (and reduction in Open Phil resources).

Agree with many of the considerations above - the bar should probably rise somewhat after such a funding shortfall. One way to solve it in practice could be to sit down in the room with the old FTX FF team and ask "which XX% of your grants are you most enthusiastic about and why", and then (at least as an initial hypothesis; possibly requiring some further vetting) plan to fund that. The generalized point I'm trying to make is twofold: 1) that quite a bit of judgement already went into assessing these projects and it should be possible to use that to decide how many of them are above the bar, and 2) because all the other input factors (talent, project idea, vetting) are unchanged,  and assuming a standard shape of the EA production function, the marginal returns to funding should now be unusually high.

 

And David is right that (at least under some reasonable models) if you can predict that your bar will fall in the future, you should probably lower it already. I'm not exactly sure what the requirements would be for the funding bar to have a Martingale property (e.g., does it require some version of risk neutrality, or specific assumptions about the shape of the impact dist…

Thank you for this timely and transparent post, and for all the additional work I'm sure your team is shouldering in response to this situation.

With Giving Tuesday and general end-of-year giving on the horizon, I think any indication from OPP of new anticipated funding gaps would be useful to the EA community as a whole. It would also be helpful to get a sense as soon as the information is available of what the overall cause area funding distribution in EA is likely to look like after this week.

Question for Holden Karnofsky:

What do EA and Holden Karnofsky think of Kerry Vaughan's claim that Sam Bankman-Fried engaged in severely unethical behavior before, and that EA and FTX covered it up and laundered his reputation, effectively letting him get away with it?

I'm posting because, if true, this suggests big changes to EA norms are necessary to deal with bad actors like him, and that Sam Bankman-Fried should be outright banned from the forum and EA events.

Link to tweets here:

https://twitter.com/KerryLVaughan/status/1590807597011333120

In 2018, I heard accusations that Sam had communicated in ways that left people confused or misled, though often with some ambiguity about whether Sam had been confused himself, had been inadvertently misleading while factually accurate, etc. I put some effort into understanding these concerns (but didn’t spend a ton of time on it; Open Phil didn’t have a relationship with Sam or Alameda).

I didn’t hear anything that sounded anywhere near as bad as what has since come out about his behavior at FTX. At the time I didn’t feel my concerns rose to the level where it would be appropriate or fair to publicly attack or condemn him. The whole situation did make me vaguely nervous, and I spoke with some people about it privately, but I never came to a conclusion that there was a clearly warranted (public) action.

Comment on the phrasing but not the substance of what you're saying: 

IMO, "malevolent" is a bad phrase for what I think you might mean. (To my ears, "malevolent" has the connotations of wanting to do something bad for  consciously-selfish  reasons or wishing bad things upon others. That's different from being very strategic about one's actions in interpersonal situations, being comfortable with lying, etc.)

Sharmake
I understand, but what is the alternative, exactly?
Lukas_Gloor
I use "dark triad traits" or "person who seems interpersonally incorrigible." (If I thought someone were sadistic or particularly spiteful, then I think "malevolent" would also be appropriate.)
Sharmake
Alright, I'll edit my comments to use dark triad traits now.

(FWIW I found "dark triad" more jarring and skepticism-provoking than if you'd just said "malevolent", since I take it more seriously as a contentful attempt at psychological diagnosis, and therefore not the kind of thing I expect to be casually dropped into an otherwise-unrelated comment.

If you want a vaguer term, some common options include "bad actors" or "people acting in bad faith".  "Dark-triad-ish people" would also have made more sense to me and made me way less skeptical on a first read.)

Thanks for the post, I appreciate the clarity it brings.

Given FTX Foundation’s focus on existential risk and longtermism, the most direct impacts are on our longtermist work. We don’t anticipate any immediate changes to our Global Health and Wellbeing work as a result of the recent news.

Would it not make sense for Open Phil to shift some of its neartermist/global health funds to longtermist causes?

Although any neartermist:longtermist funds ratio is, in my opinion, fairly arbitrary, this ratio has increased significantly following the FTX event. Thus, it seems to me that Open Phil should maybe consider acting to rebalance it.

(I'd be curious to hear a solid counterargument.)

[This comment is no longer endorsed by its author]

Did Open Phil shift funds away from longtermist causes when FTX funds became available?

The FTX Future Fund launched in Feb 2022, and Open Phil were hiring for a program officer for their new Global Health and Wellbeing program in Feb 2022 (see here).

For context, all FTX funds go/went to longtermist causes; Open Phil currently has two grantmaking programs (see here): one in Longtermism, and the other the Global Health and Wellbeing program that launched, I assume, around Feb '22.

So my guess, though I'm not certain, is that the launches of FTX Future Fund and Open Phil's Global Health and Wellbeing program were linked, and that Open Phil did increase its neartermist:longtermist funding ratio when FTX funds became available.

[This comment is no longer endorsed by its author]

OPP was making grants in the Global Health and Wellbeing space (which includes animal welfare) long before this.

The data exist via their grants database [1] — it doesn't look to me like there was any shift away from longtermism that coincided with SBF/FTX entering the space (if anything, it looks like the opposite could be true in 2022).


  1. Credit to Tyler Muale for data collection

RobBensinger
This data point doesn't update me much.

(It's interesting to note that, at present, my above comment is on -1 agreement karma after 50 votes. This suggests that the question of rebalancing the neartermist:longtermist funding ratio is genuinely controversial, as opposed to there being a community consensus either way.)

It looks like Open Phil's approach to this is to evaluate all programs (neartermist and longtermist) against cash transfers (they use an internal 'unit of impact', I think, to try to compare all programs). As I understand it, any program funded by Open Phil needs to beat this standardised bar - it isn't that they actually believe longtermist projects are much more impactful but still grant to neartermist causes for political reasons or similar. Or, to put it another way, the bar in neartermist funding isn't lower than it is in longtermist funding.

Accordingly, they wouldn't necessarily reallocate funds from one place to another in advance, but if the new longtermist applications seem to be more impactful in expectation than the neartermist ones, they might choose to fund more longtermist programs than neartermist ones going forward.

For what it's worth, things like the GiveWell charities actually perform extraordinarily well in this analysis, so my prior is that FTX-funded projects won't outperform them significantly in Open Phil's evaluation, and so won't lead to a reallocation of funding.
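To make the mechanics of the standardized bar described above concrete, here is a minimal sketch in Python. All program names and scores are invented for illustration; this is not Open Phil's actual model, just the general shape of "score everything on one unit-of-impact scale and fund whatever beats a multiple of cash transfers":

```python
# Toy sketch of a standardized funding bar (invented numbers, not Open Phil's
# actual model). Every application, neartermist or longtermist, is scored on
# the same "units of impact per dollar" scale, normalized so that direct cash
# transfers score 1.0, and is funded only if it beats a shared bar.

CASH_TRANSFER_BASELINE = 1.0  # units of impact per dollar for cash transfers
BAR_MULTIPLE = 10.0           # hypothetical bar: ~10x cash transfers

# Hypothetical applications with made-up scores.
applications = {
    "global health program A": 35.0,
    "biosecurity project B": 18.0,
    "AI safety fellowship C": 9.0,
    "community building D": 4.0,
}

bar = CASH_TRANSFER_BASELINE * BAR_MULTIPLE

for name, score in applications.items():
    decision = "fund" if score >= bar else "decline"
    print(f"{name}: {score:.1f}x cash transfers -> {decision}")
```

On this toy model, an influx of new applications (e.g., former FTX grantees) just means the same bar filters a longer list, and "raising the bar" corresponds to increasing the hypothetical BAR_MULTIPLE.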

I've read this comment a few times, and I don't understand what it is saying. 

Just to make it more concrete, suppose that the estimated cost-effectiveness for global health (G) and speculative longtermist (L) projects looks like:

G: G1, G2, G3, G4, ...

L: L1, L2, L3, L4, ...

For example, this could look like:

G: 10 lives per $1k, 1 life per $1k, 1 life per $2k, 1 life per $3k, 1 life per $4k, etc.

L: 1% risk reduction for $1M, 1% risk reduction for $10M, 0.1% risk reduction for $100M, etc.

Then what are you saying applies to this list of marginal impacts?

Self_Optimization
If I may: I interpreted Jack as saying that...

A) If we call the quality threshold for projects T, then T_G = T_L = T. In addition, project funding is allocated by whether, for project p, p > T, and G vs. L allocations are determined by how many projects are (anticipated to) pass this threshold from each category.

B) With OP = all projects p such that p was not funded by FTX and is likely to seek Open Philanthropy funding, L_OP being the longtermist subset of OP, and L_FTX = all L such that L was funded by FTX and now seeks OP funding:

sum_of_funding(p in L_OP such that ((p > T) and (p > x for all x in L_FTX))) >= current_OP_longtermist_funding

First of all, OP funding won't currently be reallocated to FTX projects, because better non-FTX projects crowd them out of OP's current allocation for this area. And also:

count(p in OP such that ((p > T) and (p > x for all x in G_FTX))) is large -> count(L_OP > T)/count(OP > T) ~= count((L_OP + FTX) > T)/count((OP + FTX) > T)

Second of all, since there are so many projects applying to OPhil which are better than T AND better than the FTX ones, the addition of the FTX projects shouldn't significantly affect the proportion of 'good enough' projects which are longtermist, meaning that long-term funding allocations between global-health vs. longtermist cause-categories also shouldn't be affected.

Of course, I know very little about this topic, having come in from the LessWrong side of things, so I can't comment as to the accuracy of the above claims.
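For readers who find the notation above hard to parse, here is a rough, runnable translation of the model into Python. All project scores are invented; whether the claim's premise (that many non-FTX applicants beat both the threshold and the FTX projects) actually holds is exactly what the commenters are debating:

```python
# Rough translation of the model above (invented scores, purely illustrative).
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    score: float        # estimated quality, same scale for G and L projects
    longtermist: bool
    ex_ftx: bool        # previously FTX-funded, now seeking OP funding

T = 5.0  # shared quality threshold: T_G == T_L == T

projects = [
    Project("G1", 9.0, False, False),
    Project("G2", 6.0, False, False),
    Project("G3", 5.5, False, False),
    Project("L1", 8.0, True, False),
    Project("L2", 6.5, True, False),
    Project("L3", 5.2, True, False),
    Project("L_ftx_1", 5.1, True, True),
    Project("L_ftx_2", 4.0, True, True),
]

def above_bar(pool):
    return [p for p in pool if p.score > T]

def longtermist_share(pool):
    return sum(p.longtermist for p in pool) / len(pool)

without_ftx = above_bar([p for p in projects if not p.ex_ftx])
with_ftx = above_bar(projects)

# The comment's claim: if enough non-FTX applicants beat both T and the FTX
# projects' scores, adding FTX applicants barely changes the longtermist
# share of fundable work, so the G-vs-L allocation need not move much.
print(f"longtermist share, excluding FTX applicants: {longtermist_share(without_ftx):.2f}")
print(f"longtermist share, including FTX applicants: {longtermist_share(with_ftx):.2f}")
```

With these particular made-up numbers the longtermist share moves from 0.50 to 0.57; the claim is that when non-FTX projects above the bar greatly outnumber FTX additions, the two shares are approximately equal.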

I am curious about the impact on allocating funding between worldviews. The substantial reduction in longtermist funding should raise the value of the marginal longtermist grant, and thus change the optimal allocation between longtermism, global health, and animals. But does the worldview-diversification type approach preclude this sort of reallocation as the funding situation in a cause area changes?

Not the OP, but my sense is that the worldview diversification edifice is really not set up to deal with this kind of situation.

Ofer

I dislike “ends justify the means”-type reasoning. The version of effective altruism I subscribe to is about being a good citizen, while ambitiously working toward a better world.

Importantly, in 2007 the OP engaged in "anonymous and deceptive online promotion" as part of their efforts to promote GiveWell, while being GiveWell's executive director (after being caught, they were demoted to program officer).

(If no one mentioned this here, I would consider it to be evidence for lack of integrity of the EA community.)

Pablo

It's probably worth noting that Holden has been pretty open about this incident. Indeed, in a talk at a Leaders Forum around 2017, he mentioned it precisely as an example of "ends justify the means"-type reasoning.

Linch

It's also listed under GiveWell's Our Mistakes page.

Geuss

I don't mean to endorse Holden's actions - they were obviously ill-judged - but this reads as pretty lightweight stuff. He posted a few anonymous comments boosting GiveWell? That is so far away from what it increasingly looks like SBF is responsible for - multi-billion dollar fraud, funneling customer funds to a separate trading entity against trumped-up collateral, and then running an insolvent business, presumably waiting for imminent Series C funding to cover the holes.

The fact that some FTX people did terrible stuff presumably doesn't mean that we should lower our standards; so I'm not sure what the point of the comparison here is.

We don't want to shrug our shoulders at all bad behavior that falls short of multibillion-dollar fraud, and I took ofer to be making a local point "be mindful that other EAs have screwed up on honesty, and don't treat us (or specifically Holden) like flawless authorities here even if we're community leaders giving confident moral advice", not drawing an equivalence between FTX and the GiveWell astroturfing.

Ofer
The comments on the "Ask Metafilter" forum were intended to give readers the false impression that they were looking at a genuine interaction between the user "geremiah" (who asked for recommendations for charities) and the user "Holden0" who replied (and recommended GiveWell). Both these users were actually the OP. Additionally, the user "geremiah" replied to someone else. These comments can be seen at this page, which is linked to from the GiveWell page that I linked to.
Geuss

It's a mistake, but I don't think an egregious one, and he's owned it ever since. I think you are being a bit prim. People make mistakes, and learn from them - that's life. This was 15 years ago, and he's done an awful lot of good since. I don't know why you think publicly dragging him through the mud is right or helpful.

Are you Holden? (sorry, couldn't resist)

Linch

I laughed. 

Geuss
I would be spectacularly stupid if I was. To be honest, I barely know the guy. I'm not even an effective altruist, I'm just interested in some of the movement's first-order research.

Yes. To be clear, I agree with you re Holden's mistake not being egregious, and him learning from it and doing a lot of good after etc. Was aiming at a little comedic relief. [I feel like we need emoji reacts here.]

Do you think OP should have a disclaimer about this incident in perpetuity?

If not, it's been 15 years. When do you propose the cutoff would be?

I feel a bit disturbed that there doesn't seem to be an apology here.

I had previously assumed that Open Philanthropy had responsibility for overseeing much of the SBF-EA connection and promotion.

Can you please make it clear if you feel like Open Philanthropy had any responsibility for the situation? Was Open Phil "owning" the responsibility? Was someone else?

There was no one with official responsibility for the relationship between FTX and the EA community. I think the main reason the two were associated was via FTX’s/Sam’s having a high profile and talking a lot about EA - that’s not something anyone else was able to control. (Some folks did ask him to do less of this.)

It’s also worth noting that we generally try to be cautious about power dynamics as a funder, which means we are hesitant to be pushy about most matters. In particular, I think one of the two major funders in this space attacking the other, nudging grantees to avoid association and funding from it, etc. would’ve been seen as strangely territorial behavior absent very strong evidence of misconduct.

That said: as mentioned in another comment, with the benefit of hindsight, I wish I’d reasoned more like this: “This person is becoming very associated with effective altruism, so whether or not that’s due to anything I’ve done, it’s important to figure out whether that’s a bad thing and whether proactive distancing is needed.”

[anonymous]

I had previously assumed that Open Philanthropy had responsibility for overseeing much of the SBF-EA connection and promotion.

Why did you assume this? Serious question. I was under the (perhaps incorrect) impression that Open Phil doesn't consider itself responsible for overseeing the EA community.  

To me some of the actors who seem like they should have had relevant responsibility here are CEA, 80K, and senior staff at the FTX Future Fund before they joined it. 

SBF was a board member, previous employee/friend, and, I believe, a major funder of CEA. 80k was sponsored by CEA and really doesn't seem well placed to be making calls like this.

Also, generally, more of the "very senior and trusted EAs" seem to be at Open Philanthropy.

Open Philanthropy has been in charge of funding (including groups like CEA), so they generally seem like the most high-up and ultimately responsible org. The relationship with FTX was about as large a project as we had in EA, so I assumed the institution with the most power and authority was handling or overseeing it to some extent.

I wrote about the future fund in my other comment.

[anonymous]

80K promoted SBF uncritically to a large audience and highlighted him as a positive example for years (while also being well placed to know about the 2018 Alameda blowup) so I think it's fair to say that they have some non-zero level of responsibility in the EA-SBF connection and promotion. 

Also, generally, more of the "very senior and trusted EAs" seem to be at Open Philanthropy.

Open Philanthropy has been in charge of funding (including groups like CEA), so they generally seem like the most high-up and ultimately responsible org. The relationship with FTX was about as large a project as we had in EA, so I assumed the institution with the most power and authority was handling or overseeing it to some extent.

I see. Thanks for sharing. I think it's good to find out what expectations people had of different actors. 

My expectations were that Open Phil is a family foundation with very large overlaps with the EA community and its interests, including funding some parts of it, but that it's not fundamentally an actor with responsibility over the EA community's decision making, especially nebulous and complex things like EA's connections with a different billionaire. A lot of people the EA community considers leaders are at Open Phil, but I consider that pretty different from Open Philanthropy as an organization having responsibility for EA decision making. I'm not sure what, if anything, it should have done differently in this case.

[This comment is no longer endorsed by its author]
[anonymous]
After considering this comment and the relationship between Open Philanthropy and the rest of the EA ecosystem more, I'm reconsidering my position about Open Phil's responsibility for EA's relationships with FTX to one of much greater uncertainty. 
JoshuaBlake
CEA and 80k are both part of Effective Ventures. As far as I can tell, that means they're legally the same entities (split into UK and USA operations) and largely funded by Open Philanthropy. Several board members are shared between Effective Ventures and the FTX Future Fund. Aside: while fact-checking this comment, I found the 80,000 Hours donors page, which I think is a commendable amount of transparency. Other EA organisations should consider replicating it.

Also, I just want to flag that I really like the vote/agreement system used here. Seems like people thought the question was useful (I assume), and generally think that Open Philanthropy didn't have responsibility here. That seems good to know!

If karma and agreement votes were coupled, I would probably have felt more attacked.

Since writing this, I've realized that there probably is a lot more legal consideration regarding these announcements than I initially realized. 

"Responsibility" is easily a legal term, so seems potentially hazardous to write about online, in this sort of situation. One of the absolute last things I want to see now would be the other EA funders having to get involved in a prolonged legal conflict.

If anything like this is the case, it could be safest to delay this sort of discussion until much later.

I really wish some of the key questions about this situation could be publicly figured out sooner, but here other things might likely take higher precedence.

All that said, after the key immediate disasters are tied up, I would be very interested in some discussion of which orgs held responsibility for this situation, if any did. I think work here could really help make/secure trust in these organizations. (This might be very obvious)
 

tamgent
I would also be interested in knowing who/which org was "owning" the relationship with FTX... Not to assign blame, but to figure out what the right institutional responsibility/oversight should have been, and what needs to be put in place should a similar situation emerge in future. 

Surely it's the people working for the FTX Foundation who were the connection between FTX and EA.

Ozzie Gooen
These people were employed directly by FTX. They weren't well positioned to oversee FTX. (You can't oversee the group that pays you.) It's possible that the individuals did very substantial due diligence before joining (not "should I join this org" due diligence, but "should we decide for all of EA that FTX should be trusted" due diligence), then thought that board members or other groups would oversee FTX. I think they could have been proponents for some parts of this, but not the entire thing. 
Aleks_K
Were they? I had the impression that there was an (at least technically) independent organisation, FTX Foundation or FTX Philanthropy, that employed the FTX Future Fund team. But of course this might very well be wrong. At least Will MacAskill, though, wasn't an employee of FTX; he described his position as 'unpaid advisor', and his formal roles include being a Professor at Oxford University, trustee of CEA, and Director of the Forethought Foundation.

I think the Future Fund AI Worldview Prize is (was?) pretty critical for helping to determine resource allocation in EA. Can OpenPhil commit to seeing it through to completion? (Perhaps offering smaller prizes.)

It seemed to me like the way the prize was presented and constructed was aimed at specifically changing Nick Beckstead's views, without giving much consideration to being universally convincing. Given that he's stepped down from the Future Fund, why do you think the prize is critical?

Because, to a first approximation, most of the leading EA grantmakers have the same views as Beckstead on this (indeed, Beckstead was in charge of longtermist grantmaking at OpenPhil before the Future Fund).

RobBensinger
Could you quantify "to a first approximation"? My sense is that this claim's truth crucially turns on how much you're approximating.

Maybe the majority of the top 5 grantmakers by size of pot they control? The mainstream view amongst the largest grantmakers seems to be that doom won't happen by default (following e.g. Carlsmith's report), whereas I share the opposite intuition (as do you I think).

RobBensinger
I don't think EA longtermist grantmakers tend to have p(doom) as low as the numbers in Joe Carlsmith's report. The thing I wanted you to quantify was "which of Nick Beckstead's views are we talking about, and what range of probabilities do you think longtermist EA grantmakers tend to have?". E.g., my guess would have been that Future Fund staff were a lot more AI-risk-skeptical than longtermist Open Phil staff on average. But if you meant to be making a very weak claim, like "most leading GCR EA grantmakers think p(doom) from AI is below 90%", then I would agree with you.
Greg_Colbourn
Interesting. My prior was that OpenPhil and FF had a similar level of AI-risk-skepticism. But I guess Holden and Ajeya at least seem to have updated toward more urgency recently.
Anonymous (for unimpressive reasons =[ )
I think this subthread about one person's beliefs is not that important and may be a distraction. While the spending of that amount of money now would probably be bad, it's clear this prize is about informing everyone in EA, and it would have had a lot of value for cause prioritization and truth. This would be valuable to the movement.
Greg_Colbourn
They're doing it! Well, something similar, in early 2023. Thanks OpenPhil!

Writing since I haven't seen this mentioned elsewhere, but it seems like it might be a good idea to do (and announce that you are doing) a rapid evaluation of grantee organizations that received a majority of their funding from FF, in order to provide emergency funding to the most promising and avoid the loss of institutions. If this is something OP plans on doing, it should do so quickly and unambiguously.

I'm imagining something like: a potentially important org has lost its funding, and employees will soon begin looking for and accepting other opportunities. If they do leave, it could be very difficult to get them back or find suitable replacements. If whole organizations cease operations, it could set back work in their areas substantially: momentum will be lost, future organizations will have to deal with answering why this similar org didn't work out, the ability to make credible commitments in the org's given field will be at risk if they suddenly drop projects, and institutional knowledge will be lost. This is similar to how other countries supplemented employee salaries during the pandemic, instead of the US's unemployment insurance approach.

Also, for disclosure: I haven't received any FF funding, nor do I work in an org that did.

We hope to gain some clarity on this in the next month or so

...

There are many potential grantees we expect to ask to wait for a month or so, but are likely to fund in the next three months

 

I'm very curious about whether and which other smaller funders will fill in gaps from the FTX Future Fund in the next one to three months.

Ben Millwood
I've e-mailed CEA to ask them if they're engineering any coordination around this.

This is very helpful and transparent.

Thank you for sharing this with community and emphasizing the role of integrity for effective altruists.

public reports of Dustin Moskovitz and Cari Tuna’s net worth give a substantially understated picture of our available resources. That’s because, among other issues, they don’t include resources that are already in foundations.

Are you able to clarify how many resources are already in foundations? (And would that be Open Phil and Good Ventures, or is the bulk of the money that Open Phil "has" technically the money that's in Good Ventures)?

Nathan Young et al. forecasted the following here on November 8th:

  • OpenPhil
    • $3 - 6 Bn 80% CI
  • Dustin/Cari
    • $6 - 10 Bn 80% CI

(I haven't read all the comments yet, so forgive me if this has already been asked.)

JoshuaBlake
In addition to the above, what Open Philanthropy's exposure looks like would be informative to the wider community IMO. E.g.: is it just a fairly standard investment portfolio, or are they more exposed to Meta than an index would be?

Brilliant post, and much needed. Thank you.

Effective Altruism will need a rebranding. I anticipate it will be challenging to discuss the topic without SBF/FTX coming up and I'm afraid it will discourage new participants. 

I strongly disagree -- first, because this is dishonest and dishonorable. And second, because I don't think EA should try to have an immaculate brand.

Indeed, I suspect that part of what went wrong in the FTX case is that EA was optimizing too hard for having an immaculate brand, at the expense of optimizing for honesty, integrity, open discussion of what we actually believe, etc. I don't think this is the only thing that was going on, but it would help explain why people with concerns about SBF/FTX kept quiet about those concerns. Because they either were worried about sullying EA's name, or they were worried about social punishment from others who didn't want EA's name sullied.

IMO, trying super hard to never have your brand's name sullied, at the expense of ordinary moral goals like "be honest", tends to sully one's brand far more than if you'd just ignored the brand and prioritized other concerns. Especially insofar as the people you're trying to appeal to are very smart, informed, careful thinkers; you might be able to trick the Median Voter that EA is cool via a shallow PR campaign and attempts to strategically manipulate the narrative, but you'll have a far harder time trickin…

Rob - I strongly agree with this. 

Every Fortune 500 company, sooner or later, faces some massive PR crisis. Very few change the name of the company, their brands, or their products. It's worth thinking about why they don't.

Partly this is because of the recognition heuristic: much of the value of the company and brand is simply in the name recognition in the minds of consumers, investors, suppliers, and workers -- even apart from the emotional valence (positive or negative) attached to the company/brand.

EA has built up a moderate amount of recognition worldwide as a 'brand' of ethical thinking and cause prioritization. If we abandon the EA name, we lose the recognition benefits in millions of brains. 

Valences attached to a name (like EA) fluctuate a lot over time, but recognition tends to remain. Remember in the 1990s, Microsoft and Apple were widely vilified for anti-competitive practices, but they're still both leading tech companies with largely positive associations. Political parties can be tarnished by corrupt or incompetent leaders, but their name recognition remains.  

Ree

Rebranding in response to a scandal suggests an attempt to brush the issue under the rug without dealing with the underlying problems. Surely you want to be able to respond “this is how we changed to prevent that happening again,” not “we were hoping you wouldn’t remember that”?

Sharmake
More importantly, this type of not remembering is exactly what led to this crisis: a smaller failure likely happened before, and Kerry Vaughan provides the details. https://twitter.com/KerryLVaughan/status/1590807597011333120 https://forum.effectivealtruism.org/posts/xafpj3on76uRDoBja/the-ftx-future-fund-team-has-resigned-1?commentId=GoDd83K7ipktDtWWs#GoDd83K7ipktDtWWs

Very glad you're emphasizing that last question! I can easily see the narrative shifting from 'SBF/FTX did unethical stuff' to 'EA people think the ends always justify the means', even though shallow utilitarian calculus that ignores all second-order effects rarely holds up (e.g., if it were normalized for doctors to kill patients whenever they could save more lives by harvesting their organs, the result would be a paranoid dystopia where everyone fears hospitals; even the purest of utilitarians shouldn't support this).

However, for someone less familiar with EA this overgeneralization…

HaydenW
The overgeneralisation is extremely easy to make. Just search "effective altruism" on twitter right now. :'( (n.b., not recommended if you care about your own emotional well-being.)

Does OpenPhil have proof of reserves? Seems like it would be good reassurance for the EA community to see that significant funds are under independent legal control from their source (which was not the case with the FTX Foundation!)

The assets of the Good Ventures Foundation (to which Open Phil recommends its grants) are a matter of public record (albeit delayed). They had more than $3bn in June 2020.

Guy Raveh
GV is basically controlled by Moskovitz and Tuna, who are on its board. This is a much better situation than SBF's - the money here is committed to charity and can't just be taken for other goals - but it still has the problem of "the course of EA is almost entirely decided by two wealthy individuals".
David Mathers
If they can't legally just take the money back, how do they control where it goes exactly? Can they sack the other board members? 
Guy Raveh
There's just one additional board member. (Source)
David Mathers
Ah! The simplest possible way! We should probably encourage them to add another two. I am not so cynical as to think that no one is willing to give up control, so it might work.

Forgive me if there's a structural reason why this wouldn't work. But why weren't you saving a larger share of the money coming in, to provide a buffer in case funding dropped off for whatever reason? Seems like part of the underlying issue here was assuming that funding levels would remain constant in the future.
