
Holden Karnofsky has recently shared some reflections on EA and FTX, but they’re spread out and I’d guess that few people have seen them, so I thought it could be useful to collect them here. (In general, I think collections like this can be helpful and under-supplied.) I've copied some comments in full, and I've put together a simpler list of the links in this footnote.[1]

These comments come a few months after the FTX collapse; there’s some explanation of why in this post and in this comment.

Updates after FTX

I found the following comment (a summary of updates he’s made after FTX) especially interesting, though please note that I’m not sure I agree with everything in it:

Here’s a followup with some reflections.

Note that I discuss some takeaways and potential lessons learned in this interview.

Here are some (somewhat redundant with the interview) things I feel like I’ve updated on in light of the FTX collapse and aftermath:

  • The most obvious thing that’s changed is a tighter funding situation, which I addressed here.
  • I’m generally more concerned about the dynamics I wrote about in EA is about maximization, and maximization is perilous. If I wrote that piece today, most of it would be the same, but the “Avoiding the pitfalls” section would be quite different (less reassuring/reassured). I’m not really sure what to do about these dynamics, i.e., how to reduce the risk that EA will encourage and attract perilous maximization, but a couple of possibilities:
    • It looks to me like the community needs to beef up and improve investments in activities like “identifying and warning about bad actors in the community,” and I regret not taking a stronger hand in doing so to date. (Recent sexual harassment developments reinforce this point.)
    • I’ve long wanted to try to write up a detailed intellectual case against what one might call “hard-core utilitarianism.” I think arguing about this sort of thing on the merits is probably the most promising way to reduce associated risks; EA isn’t (and I don’t want it to be) the kind of community where you can change what people operationally value just by saying you want it to change, and I think the intellectual case has to be made. I think there is a good substantive case for pluralism and moderation that could be better-explained and easier to find, and I’m thinking about how to make that happen (though I can’t promise to do so soon).
  • I had some concerns about SBF and FTX, but I largely thought of the situation as not being my responsibility, as Open Philanthropy had no formal relationship to either. In hindsight, I wish I’d reasoned more like this: “This person is becoming very associated with effective altruism, so whether or not that’s due to anything I’ve done, it’s important to figure out whether that’s a bad thing and whether proactive distancing is needed.”
  • I’m not surprised there are some bad actors in the EA community (I think bad actors exist in any community), but I’ve raised my estimate of how much harm a small set of them can do, and hence I think it could be good for Open Philanthropy to become more conservative about funding and associating with people who might end up being bad actors (while recognizing that it won’t be able to predict perfectly on this front).
  • Prior to the FTX collapse, I had been gradually updating toward feeling like Open Philanthropy should be less cautious with funding and other actions; quicker to trust our own intuitions and people who intuitively seemed to share our values; and generally less cautious. Some of this update was based on thinking that some folks associated with FTX were being successful with more self-trusting, less-cautious attitudes; some of it was based on seeing few immediate negative consequences of things like the Future Fund regranting program; some of it was probably a less rational response to peer pressure. I now feel the case for caution and deliberation in most actions is quite strong, partly because the substantive situation has changed (effective altruism is now enough in the spotlight, and controversial enough, that the costs of further problems seem higher than they did before).
    • On this front, I’ve updated a bit toward my previous self, and more so toward Alexander’s style, in terms of wanting to weigh both explicit risks and vague misgivings significantly before taking notable actions. That said, I think balance is needed and this is only a fairly moderate update, partly because I didn’t update enormously in the other direction before. I think I’m still overall more in favor of moving quickly than I was ~5 years ago, for a number of reasons. In any case I don’t expect there to be a dramatic visible change on this front in terms of Open Philanthropy’s grantmaking, though it might be investing more effort in improving functions like community health.
  • Having seen the EA brand under the spotlight, I now think it isn’t a great brand for wide public outreach. It throws together a lot of very different things (global health giving, global catastrophic risk reduction, longtermism) in a way that makes sense to me but seems highly confusing to many, and puts them all under a wrapper that seems self-righteous and, for lack of a better term, punchable? I still think of myself as an effective altruist and think we should continue to have an EA brand for attracting the sort of people (like myself) who want to put a lot of dedicated, intensive time into thinking about what issues they can work on to do the most good; but I’m not sure this is the brand that will or should attract most of the people who can be helpful on key causes. I think it’s probably good to focus more on building communities and professional networks around specific causes (e.g., AI risk, biorisk, animal welfare, global health) relative to building them around “EA.”
  • I think we should see “EA community building” as less valuable than before, if only because one of the biggest seeming success stories now seems to be a harm story. I think this concern applies to community building for specific issues as well. It’s hard to make a clean quantitative statement about how this will change Open Philanthropy's actions, but it’s a factor in how we recently ranked grants. I think it'll be important to do quite a bit more thinking about this (and in particular, to gather more data along these lines) in the longer run.

Other recent comments

  • On who held responsibility for the relationship between SBF and EA
    • “There was no one with official responsibility for the relationship between FTX and the EA community. I think the main reason the two were associated was via FTX/Sam having a high profile and talking a lot about EA; that’s not something anyone else was able to control. (Some folks did ask him to do less of this.)
      It’s also worth noting that we generally try to be cautious about power dynamics as a funder, which means we are hesitant to be pushy about most matters. In particular, I think one of the two major funders in this space attacking the other, nudging grantees to avoid association with and funding from it, etc. would’ve been seen as strangely territorial behavior absent very strong evidence of misconduct.
      That said: as mentioned in another comment, with the benefit of hindsight, I wish I’d reasoned more like this: “This person is becoming very associated with effective altruism, so whether or not that’s due to anything I’ve done, it’s important to figure out whether that’s a bad thing and whether proactive distancing is needed.””
  • On whether he knew about unethical behavior by Sam Bankman-Fried
    • “In 2018, I heard accusations that Sam had communicated in ways that left people confused or misled, though often with some ambiguity about whether Sam had been confused himself, had been inadvertently misleading while factually accurate, etc. I put some effort into understanding these concerns (but didn’t spend a ton of time on it; Open Phil didn’t have a relationship with Sam or Alameda).
      I didn’t hear anything that sounded anywhere near as bad as what has since come out about his behavior at FTX. At the time I didn’t feel my concerns rose to the level where it would be appropriate or fair to publicly attack or condemn him. The whole situation did make me vaguely nervous, and I spoke with some people about it privately, but I never came to a conclusion that there was a clearly warranted (public) action.”
  • On a specific claim in the recent TIME article

And there is more in his interview with Vox from January (here are edited highlights). 

(Thanks to the folks who suggested making this post and helped with it.)

  1. ^

    On why these comments didn't come earlier — post & comment

    Updates post-FTX — comment (see also the interview with Vox, edited highlights)

    Responsibility — comment

    SBF — comment

    Claim from the TIME article — comment

Comments

Thanks, Lizka, for highlighting these comments! I'd really like to see others in the EA community, and especially leaders of EA orgs, engage more in public conversations about how EA should change in light of the FTX collapse and other recent events.

I think the events of the last few months should lead us to think carefully about whether future efforts inspired by EA ideas might cause significant harm or turn out to be net-negative in expectation, after accounting for downside risks. I'd like to see leaders and other community members talking much more concretely about how organizations' governance structures, leadership teams, cultural norms, and project portfolios should change to reduce the risk of causing unintended harm.

Holden's reflections collected here, Toby Ord's recent address at EAG, and Oliver Habryka’s comments explaining the decision to close the Lightcone Offices feel to me like first steps in the right direction, but I'd really like to see other leaders, including Will MacAskill and Nick Beckstead, join the public conversation. I’d especially like to see these and other leaders identify the broad changes they would like to see in the community, commit to specific actions they will take, and respond to others’ proposals for reform. (For the reasons Jason explains here, I don't think the ongoing investigation presents any necessary legal impediment to Will or Nick speaking now, and waiting at least another two months to join the conversation seems harmful to the community's ability to make good decisions about potential paths forward.)

My guess is that leaders’ relative silence on these topics is harming the EA community's ability to make a positive difference in the world. I and others I know have been taking steps back from the EA community over the past several months, partly because many leaders haven’t been engaging in public conversations about potential changes that seem urgently necessary. I've personally lost much of the confidence I once had in the ability of the EA community’s leaders, institutions, and cultural norms to manage risks of serious harm that can result from trying to put EA ideas into practice. I’m now uncertain about whether engaging with the EA community is the right way for me to spend time and energy going forward. I think leaders of EA orgs can help restore confidence and chart a better course by starting or joining substantive, public conversations about concrete steps toward reform.

(To end on a personal note: I've been feeling pretty discouraged over the last few months, but in the spirit of Leaning into EA Disillusionment, I aim to write more on this topic soon. I hope others will, too.)

Thanks for gathering these comments!
