8024 karma · Joined Aug 2014


Researching Causality and Safe AI at Oxford

Previously, founder (with help from Trike Apps) of the EA Forum.

Discussing research etc at https://twitter.com/ryancareyai.




Putting things in perspective: what is and isn't the FTX crisis, for EA?

In thinking about the effect of the FTX crisis on EA, it's easy to fixate on one aspect that is really severely damaged, and then to doomscroll about that, or conversely to focus on an aspect that is more lightly affected, and therefore to think all will be fine across the board. Instead, we should realise that both of these things can be true for different facets of EA. So in this comment, I'll now list some important things that are, in my opinion, badly damaged, and some that aren't, or that might not be.

What in EA is badly damaged:

  • The brand “effective altruism”, and maybe to an unrecoverable extent (but note that most new projects have not been naming themselves after EA anyway.)
  • The publishability of research on effective altruism (philosophers are now more sceptical about it).
  • The “innocence” of EA (EAs appear to have defrauded ~4x what they ever donated). EA, in whatever capacity it continues to exist, will be harshly criticised for this, as it should be, and will have to be much more thick-skinned in future.
  • The amount of goodwill among promoters of EA (they have lost funds on FTX, regranters have been embarrassed by the disappearance of promised funds, and others have to contend with clawbacks), as well as the level of trust within EA generally.
  • Abundant funding for EA projects that are merely plausibly-good.

What in EA is only damaged mildly, or not at all:

  • The rough number of people who want to do good, effectively
  • The social network that has been built up around doing good effectively (i.e. “the real EA community is the friends we made along the way”)
  • The network of major organisations that are working on EA-related problems.
  • The knowledge that we have accumulated, through research and otherwise, about how to do good effectively.
  • “Existential risk”, as a brand
  • The “AI safety” research community in general
  • The availability of good amounts of funding for clearly-good EA projects.

What in EA might be badly damaged:

  • The viability of “utilitarianism” as a public philosophy, absent changes (although Sam seems to have misapplied utilitarianism, this doesn’t redeem utilitarianism as a public philosophy, because we would also expect it to be applied imperfectly in future, and it is bad that its misapplication can be so catastrophic).
  • The current approach to building a community to do good, effectively (it is not clear whether a movement is even the right format for EA, going forward)
  • The EA “pitch”. (Can we still promote EA in good conscience? To some of us, the phrase “effective altruism” is now revolting. Does the current pitch still ring true, that joining this community will enable one to act as a stronger force for good? I would guess that many will prefer to pitch more specific things that are of interest to them, e.g. antimalarials, AI safety, whatever.)

Given all of this, what does that say about how big of a deal the FTX crisis is for EA? Well, I think it's the biggest crisis that EA has ever had (modulo the possible issue of AI capabilities advances). What's more, I also can't think of a bigger scandal in the 223-year history of utilitarianism. On the other hand, the FTX crisis is not even the most important change in EA's funding situation so far. For me, the most important was when Moskovitz entered the fold, taking the number of EA billionaires from zero to one. When I look over the list above, I think that much more of the value of the EA community resides in its institutions and social network than in its brand. The main way that a substantial chunk of value could be lost is if enough trust or motivation were lost that it became hard to run projects or recruit new talent. But I think that even though some goodwill and trust has been lost, it can be rebuilt, and people's motivation is intact. And I think that whatever happens to the exact strategy of outreach currently used by the EA community, we will be able to find ways to attract top talent to work on important problems. So my gut feeling is that maybe 10% of what we’ve created is undone by this crisis. Or that we’re set back by a couple of years, compared to where we would be if FTX had not been started. Which is bad, but it's not everything.

Off the top of my head, I think it could be especially useful to:

  1. Figure out how not to alienate the employees at these places (by protesting different values, using humour, and generally refining the message),
  2. Refine a list of asks, other than just demanding a pause,
  3. Evaluate whether the protests are even net useful in their current form, and
  4. Figure out whether the protests are so small as to be ineffective, and if so, what to do about that.

Yeah, I'm not trying to stake out a claim on what the biggest risks are.

I'm saying: assume that some community X has a team A that is primarily responsible for risk management. In one year, some risks materialise as giant catastrophes - risk management has gone terribly. The worst. But the community is otherwise decently good at picking out impactful meta projects. Then team A says "we're actually not just in the business of risk management (the thing that is going poorly); we also see ourselves as generically trying to pick out high-impact meta projects. So much so that we're renaming ourselves 'Risk Management and Cool Meta Projects'". And to repeat, we (impartial onlookers) think that many other teams have been capable of running impactful meta projects. We might start to wonder whether team A is losing its focus, and losing track of the most pertinent facts about the strategic situation.


We decided to rename our team to better reflect the scope of our work. We’ve found that when people think of our team, they mostly think of us as working on topics like mental health and interpersonal harm. While these areas are a central part of our work, we also work on a wide range of other things, such as advising on decisions with significant potential downside risk, improving community epistemics, advising programs working with minors, and reducing risks in areas with high geopolitical risk.

Hmm, it's good that you guys are giving an updated public description of your activities. But it seems like the EA community let some major catastrophes pass through previously, and now the team that was nominally most involved with managing risk, rather than narrowing its focus to the most serious risks, is broadening to include the old stuff, new stuff, all kinds of stuff. This suggests to me that EA needs some kind of group that thinks carefully about what the biggest risks are, and focuses on just those ones, so that the major catastrophes are avoided in future - some kind of risk management / catastrophe avoidance team.

Also Jacob Steinhardt, Vincent Conitzer, David Duvenaud, Roger Grosse, and in my field (causal inference), Victor Veitch.

Going beyond AI safety, you can get a general sense of strength from CSRankings (ML) and ShanghaiRankings (CS).


Nice. If you're looking for a follow-up, Jai's essays What Almost Was and The Copenhagen Interpretation of Ethics are also great. 

Edit: link fixed


Me too, from inside the building. Best of luck Max!


The difference between "building effective altruism" and "community" could use some clarification.

Yes, but you've usually been arguing in favour of (or at least widening the Overton window around) elite EA views vs. the views of the EA masses, have been very close to EA leadership, and are super disagreeable - you are unrepresentative on many relevant axes.


If minors have really been raped in EA, or serious infringements committed by high-up EAs, then EA would be repeating some of the worst ills of the Catholic Church, secularly. Why would one want any part in such a community?
