RyanCarey

7736 · Joined Aug 2014

Bio

Researching Causality and Safe AI at Oxford

Previously, founder (with help from Trike Apps) of the EA Forum.

Discussing research etc at https://twitter.com/ryancareyai.

Comments: 1172

Topic Contributions: 5

Putting things in perspective: what is and isn't the FTX crisis, for EA?

In thinking about the effect of the FTX crisis on EA, it's easy to fixate on one aspect that is severely damaged and doomscroll about that, or conversely to focus on an aspect that is only lightly affected and conclude that all will be fine across the board. Instead, we should recognise that both can be true of different facets of EA. So in this comment, I'll list some important things that are, in my opinion, badly damaged, and some that aren't, or that might not be.

What in EA is badly damaged:

  • The brand “effective altruism”, and maybe to an unrecoverable extent (but note that most new projects have not been naming themselves after EA anyway).
  • The publishability of research on effective altruism (philosophers are now more sceptical about it).
  • The “innocence” of EA (EAs appear to have defrauded ~4x what they ever donated). EA, in whatever capacity it continues to exist, will be harshly criticised for this, as it should be, and will have to be much more thick-skinned in future.
  • The amount of goodwill among promoters of EA (they have lost funds on FTX, regranters have been embarrassed by the disappearance of promised funds, and others have to contend with clawbacks), as well as the level of trust within EA generally.
  • Abundant funding for EA projects that are merely plausibly-good.

What in EA is only damaged mildly, or not at all:

  • The rough number of people who want to do good, effectively
  • The social network that has been built up around doing good, effectively (i.e. “the real EA community is the friends we made along the way”)
  • The network of major organisations that are working on EA-related problems.
  • The knowledge that we have accumulated, through research and otherwise, about how to do good effectively.
  • “Existential risk”, as a brand
  • The “AI safety” research community in general
  • The availability of good amounts of funding for clearly-good EA projects.

What in EA might be badly damaged:

  • The viability of “utilitarianism” as a public philosophy, absent changes (although Sam seems to have misapplied utilitarianism, this doesn’t redeem utilitarianism as a public philosophy, because we would also expect it to be applied imperfectly in future, and it is bad that its misapplication can be so catastrophic).
  • The current approach to building a community to do good, effectively (it is not clear whether a movement is even the right format for EA, going forward)
  • The EA “pitch”. (Can we still promote EA in good conscience? To some of us, the phrase “effective altruism” is now revolting. Does the current pitch still ring true, that joining this community will enable one to act as a stronger force for good? I would guess that many will prefer to pitch more specific things that are of interest to them, e.g. antimalarials, AI safety, whatever.)

Given all of this, what does that say about how big a deal the FTX crisis is for EA? Well, I think it's the biggest crisis that EA has ever had (modulo the possible issue of AI capabilities advances). What's more, I also can't think of a bigger scandal in the 223-year history of utilitarianism. On the other hand, the FTX crisis is not even the most important change in EA's funding situation so far. For me, the most important was when Moskovitz entered the fold, and the number of EA billionaires went from zero to one.

When I look over the list above, I think that much more of the value of the EA community resides in its institutions and social network than in its brand. The main way that a substantial chunk of value could be lost is if enough trust or motivation were lost that it became hard to run projects or recruit new talent. But I think that even though some goodwill and trust has been lost, it can be rebuilt, and people's motivation is intact. And I think that whatever happens to the exact outreach strategy currently used by the EA community, we will be able to find ways to attract top talent to work on important problems. So my gut feeling is that maybe 10% of what we’ve created is undone by this crisis. Or that we’re set back by a couple of years, compared to where we would be if FTX had not been started. Which is bad, but it's not everything.

A huge fraction of the EA community's reputational issues, DEI shortcomings, and internal strife stem from its proximity to/overlap with the rationalist community.

Generalizing a lot, it seems that "normie EAs" (IMO correctly) see glaring problems with Bostrom's statement and want this incident to serve as a teachable moment so the community can improve in some of the respects above, and "rationalist-EAs" want to debate race and IQ (or think that the issue is so minor/"wokeness-run-amok-y" that it should be ignored or censored). This predictably leads to conflict.

This is inaccurate as stated, but there is an important truth nearby. The apparent negatives you attribute to "rationalist" EAs are also true of non-rationalist old-timers in EA, who trend slightly non-woke, while also keeping arms length from the rationalists. SBF himself was not particularly rationalist, for example. What seems to attract scandals is people being consequentialist, ambitious, and intense, which are possible features of rationalists and non-rationalists alike.

Relatedly, which EA projects have shut down? I suspect it's a much smaller fraction than the ~90% of startup companies that do, and that it should be at least a bit larger than it currently is.

Totally, this is what I had in mind - something like the average over posts based on how often they are served on the frontpage.

Thanks for the response. Out of the four responses to nitpicks, I agree with the first two. I broadly agree about the third, forum quality. I just think that peak post quality is at best a lagging indicator - if you have higher volume and even your best posts are not as good anymore, that would bode very poorly. Ideally, the forum team would consciously trade off between growth and average post quality, in some cases favouring quality, e.g. performing interventions that improve it even if they slowed growth. And on the fourth, understatement, I don't think we disagree that much.

As for summarising the year, it's not quite that I want you to say that CEA's year was bad. In one sense, CEA's year was fine, because these events don't necessarily reflect negatively on CEA's current operations. But in another important sense, it was a terrible year for CEA, because these events have a large bearing on whether CEA's overarching goals are being reached. And this could bear on what operating activities should be performed in future. I think an adequate summary would capture both angles. In an ideal world (where you were unconstrained by legal consequences etc.), I think an annual review post would note that when such seismic events happen, the standard metrics become relatively less important, while strategy becomes more important, and the focus of discussion then rests on the actually important stuff. I can accept that in the real world, that discussion will happen (much) later, but it's important that it happens.

Several nitpicks:

  • "2022 was a year of continued growth for CEA and our programs." - A bit of a misleading way to summarise CEA's year?
  • "maintaining high retention and morale" - to me there did seem to be a dip in morale at the office recently
  • "[EA Forum] grew by around 2.9x this year." - yes, although a bit of this was due to the FTX catastrophe
  • "Overall, we think that the quality of posts and discussion is roughly flat over the year, but it’s hard to judge." - this year, a handful of people told me they felt the quality had decreased, which didn't happen in previous years, and I noticed this too.
  • "Recently the community took a significant hit from the collapse of FTX and the suspected illegal and/or immoral behaviour of FTX executives." - this is a very understated way to note that a former board member of CEA committed one of the largest financial frauds of all time.

I realise there are legal and other constraints, so maybe I am being harsh, but overall, several components of this post seemed not very "real" or straightforward relative to what I would usually expect from this sort of EA org update.

This update would be more useful if it said more about the main catastrophe that EA (and CEA) is currently facing. For whatever reasons, maybe perfectly reasonable, it seems you chose the strategy of saying little new on that topic, and presenting by and large the updates on CEA's ordinary activities that you would present if the catastrophe hadn't happened. But even given that choice, it would be good to set expectations appropriately with some sort of disclaimer at the top of the doc.

Exciting! 

People have sometimes raised the idea of founding an AI-focused consultancy, to do things like evaluate or certify the safety and fairness of systems. I know you've said you plan to apply, rather than perform, "deep technical" work - but can you say any more about whether this space is one you've considered getting involved in?

I mostly agree with you, Jonas, but I think you're using the phrase "founder" in a confusing way. I think a founder is someone who is directly involved in establishing an organisation. Contributions that are indirect, like Bostrom's and Eliezer's, or that come after the organisation is started (like DGB), may be very important, but they don't make someone a founder. I would probably totally agree with you if you just said you're answering a different question: "Who caused EA to be what it is today?"

I think the FTX stuff is a bigger deal than Peter Singer's views on disability, and for me to be convinced by the England and Enlightenment examples, you'd have to draw a clearer line between the philosophy and the wrongful actions (cf. in the FTX case, we have a self-identified utilitarian doing various wrongs for stated utilitarian reasons).

I agree that every large ideology has had massive scandals, in some cases ranging up to purges, famines, wars, etc. I think the problem for us, though, is that there aren't very many people who take utilitarianism or beneficentrism seriously as an action-guiding principle - there are only ~10k effective altruists, basically. What happens if you scale that up to 100k and beyond? My claim would be that we need to tweak the product before we scale it, in order to make sure these catastrophes don't scale with the size of the movement.
