Researching Causality and Safe AI at Oxford.
Previously, founder (with help from Trike Apps) of the EA Forum.
Discussing research etc at https://twitter.com/ryancareyai.
Off the top of my head, I think it could be especially useful to:
Yeah, I'm not trying to stake out a claim on what the biggest risks are.
I'm saying: suppose some community X has a team A that is primarily responsible for risk management. In one year, some risks materialise as giant catastrophes - risk management has gone terribly. The worst. But the community is otherwise decently good at picking out impactful meta projects. Then team A says "we're actually not just in the business of risk management (the thing that is going poorly); we also see ourselves as generically trying to pick out high-impact meta projects - so much so that we're renaming ourselves 'Risk Management and cool meta projects'". And to repeat, we (impartial onlookers) think that many other teams are capable of running impactful meta projects. We might start to wonder whether team A is losing its focus, and losing track of the most pertinent facts about the strategic situation.
We decided to rename our team to better reflect the scope of our work. We’ve found that when people think of our team, they mostly think of us as working on topics like mental health and interpersonal harm. While these areas are a central part of our work, we also work on a wide range of other things, such as advising on decisions with significant potential downside risk, improving community epistemics, advising programs working with minors, and reducing risks in areas with high geopolitical risk.
Hmm, it's good that you guys are giving an updated public description of your activities. But it seems like the EA community let some major catastrophes pass through previously, and now the team that was nominally most involved with managing risk, rather than narrowing its focus to the most serious risks, is broadening to include the old stuff, new stuff, all kinds of stuff. This suggests to me that EA needs some kind of group that thinks carefully about what the biggest risks are, and focuses on just those ones, so that the major catastrophes are avoided in future - some kind of risk management / catastrophe avoidance team.
Also Jacob Steinhardt, Vincent Conitzer, David Duvenaud, Roger Grosse, and in my field (causal inference), Victor Veitch.
Going beyond AI safety, you can get a general sense of strength from CSRankings (ML) and ShanghaiRankings (CS).
Nice. If you're looking for a follow-up, Jai's essays What Almost Was and The Copenhagen Interpretation of Ethics are also great.
Edit: link fixed
Putting things in perspective: what is and isn't the FTX crisis, for EA?
In thinking about the effect of the FTX crisis on EA, it's easy to fixate on one aspect that is severely damaged, and then to doomscroll about that, or conversely to focus on an aspect that is more lightly affected, and therefore to think all will be fine across the board. Instead, we should realise that both of these things can be true for different facets of EA. So in this comment, I'll list some important things that are, in my opinion, badly damaged, and some that aren't, or that might not be.
What in EA is badly damaged:
What in EA is only damaged mildly, or not at all:
What in EA might be badly damaged:
Given all of this, what does that say about how big a deal the FTX crisis is for EA? Well, I think it's the biggest crisis that EA has ever had (modulo the possible issue of AI capabilities advances). What's more, I also can't think of a bigger scandal in the 223-year history of utilitarianism. On the other hand, the FTX crisis is not even the most important change in EA's funding situation so far. For me, the most important was when Moskovitz entered the fold, and the number of EA billionaires went from zero to one. When I look over the list above, I think that much more of the value of the EA community resides in its institutions and social network than in its brand. The main way that a substantial chunk of value could be lost is if enough trust or motivation were lost that it became hard to run projects or recruit new talent. But I think that even though some goodwill and trust have been lost, they can be rebuilt, and people's motivation is intact. And I think that whatever happens to the exact outreach strategy currently used by the EA community, we will be able to find ways to attract top talent to work on important problems. So my gut feeling would be that maybe 10% of what we've created is undone by this crisis. Or that we're set back by a couple of years, compared to where we would be if FTX had never been started. Which is bad, but it's not everything.