
In the last 2 years:

  • What ideas that were considered wrong[1]/low status have been championed here?
  • What has the movement acknowledged it was wrong about previously?
  • What new, effective organisations have been started?

This isn't to claim that this is the only work that matters, but it feels like a chunk of what matters. Someone asked me and I realised I didn't have good answers.

  1. ^

    Changed in response to comment from @JWS 🔸 

4 Answers

It's very difficult to overstate how much EA has changed over the past two years.

For context, two years ago was July 30, 2022. That was 17 days before the "What We Owe the Future" book launch. It was also about three months before the FTX fraud came to light (though it was already massively underway in secret) and the ensuing bankruptcy. We were still at the height of the Big Money Big Longtermism era.

It was also about eight months before the FLI Pause Letter, which I think roughly coincided with the US and UK governments taking serious, intense interest in AI risk.

I think these two events were key turning points for the EA movement and led to a huge vibe shift. "Longtermism" now feels antiquated, abandoned in favor of "holy crap, we have to deal with AI risk occurring within the next ten years". Big Money is out, but we still have a lot of money, and it feels more responsible and somewhat more sustainable now. There are no longer regrantors running around everywhere, for better and for worse.

Many of the people previously working on longtermism have pivoted to "pandemics and AI" and many of the people previously working on pandemic risk have pivoted to "AI x bio intersections". WWOTF captures the current mid-2024 vibe of EA much less than Leopold's "Situational Awareness".

There has also been a massive pivot towards mainstream engagement. Many EAs have edited their LinkedIns to purge that two-word phrase and now barely and begrudgingly admit to being "EA-adjacent". These people now take meetings in DC and engage in the mainstream policy process (whereas previously "politics was the mindkiller"). Many AI policy orgs have popped up or become more prominent as a result. Even MIRI, which had announced "Death with Dignity" only about three months before that date of July 30, 2022, has now given up on giving up and pivoted to policy work. DC is a much bigger EA hub than it was two years ago, though the people working there certainly wouldn't refer to it as that.

The vibe shift towards AI has also continued to cannibalize the rest of EA, for better and for worse. This trend was already in full swing in 2022 but became much more prominent over 2023-2024. There's a lot less money available for global health and animal welfare work than before, especially if you worked on weirder causes like shrimp. Shrimp welfare kinda peaked in 2022, and the past two years have unfortunately not been kind to shrimp.

This looks pretty much right as a description of how EA has responded tactically to important events and vibe shifts. Nevertheless, it doesn't answer OP's questions, which I'll repeat:

  • What ideas that were considered wrong/low status have been championed here?
  • What has the movement acknowledged it was wrong about previously?
  • What new, effective organisations have been started?

Your reply is not about new ideas, or the movement acknowledging it was wrong (except about Bankman-Fried personally, which doesn't seem like what OP is asking about), or new organizations.

It seems important, to me, that EA's history over the last two years is instead mainly the story of changes in funding, in popular discourse, and in the social strategy of preexisting institutions. For example, the FLI pause letter was the start of a significant PR campaign, but all the *ideas* in it would have been perfectly familiar to an EA in 2014 (except for "Should we let machines flood our information channels with propaganda and untruth?", which is a consequence of then-unexpected developments in AI technology rather than of intellectual work by EAs).

I'm not sure I understand what these questions are looking for well enough to answer them.

Firstly, I don't think "the movement" is centralized enough to explicitly acknowledge things as a whole - that may be a bad expectation. I think some individual people and organizations have done some reflection (see here and here for prominent examples), though I would agree that there likely should be more.

Secondly, it definitely seems very wrong to me to say that EA has had no new ideas in the past two years. Back in 2022, the main answer to "how do we reduce AI risk?" was "I don't know, I guess we should urgently figure that out"; now there's been an explosion of analysis, threat modeling, and policy ideas - for example, Luke's 12 tentative ideas were basically all created within the past two years. On top of that, a lot of EAs were involved in the development of Responsible Scaling Policies, which are now the predominant risk management framework for AI. And there's way more too.

Unfortunately I can mainly only speak to AI, as it is my current area of expertise, but there have been updates in other areas as well. For example, at Rethink Priorities alone: welfare ranges, CRAFT...


So you'd say the major shifts are:

  • Towards AI policy work
  • Towards AI x bio policy work

Also this seems notable:

Many EAs have edited their LinkedIns to purge that two-word phrase and now barely and begrudgingly admit to being "EA-adjacent".

Going to take a stab at this (from my own biased perspective). I think Peter did a very good job, but Sarah is right that it doesn't quite answer your question. It's difficult to pin down what counts as 'generating ideas' vs rediscovering old ones; many new philosophies/movements can generate ideas, but they can often be bad ones. And again, EA is a decentral-ish movement, and it's hard to get centralised/consensus statements from it.

With enough caveats out of the way, and very much from my biased PoV:

"Longtermism" is dead - I'm not sure if someone has gone 'on record' for this, but I think longtermism, especially strong longtermism, as a driving idea for effective altruism is dead. Indeed, to the extent that AI x-risk and Longtermism went hand-in-hand is gone because AI x-risk proponents increasingly view it as a risk that will be played out in years and decades, not centuries and millenia. I don't expect future EA work to be justified under longtermist framing, and I think this reasonably counts as the movement 'acknowledging it was wrong' in some collective-intelligence sort of way.

The case for Animal Welfare is growing - In the last 2 years, I think the intellectual case for Animal Welfare as a leading, and perhaps the, EA cause has actually strengthened quite a bit. Rethink published their Moral Weight Sequence, which has influenced much subsequent work; see Ariel's excellent pitch for Animal Welfare to dominate neartermist spending.[1] On radical new ideas to implement, Matthias' pitch for screwworm eradication sounded great to me; let's get it happening! Overall, Animal Welfare is good, and EA continues to be directionally ahead on it and a source of both interesting ideas and funding in this space, in my non-expert opinion.

Thorstad's Criticism of Astronomical Value - I'm specifically referring to David's 'Existential Risk Pessimism' sequence, which I think is broadly part of the EA-idea ecosystem, even if written from a critical perspective. The first few pieces, which argue that longtermists should actually have low x-risk probabilities, and vice versa, were really novel and interesting to me (and I wish more people had responded to them). I think openly criticising x-risk arguments and deferring less is hopefully becoming more accepted, though it may still be a minority view amongst leadership.

Effective Giving is Back - My sense is that, over the last two years, probably spurred by the FTX collapse and fallout, Effective Giving is back on the menu. I'm not particularly sure why it left, or to what extent it did,[2] but there are a number of posts (e.g. see here, here, and here) that indicate it's becoming a lot more of a thing. This is sort of a corollary of 'longtermism is dead': people realised that earning-to-give, or even just giving, is something which is still valuable and can be a unifying thing in the EA movement.

There are other things that I could mention but I ran out of time to do so fully. I think there is a sense that there are not as many new, radical ideas as there were in the opening days of EA - but in some sense that's an inevitable part of how social movements and ideas grow and change.
 

  1. ^

    I don't think longtermist spending can avoid the force of his arguments either!

  2. ^

    I'm not sure whether effective giving actually was deprioritised, or, if it was, whether that was deliberate strategy or just incentives playing out. So this is just my vibe-take.

In terms of changes in status and what people are doing:

  • pivot from AI safety technical research to AI governance policy work
  • pivot from broader biosecurity to intersection of AI and bio
  • adoption of progress studies ideas / adoption of metascience and innovation policy as a priority cause area
  • taking broad-based economic growth seriously rather than a sole focus on randomista development
  • greater general engagement with politics
  • further reduction in focus on effective giving, increased focus on career impact

I don't think the Global Health and Animal Welfare cause areas have changed too much, but they probably get a smaller proportion of attention.

I think focusing on AI explosive growth has grown in status over the last two years. I don't think many people were focusing on it two years ago except Tom Davidson. Since then, Utility Bill has decided to focus on it full-time, Vox has written about it, it's a core part of the Situational Awareness model, and Carl Shulman talked about it for hours in influential episodes on the 80K and Dwarkesh podcasts.

Comments

It would be helpful to understand the context in which these questions arose. 

For instance, one possible origin story is that the questions arose in a discussion of the value of new-cause development / openness to weird and controversial ideas with limited current support / etc. I could see that conversation coming up in light of the recent controversies related to Manifest / scientific racism. A helpful response in the context of that conversation would be quite different from a helpful response to ~"how has EA changed in the last two years, generally?"

Love this question, and think it's important for us all to consider.

Some considerations for clarification:

  • Why say 'considered low status' instead of 'considered wrong' or 'considered wrong by EA Leadership or whatever'?
  • I guess, given EA is somewhat decentralised in terms of claimed ownership, it's hard to say what 'the movement' has acknowledged, but maybe substantial or significant minorities of the movement beginning to champion a new cause/idea would meet the criteria?

How is this edit?
