This is a frame that I have found useful and I'm sharing in case others find it useful.
EA has arguably gone through several waves:
Waves of EA (highly simplified model — see caveats below)

| | First wave | Second wave | Third wave |
| --- | --- | --- | --- |
| Time period | 2010[1]-2017[2] | 2017-2023 | 2023-?? |
| Primary constraint | Money | Talent | ??? |
| Primary call to action | Donations to effective charities | Career change | ??? |
| Primary target audience | Middle-upper-class people | University students and early career professionals | ??? |
| Flagship cause area | Global health and development | Longtermism | ??? |
| Major hubs | Oxford > SF Bay > Berlin (?) | SF Bay > Oxford > London > DC > Boston | ??? |
The boundaries between waves are obviously vague and somewhat arbitrary. This table is also overly simplistic – I first got involved in EA through animal welfare, which is not listed at all on this table, for example. But I think this is a decent first approximation.
It’s not entirely clear to me whether we are actually in a third wave. People often overestimate the extent to which their local circumstances are unique. But there are two main things which make me think that we have a “wave” which is distinct from, say, mid 2022:
- Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturns[3]
- AI safety becoming (relatively) mainstream
If I had to choose an arbitrary date for the beginning of the third wave, I might choose March 22, 2023, when the FLI open letter on pausing AI experiments was published.
It remains to be seen whether public concern about AI is sustained – Superintelligence was endorsed by a bunch of fancy people when it first came out, but that attention mostly faded away. If it is sustained, though, I think EA will be in a qualitatively new regime: one where AI safety worries are common, AI safety is getting a lot of coverage, people with expertise in AI safety might get into important rooms, and the field might be less neglected.
Third wave EA: what are some possibilities?
Here are a few random ideas; I am not intending to imply that these are the most likely scenarios.
| Example future scenario | Politics and Civil Society[4] | Forefront of weirdness | Return to non-AI causes |
| --- | --- | --- | --- |
| Description of the possible “third wave” — chosen to illustrate the breadth of possibilities | There is substantial public appetite to heavily regulate AI. The technical challenges end up being relatively easy. The archetypal EA project is running a grassroots petition for a moratorium on AI. | AI safety becomes mainstream and "spins out" of EA. EA stays at the forefront of weirdness, and the people who were previously interested in AI safety turn their focus to digital sentience, acausal moral trade, and other issues that still fall outside the Overton window. | AI safety becomes mainstream and "spins out" of EA. AI safety advocates leave EA, and vibes shift back to “first wave” EA. |
| Primary constraint | Political will | Research | Money |
| Primary call to action | Voting/advocacy | Research | Donations |
| Primary target audience | Voters in US/EU | Future researchers (university students) | Middle-upper class people |
| Flagship cause area | AI regulation | Digital sentience | Animal welfare |
Where do we go from here?
- I’m interested in organizing more projects like EA Strategy Fortnight. I don’t feel very confident about what third wave EA should look like, or even that there will be a third wave, but it does seem worth spending time discussing the possibilities.
- I'm particularly interested in claims that there isn't, or shouldn't be, a third wave of EA (i.e. please feel free to disagree with the whole model, argue that we’re still in wave 2, argue we might be moving towards wave 3 but shouldn’t be, etc.).
- I’m also interested in generating cruxes and forecasts about those cruxes. A lot of these are about the counterfactual value of EA, e.g. will digital sentience become “a thing” without EA involvement?
This post is part of EA Strategy Fortnight. You can see other Strategy Fortnight posts here.
Thanks to a bunch of people for comments on earlier drafts, including ~half of my coworkers at CEA, particularly Lizka. “Waves” terminology stolen from feminism, and the idea that EA has been through waves and is entering a third one is adapted from Will MacAskill; I think he has a slightly different framing, but he still deserves a lot of the credit here.
[1] Starting date is somewhat arbitrarily chosen from the history listed here.

[2] Arbitrarily choosing the coining of the word “longtermism” as the starting event of the second wave.

[3] Although Meta stock is back up since I first wrote this; I would be appreciative if someone could do an update on EA funding.

[4] Analogy from Will MacAskill: Quakers:EA::Abolition:AI Safety
Well... Communism is structurally disinclined to work in the envisioned way. It involves overthrowing the government, which involves "strong men" and bloodshed, so the people who end up leading a communist regime tend to be strongmen who rule with an iron grip ("for the good of communism", they might say) and are willing to use murder to further their goals. As a result, it tends to involve a police state and central planning (which are not the characteristics originally envisioned). More broadly, communism isn't based on consequentialist reasoning. It's an exaggeration to say it's based on South Park reasoning: 1. overthrow the bourgeoisie and the government so communists can be in charge, 2. ???, 3. utopia! But I don't think it's a big exaggeration.
Individuals, on the other hand, can believe in whatever moral system they feel like and follow its logic wherever it leads. Taking care of yourself (and even your friends/family) not only fits perfectly within the logic of (consequentialist) utilitarianism, it is also practical precisely because that logic is consequentialist (which is always practical if done correctly). Unlike communism, we can simply do it (and in fact it's kind of hard not to; it's the natural human thing to do).
What's weird about your argument is that you made no argument beyond "it's like the logic of communism". No, different things are different; you can't just make an analogy and stop there (especially when criticizing logic that you yourself described as "perfect" - well gee, what hope does an analogy have against perfect logic?).
I think what's going on here is that you're not used to consequentialist reasoning. Since the founders of EA were consequentialists, and EA attracts, creates, and retains consequentialists, you need to learn how consequentialists think if you want to be persuasive with them. I don't see aesthetics as wasteful; I routinely think about the aesthetics of everything I build as an engineer. But the reason is not something like "beauty is good"; it's a consequentialist reason (utilitarian or not) like "if this looks better, I'm happier" (my own happiness is one of my terminal goals) or "people are more likely to buy this product if it looks good" (fulfilling an instrumental goal) or "my boss will be pleased with me if he thinks customers will like how it looks" (instrumental goal). Also, as a consequentialist, aesthetics must be balanced against other things―we spend much more time on the aesthetics of some things than others because the cost-benefit analysis discounts aesthetics for lesser-used parts of the system.
You want to reform the utilitarian part, but it's like telling Protestants to convert to Catholicism. Not only is that an extremely hard goal, but you won't be successful unless you "get inside the mind" of the people whose beliefs you want to change. If you just explain to Protestants (who believe X) why Catholics believe the opposite of X, you won't convince most of them that X is wrong. And the thing is, I think when you learn to think like a consequentialist―not a naive consequentialist* but a mature consequentialist who values deontological rules and virtues for consequentialist reasons―you realize that this is the best way of thinking, whether one is EA or not.
(* We all still remember SBF around here, of course. He might've been a conman, but the scary part is that he may have thought of himself as a consequentialist utilitarian EA, in which case he was a naive consequentialist. For you, that might count against utilitarianism, but for me it illustrates that nuance, care, and maturity are required to do utilitarianism well.)