This is a frame that I have found useful and I'm sharing in case others find it useful.
EA has arguably gone through several waves:
Waves of EA (highly simplified model; see caveats below)

| | First wave | Second wave | Third wave |
| --- | --- | --- | --- |
| Time period | 2010[1]-2017[2] | 2017-2023 | 2023-?? |
| Primary constraint | Money | Talent | ??? |
| Primary call to action | Donations to effective charities | Career change | |
| Primary target audience | Middle-upper-class people | University students and early career professionals | |
| Flagship cause area | Global health and development | Longtermism | |
| Major hubs | Oxford > SF Bay > Berlin (?) | SF Bay > Oxford > London > DC > Boston | |
The boundaries between waves are obviously vague and somewhat arbitrary. This table is also overly simplistic: I first got involved in EA through animal welfare, for example, which doesn't appear on the table at all. But I think this is a decent first approximation.
It’s not entirely clear to me whether we are actually in a third wave. People often overestimate the extent to which their local circumstances are unique. But two main things make me think we are now in a “wave” distinct from, say, mid-2022:
- Substantially less money, through a combination of Meta stock falling, FTX collapsing, and general market/crypto downturns[3]
- AI safety becoming (relatively) mainstream
If I had to choose an arbitrary date for the beginning of the third wave, I might choose March 22, 2023, when the FLI open letter on pausing AI experiments was published.
It remains to be seen if public concern about AI is sustained – Superintelligence was endorsed by a bunch of fancy people when it first came out, but they mostly faded away. If it is sustained though, I think EA will be in a qualitatively new regime: one where AI safety worries are common, AI safety is getting a lot of coverage, people with expertise in AI safety might get into important rooms, and where the field might be less neglected.
Third wave EA: what are some possibilities?
Here are a few random ideas; I am not intending to imply that these are the most likely scenarios.
| Example future scenario | Politics and Civil Society[4] | Forefront of weirdness | Return to non-AI causes |
| --- | --- | --- | --- |
| Description of the possible “third wave” (chosen to illustrate the breadth of possibilities) | There is substantial public appetite to heavily regulate AI. The technical challenges end up being relatively easy. The archetypal EA project is running a grassroots petition for a moratorium on AI. | AI safety becomes mainstream and "spins out" of EA. EA stays at the forefront of weirdness, and the people who were previously interested in AI safety turn their focus to digital sentience, acausal moral trade, and other issues that still fall outside the Overton window. | AI safety becomes mainstream and "spins out" of EA. AI safety advocates leave EA, and vibes shift back to “first wave” EA. |
| Primary constraint | Political will | Research | Money |
| Primary call to action | Voting/advocacy | Research | Donations |
| Primary target audience | Voters in US/EU | Future researchers (university students) | Middle-upper-class people |
| Flagship cause area | AI regulation | Digital sentience | Animal welfare |
Where do we go from here?
- I’m interested in organizing more projects like EA Strategy Fortnight. I don’t feel very confident about what third wave EA should look like, or even that there will be a third wave, but it does seem worth spending time discussing the possibilities.
- I'm particularly interested in claims that there isn't, or shouldn't be, a third wave of EA (i.e. please feel free to disagree with the whole model, argue that we’re still in wave 2, argue we might be moving towards wave 3 but shouldn’t be, etc.).
- I’m also interested in generating cruxes and forecasts about those cruxes. A lot of these are about the counterfactual value of EA, e.g. will digital sentience become “a thing” without EA involvement?
This post is part of EA Strategy Fortnight. You can see other Strategy Fortnight posts here.
Thanks to a bunch of people for comments on earlier drafts, including ~half of my coworkers at CEA, particularly Lizka. The “waves” terminology is stolen from feminism, and the idea that EA has been through waves and is entering a third one is adapted from Will MacAskill; I think he has a slightly different framing in mind, but he still deserves a lot of the credit here.
[1] Starting date is somewhat arbitrarily chosen from the history listed here.
[2] Arbitrarily choosing the coining of the word “longtermism” as the starting event of the second wave.
[3] Although Meta stock is back up since I first wrote this; I would be appreciative if someone could do an update on EA funding.
[4] Analogy from Will MacAskill: Quakers:EA::Abolition:AI Safety.
Thanks for taking my comment in the spirit intended. As a noncentral EA it's not obvious to me why EA has little art, but it could be something simple like artists not historically being attracted to EA. It occurs to me that membership drives have often been at elite universities that maybe don't have lots of art majors.
Speaking personally, I'm an engineer and an (unpaid) writer. As such I want to play to my strengths, and any time I spend making art is time not spent using my valuable specialized skills... though I did start using AI art in my latest article about AI (well, duh). I write almost exclusively about things I think are very important, because that feeling of importance is usually what drives me to write. But the result has been that my audience is normally very close to zero (even when writing on the EA Forum), which has caused me to write much less and, when I do write, to write on Twitter or in the comment sections of ACX instead. Okay, I guess I'm not really going anywhere with this line of thought, but it's a painful fact that I sometimes feel like ranting about.

Here are a couple of vaguely related hypotheses: (i) maybe there is some EA art, but it's not promoted well so we don't see it; (ii) EAs can imagine art being potentially valuable, but are extremely uncertain about how and when it should be used, and so don't fund it or put time into it. EAs want to do "the most impactful thing they can" and it's hard to believe art is it. However, you can argue that EA art is neglected (even though art is commonplace) and that certain ways of using art would be impactful, much as I argued that some of the most important climate change interventions are neglected (even though climate change interventions are commonplace). I would further argue that artists are famously inexpensive to hire, which can boost the benefit/cost ratio. (Related: the most perplexing thing to me about EA is having hubs in places so expensive it would pain me to live there; I suggested Toledo, which is inexpensive and near two major cities, earning no votes or comments. Story of my life, I swear, and I've been thinking of starting a blog called "No one listens to me".)
I noticed that too, but I assumed that (for unknown reasons) it worked better for big shifts (pagan to Christian) than more modest ones. But I mentioned "Protestant to Catholic" specifically because the former group was formed in opposition to the latter. I used to be Mormon; we had a whole doctrine about why our religion made more sense and was the True One, and it's hard to imagine any other sect could've come along and changed my mind unless they could counter the exact rationales I had learned from my church. As I see it, mature consequentialist utilitarianism is a lot like this. Unless you seem to understand it very well, I will perceive your pushback against it as being the result of misunderstanding it.
So, if you say utilitarianism is only fit for robots, I just say: nope. You say: utilitarianism is a mathematical algorithm. I say: although it can be put into mathematical models, it can also be imprinted deeply in your mind, and (if you're highly intelligent and rational) it may work better there than in a traditional computer program. This is because humans can more easily take many nuances into account in their minds than type those nuances into a program. Thus, while mental calculations are imprecise, they are richer in detail which can (with practice) lead to relatively good decisions (both relative to decisions suggested by a computer program that lacks important nuances, and relative to human decisions that are rooted in deontology, virtue ethics, conventional wisdom, popular ideology, or legal precedent).
I did add a caveat there about intelligence and rationality, because the strongest argument against utilitarianism that comes to mind is that it requires a lot of mental horsepower and discipline to be used well as a decision procedure. This is also why I value rules and virtues: a mathematically ideal consequentialist would have no need of them per se, but such a being cannot exist because it would require too much computational power. I think of rules and virtues as a way of computationally bounding otherwise intractable mental calculations, though they are also very useful for predicting public perception of one's actions (as most of the public primarily views morality through the lenses of rules and virtues). Related: superforecasters are human, and I don't think it's a coincidence that lots of EAs like forecasting as a test of intelligence and rationality.
However, I think that consequentialist utilitarianism (CU) has value for people of all intelligence levels for judging which rules and virtues are good and which are not. For example, we can explain in CU terms why common rules such as "don't steal" and "don't lie" are usually justified, and by the same means it is hard to justify rules like "don't masturbate" or the Third Reich's rule that only non-Jewish people of “German or kindred blood” could be citizens (except via strange axioms).
This makes it very valuable from a secular perspective: without CU, what other foundation is there to judge proposed rules or virtues? Most people, it seems to me, just go with the flow: whatever rules/virtues are promoted by trusted people are assumed to be good. This leads to people acting like lemmings, sometimes believing good things and other times bad things according to whatever is popular in their tribe/group, since they have no foundational principle on which to judge (they do have principles promoted by other people, which, again, could be good or bad). While Christians say "God is my rock", I say "these two axioms are my bedrock, which led me to a mountain I call mature consequentialist utilitarianism". I could say much more on this but alas, this is a mere comment in a thread and writing takes too much time. But here's a story I love about Heartstone, the magic gemstone of morality.
For predictive decision-making, choosing actions via CU works better the more processing power you use (whether mental or silicon). Nevertheless, after arriving at a decision, it should always be possible to explain the decision to people without access to the same horsepower. We shouldn't say "My giant brain determined this to be the right decision, via reasoning so advanced that your puny mind cannot comprehend it. Trust me." It seems to me that anyone using CU should be able to explain (and defend) their decision in CU terms that don't require high intelligence to understand. However, (i) the audience cannot verify that the decision is correct without using at least as much computing power; they can only verify that the decision sounds reasonable, (ii) different people have different values, which can correctly lead to disagreement about the right course of action, and (iii) there are always numerous ways that an audience can misunderstand what was said, even if it was said in plain and unambiguous language (I suspect this is because many people prefer other modes of thought, not because they can't think in a consequentialist manner).
Now, just in case I sound a bit "robotic" here, note that I like the way I am. Not because I like sounding like Spock or Data, but because there is a whole life journey spanning decades that led to where I am now, a journey where I compared different ways of being and found what seem to be the best, most useful and truth-centered principles from which to derive my beliefs and goals. (Plus I've always loved computers, so a computational framing comes naturally.)
I think a lot of EAs have an above-average level of empathy and sense of responsibility. My poth (hypothesis) is that these things are what caused them to join EA in the first place, and also what caused them to have this anxiety about lives not saved and good not done. This poth leads me to predict that such a person will have had some anxiety from the first day they found out about the disease and starvation in Africa, even if joining EA increased that anxiety further. For me personally: global poverty has bothered me since I first learned about it; my deep yearning to improve the world appeared 15+ years before I learned about EA; I don't feel like my anxiety increased after joining EA; and the analysis we're talking about (in which there is a utilitarian justification not to feel bad about only giving 10% of our income) helps me not to feel too bad about the limits of my altruism, although I still want to give much more to fund direct work, mainly because I have little confidence in my ability to persuade other EAs about what I think needs to be done (only 31 karma including my own strong upvote? Yikes! 😳😱)
Is that true? I'm not surprised if military personnel make a lot of art, but I don't expect it from the formal structures or leadership. But if a military does spend money on art, I expect it's because some people advocated for art to sympathetic ears that controlled the purse strings, and that this worked either because they were persuasive or because people liked art. The same should work in EA if you find a framing that appeals to EAs. (Which reminds me of the odd fact that although I identify strongly with common EA beliefs and principles, I have little confidence in my ability to persuade other EAs, as I am often downvoted or not upvoted. I cannot explain this.)
My guess is that it's a combination of