
Ever since the fall of FTX and SBF I have been very anxious about the future of EA, worrying that it will stagnate or collapse. I have just been worrying that it will be impossible to get new EA members and that a lot of existing members will want to abandon it. I really love this community and do not want to see it collapse. Could anybody come up with some counterarguments to these anxieties, as a kind of collaborative cognitive behavioral therapy? I don't have any friends who are into EA, so this would really help me out.


 


7 Answers

Have a listen to Sam Harris's most recent podcast, and Matt Yglesias's Substack post about the FTX collapse. I think both of those takes expressed some level of distance and discomfort with EA as a whole. Sam Harris called it "cultish", and Matt Yglesias was also uneasy about various aspects of the community.

But both of them said that the appeal of EA's original motivating ideal remains clear and strong, however badly the community itself has stumbled. Specifically, trying to figure out how to do good more effectively using reason and evidence is a robustly good goal to aim for. I think the appeal of that goal is very wide, and it will remain regardless of the reputation or health of the EA community.

I suspect the health of the community really comes down to how we all respond to the present crisis in the coming months. If the community responds well, with humility and a readiness to learn, adjust, and pivot appropriately, that will increase the odds we're able to overcome the current crisis. If we don't learn from it, the odds of survival are lower, and that would probably be for the best. I have an impulse to link to one of my favorite takes on what specifically the community needs to do to improve. But I think it's best I refrain and focus on the meta-claim that it is critically important we learn and be willing to make very radical pivots if we are to (a) survive and (b) deserve to survive.

To get a bit more concrete: people will be a bit more wary of the community, but I think that's probably healthy. Frankly, I think outsiders are blaming SBF specifically, and crypto generally, more than they are currently blaming EA. If that continues, I don't think people will be unduly wary, and so if EA can take genuine steps to appropriately course-correct, I think it will retain its ability to attract new people to its causes.

I am optimistic the community can learn. Several of EA's most prominent leaders pivoted over the last decade toward longtermist cause areas. We went from (in the early 2010s) focusing on funding charities that help people living now, to (in the mid 2010s) framing EA as roughly equally divided between x-risk, global poverty, animal welfare, and meta-EA, to (from the late 2010s to now) developing a primary focus on x-risk and longtermism. The evolution of EA over that time involved substantial changes at each step. I am not saying whether I think they should pivot away from longtermism specifically as a reaction to the current crisis. But seeing the community and its leaders pivot over the last decade or so gives me some hope they are able to do it again.

In summary, I think the original ideals EA identified are highly attractive and likely to remain strong. The present cause areas and many of the more established institutions are also likely to remain funded and to make solid progress. I think the community does need to learn from the current crisis; if it does not, it might not recover, and might not deserve to recover. But the community has changed its focus in the past, and that gives me hope we can do it again.

I wouldn't worry. Even on the most basic level, there's still $400 million per annum of Open Phil money up for grabs, so I don't think the EA community is going anywhere in a hurry.

I have just been worrying that it will be impossible to get new EA members

I find this pretty reassuring:

While the majority of news about EA is negative right now, interestingly GWWC has had more trial members join halfway through November than the entirety of September and October and 10 times more than November last year.

https://www.givingwhatwecan.org/about-us/members

I don't want to say all publicity is good publicity on balance or that we shouldn't learn from our mistakes, but I really do think people are generally a lot more worried about the future of EA right now than they should be.

EA has been on an upward trend for quite some time, regardless of what's happening in the current news cycle. Furthermore, most news coverage in reputable papers I've seen didn't even mention EA, and that which did often presented EA as a victim in this fraud, rather than a perpetrator.

A couple of excerpts from Geoffrey Miller's post in case you haven't seen it:

We are part of this week's monetizable outrage narrative. Every other week, ever since the development of the 24-hour news cycle, has had its own monetizable outrage narrative. If you've never been part of an outrage narrative before, welcome to the club. It sucks. It leaves scars. But it is survivable. (Speaking as someone who has survived my own share of public controversy, cancellation, and outrage narratives, and who has worked in several academic subfields that are routinely demonized by the press.)

Also, haters gonna hate.

...

There will come a time, maybe in the 2050s, when you may be sitting in front of a cheerful Christmas fireplace, a grandkid bouncing on your knee, and your adult kids may ask you to tell them once more the tale of the Great FTX Crisis of 2022, and how it all played out, and died down, and how EA survived and prospered. You won't remember all the breathless EA forum posts, the in-fighting, and the crisis management. You'll just remember that you either kept your faith in the cause and the community -- or you didn't. 

And here's another person who's "survived [his] own share of public controversy, cancellation, and outrage narratives": Peter Singer.

Hell, he even sparked this movement.

I empathize with you; I've likewise been thinking more about EA since the fall of SBF.

I think one thing to note is "customer acquisition", or simple organizational outreach and recruitment. I was "recruited" into EA through on-campus college initiatives, whereas many of the outlets fomenting ire against the EA community are written publications in legacy-prestige, old-media entities like the New Yorker, the Economist, the NYT, etc. It's a different customer segment funnel.

I think these publications tend to speak to people with more years behind them than ahead of them, who might not be as invested in being an idealistic futurist trying to "do good" with 80,000 hours of high-impact contributions.

Additionally, I think it's sagacious to see organizations as living entities to some extent (largely because they're populated by actual organisms). I don't think any organization can maintain a stasis of pristine public image or stay permanently in the public consciousness. I think some bit of notoriety is normal periodically.

Lastly, from a CBT perspective I'm seeing some personalization (i.e., you believe you're responsible for the fate of thousands or millions of EAers), control fallacies (you think you can directly influence the outcome through worry), and jumping to conclusions (you assume many people are associating SBF with EA more than with, say, an entrenched crypto ecosystem). Point is, if I were you, I wouldn't trust my thoughts on this one. They're just thoughts.

Hope this helps.
 

I've also been very upset since the FTX scandal began, and I love this community too. I think you're right that EA will lose some people. But I am not so worried the community will collapse (although it's possible that ending the global EA community could be a good thing). People's memories are short, and all things pass. In one year, I would be willing to bet there will still be lots of (and still not enough!) good people working on and donating to important, tractable, and neglected causes. There will still be an EA Forum with lively debates happening, and arguments about FTX will by that point make up a small fraction of the content. There will still be new people discovering EA and getting inspired by the potential to increase their positive impact in the world.

To be sure, I do think we should be worried* about the future of EA right now. But more in the sense of worried about whether EA can remain true to its core values and ideals going forward than about whether it can survive in some form.

--

*Note that when I say "we should be worried", I actually mean "we should be putting careful attention toward" rather than "we should be consumed by anxiety about". Be kind to yourself, and if you're feeling more of the latter, now may be a good time to double down on self-care.

Comments (2)

I’ll still be around and surely many others. If EA collapses, we can meet up and start a new one. 🤗

I'd appreciate that too
