I want to share a deeply personal and painful journey I've had with the EA movement. It's not an easy story to tell, but I believe there's value in presenting this side of the coin. I really want to protect my anonymity, so I'd ask you to please respect my wish and not reach out to me.

Not so long ago, I became wholeheartedly committed to the EA cause. I left a good job after receiving funding to pursue work that resonated with the movement's principles. My belief was so strong that I relocated to another city, eager to make a meaningful impact. A lot of promises were made. A lot of enthusiasm surrounded EA's future.

Then, the unexpected: my main source of funding collapsed. With its downfall, my life spiraled. I felt deserted by the very community I'd given so much to. Nobody reached out; nobody seemed to care. It was a profound isolation I had never anticipated.

This experience plunged me into a severe major depressive episode, one so grave that I've grappled with all sorts of dark thoughts. I've since sought treatment, but every day is a struggle. For years, I sidelined personal pursuits, including forming meaningful personal and romantic relationships outside the movement, and dedicated myself to issues like a potential AI apocalypse, matters that now seem distant and abstract compared to the day-to-day struggles of those who aren't privileged, gifted Anglo-American youngsters. In prioritizing these concerns, I lost sight of the spontaneous, daily realities that give life its texture and meaning.

My experience has also left me deeply disillusioned with EA's principles and strategies. I've become nihilistic, doubting whether the movement's approach to the world, as noble as it might seem, is genuinely grounded in reality. There's a detachment I've observed, where some of the most crucial elements of our shared human experience, like the importance of spontaneous everyday moments, seem to get lost.

In sharing this, my hope isn't to condemn or vilify the EA movement but to highlight the dangers of over-commitment and the risk of losing oneself in a cause. While it’s commendable to be passionate, it's essential to remember our humanity, the very thing we’re trying to help and protect. 


What do you feel like the community could best do for you going forward?

Thank you for reaching out and asking how the community can support me during this time. Your consideration means a lot, especially given the challenging journey I've been on.

The aftermath of my funder's collapse took a significant toll on me. Despite the well-intentioned promises of assistance for those impacted, I felt ghosted and unsupported. I've now returned to my home country, where I'm navigating a very different set of challenges. Currently, I'm attempting to reinvent myself professionally in a place that feels very removed from the epicenters of EA.

My background is humble and without a traditional support system, which makes my situation even more precarious. The decision to leave a stable job for the promise of making a meaningful impact has had profound consequences. This choice left me with a strong reluctance to re-enter the traditional workforce, even though my skills are clearly valuable and rare, a sentiment only intensified by my ongoing battle with major depression.

While I value the emotional and social support from the EA community, I also recognize the inherent differences in our day-to-day experiences. The juxtaposition of my current challenges against the backdrop of peers who may be in more favorable situations abroad underscores the solitude of my journey.

The only tangible way the community could assist is by contributing towards some of the expenses I'm currently shouldering, for example, therapy. The sessions are a significant outlay, but they're essential for my well-being, especially as I anticipate continuing them for a while.

Another area of potential support could be pro bono mentorship or guidance as I try to reinvent my career as a freelancer/entrepreneur.

Ultimately, while the past cannot be changed, and the challenges are real, the act of reaching out and showing genuine concern provides a measure of comfort. I'm unsure if anything could significantly alter my emotional state right now, but the interest is nonetheless appreciated.

Anytime. I want this to be a community that cares not just for the productive contributors, but also for others who've come in and out of involvement.

I know this doesn't help with anything financial, but the only thing I can tell you without knowing more is that maybe making non-EA friends could be a point of focus for you right now. With family potentially far away and other support structures absent, it might help you dig yourself out a bit. This may be obvious (and unhelpful if what you need is the how and not the what), but I mention it because I sense you may be trying to right yourself first before going out to build that structure, something I've seen a couple of times before, and I think having others around for help and care could really be invaluable.

Sadly can't offer any financial help myself, but happy to talk sometime if you'd like!

I felt deserted by the very community I'd given so much to. Nobody reached out; nobody seemed to care. It was a profound isolation I had never anticipated.

I've felt that too. I didn't have the exact same scenario of losing funding; I lost a job. I hadn't yet built a support network, and without the income from that job I couldn't afford to live in a big, expensive city. Former colleagues who had previously been friendly never contacted me again; they never said "hey, I saw this job posting that I think you would be good for." People I had interacted with at EA social events didn't contact me. People who had reached out to me for one-on-ones at conferences no longer did so. I assume that they only reached out to me previously because I was associated with a prestigious institution, which made me feel used as a means to an end rather than as an end in myself.

It makes me think about "Diversity is being invited to the party; inclusion is being asked to dance." There were plenty of "parties" that I was able to access (conferences, Slack workspaces, chat groups), but I wasn't "asked to dance."

What I perceive as a lack of welcomingness has made me pretty sad at times.

Sending you virtual hugs.

Yeah… I've been part of another community where a few hundred people were scammed out of some $500+ and left stranded in Nevada. (Well, Las Vegas, but Nevada sounds more dramatic.) Hundreds of other people in the community sprang into action within hours, donated and coordinated donation efforts, and helped the others at least get back home.

Only Nonlinear attempted something similar in the EA community (though of course I condemn exploitative treatment of employees!). Open Phil picked up an AI safety prize contest, and I might be missing a few cases. I was very disappointed by how little of this sort of thing happened. Then again, I could've tried to start such an effort myself. I don't have the network, though, so I'm pretty sure I would've failed. I was also in bed with Covid for the first month.

I suppose it really makes more sense to model EA not as a community but as a scientific discipline. I have a degree in CS, but I wasn't disappointed that the CS community didn't support their own after the FTX collapse, because I never had the expectation that that was something that could happen. EA, it seems to me, is better understood within that reference class. (Unfortunately so, not because there's something wrong with scientific disciplines but because I would've loved to be part of a real community too.)

I've been part of another community where a few hundred people were scammed out of some $500+ and left stranded in Nevada. (Well, Las Vegas, but Nevada sounds more dramatic.) Hundreds of other people in the community sprang into action within hours, donated and coordinated donation efforts, and helped the others at least get back home.

I think if this happened with, say, a conference you would see this kind of response within EA. A group of people stuck in a specific place is very different from the FTX collapse.

There was a supportive response, to some degree, in the wake of FTX:

https://forum.effectivealtruism.org/posts/BesfLENShzSMeb7Xi/community-support-given-ftx-situation

https://forum.effectivealtruism.org/posts/7PqmnrBhSX4yCyMCk/effective-peer-support-network-in-ftx-crisis-update

https://forum.effectivealtruism.org/posts/gbjxQuEhjAYsgWz8T/a-job-matching-service-for-affected-ftxff-grantees

Maybe now is a good time to review that response and figure out what could've been done better. For example, maybe some person or organization could've made a point of reaching out individually to each and every FTX grantee.


For me, the OP resonates well beyond just the FTX stuff though. There's an element of making personal sacrifices for the greater good that exists in EA, which doesn't exist in the same way for an academic discipline. I myself found the lack of supportiveness in EA very alienating, and it's a major reason why I'm not very involved these days.

One idea is something like a "Basefund for mental health", to provide free or low-cost therapy for EAs, possibly group therapy. EAs have already made the argument that mental health could be an effective cause area. If that's true, "mental health for EAs" could be a doubly effective cause area: beyond the first-order benefit of improving someone's mental health, you'd be improving it in a way that also enables them to do more good.

Oh true! I was only thinking of financial support for struggling projects and project developers, but those kinds of support are also super valuable!

Rethink Wellbeing is definitely on board with mental health for EAs being an important cause area. 

I don't think personal identity makes too much sense, so preventing the extinction of EA-related values (or maybe even some wider set of prosocial, procivilizational values) could be an underexplored cause area. Some sort of EA crisis fund could be a way to achieve that, but also archival of important insights and such.

Open Phil did some lost-wages work after the FTXsplosion, but I think it was evaluated case by case, and some people may have been left behind.

A quick and impulsive comment.

I am very sorry that this happened to you. I had a somewhat similar experience of disillusionment and depression a few years ago. I eventually realised that it was because my life was deeply imbalanced at that time: I focused on and valued work too much, and I didn't prioritise my wellbeing and happiness sufficiently.

Five years later, I feel happier and more productive than I have ever been. I now feel that I needed my burnout to see the error of my ways and develop a better mentality and lifestyle (although I wish it were not the case). I hope that this event will eventually have a similarly positive outcome for you (although I recognise that it may not). I wish you all the best regardless.

I will also mention that I do think that many EAs could often do more to care for and look out for each other. 

I definitely felt for some time that if I were not impactful for some reason, like poor mental health, no one would take much, if any, time to look out for me. I worried that it wouldn't seem high-value enough for my colleagues, who already had so much important work to do. I don't feel this way about my friends in the community now, which makes me feel much better.

I think that having a strong sense of social support and security is probably more important than many people in the community realise. It is hard to work comfortably with people if you feel that their care for you is (nearly) entirely conditional on you sharing their values and delivering 'impact'. It is also easy to mistake that sort of collaboration for friendship, and I think this is a common mistake.

I totally agree with this. But while disappointment is sad, the global ennui of our times is even worse. I hope you soon resume the pursuit of your own happiness, with the feeling of a duty fulfilled.

Something I find lacking in EA: what about being a farmer, a nurse, or a police officer? As utilitarians, we should have a message for the vast majority of people, those who do the "maintenance" work of civilization. For most people, the main message of utilitarianism should be to find private happiness (in non-extractive ways). And in any case, you never know when a "disproportionate impact" opportunity might arrive in your life.
