This is a Draft Amnesty Day draft. That means it’s not polished, it’s probably not up to my standards, the ideas are not thought out, and I haven’t checked everything. I was explicitly encouraged to post something unfinished! 
Commenting and feedback guidelines: I’m going with the default — please be nice. But constructive feedback is appreciated; please let me know what you think is wrong. Feedback on the structure of the post is also appreciated. 

I first came across effective altruism as a teenager a few years ago, and the core idea instantly clicked for me after reading one post about it. In this post, I will talk about some ways in which my thinking around doing good has evolved over the years as a young person with a strong interest in making the world better.

 

The emotions I feel when thinking about others’ suffering are less intense. I don’t know if teenage-me would have predicted this. As a child, I remember crying a lot when watching videos about animal suffering; I was depressed for an entire summer when I first confronted the idea of infinite hell; I wanted to give away all the money I received on my birthday to people who were less fortunate, because I knew they needed it more.

I think the change is partly from just getting used to it. The first time you confront the horrors of factory farming, it is awful; by the hundredth time, it is hard for my brain to naturally feel the same powerful sadness and anger. Partly, the change comes from starting to believe that it isn’t actually that virtuous to feel strong emotions at others’ suffering. Some of that comes from having been in the effective altruism community, where it is easy to feel that what matters are the results of what you do, not the emotions behind it.

I still feel strong empathy for those who are suffering some of the time, when I am feeling particularly introspective and emotional. However, because of being in the effective altruism community, I am much more aware of my own ranking of the biggest problems, and it is harder for me to direct much empathy towards causes that feel less “big” than factory farming, extreme poverty, and existential risk, even though, in absolute terms, the suffering of people living in terrible conditions in rich countries is still massive.

At the same time, my ability to live according to my values has increased. I haven’t eaten meat in a couple of years, whereas as a child and young teenager this was really difficult for me even though I really wanted to be vegetarian. I have more tools now to do what I think is right, and the biggest of them all is having a social community of others who take their beliefs seriously and try to do good.


I am much less willing to try to hack my brain in order to force myself to do and feel things I endorse. I used to be much more ashamed of some of my feelings and actions, and I felt a strong desire to trick my brain into being more willing to sacrifice myself for others, into working all the time, and into being more ambitious. This involved doing things adjacent to self-deception. It was a really bad idea and caused me a lot of pain and frustration.

Instead, what worked for me was acknowledging that I have “selfish” desires, that I sometimes take actions that actively hurt others, and that I deeply care about things besides just maximising the good. Having a better picture of myself and what I actually value let me work with both the “altruist” and “selfish” sides of me. I can now enjoy spending money and time on things that make me happy without feeling guilty, and then work hard at doing good when it actually comes time to work hard. I also figured out the right incentives and habits to reduce my meat intake, not because I “should”, but by reflecting on what I wanted to do and being kind towards all my wants.


I am more aware of how my actions shape what I do and value in the future. I care more now about cultivating virtue and taking actions that help me become the person I want to be. When I first learned about effective altruism, based on my naive understanding, I wanted to apply the calculating mindset to everything. I tried to make most of my substantial decisions by backchaining from my goals and picking the actions I thought gave me the biggest probability of getting what I wanted. I suppose this is less an EA thing and more a life-experience thing, but it is one way in which my approach to doing good has changed, so I am including it here.

One big way this affects me is that I began to notice how lying was harmful to my soul even when it was useful, along with some other, less major sins like viewing people as a means to an end. I used to think about social interactions in terms of exactly what I could say to people in order to squeeze as much value out of them as possible. I now think this is a bad approach to trying to have a positive impact. The world is complicated, trying to optimise hard in every little place is counterproductive, and doing it in some areas, like social interactions with other people, is just not a wise thing to do regardless of what goal I have.


I care much more about AI existential risk than I used to. At the first EA conference I went to, I was actually shocked by how much everyone else cared about AI x-risk. The main reasons I was doubtful were that 1) it is a pretty weird thing to care so much about, and 2) it is an “interesting” thing to care so much about, especially if you are a techy, very intelligent person in a rich country who likes discussing intellectual topics.

I would like to think I changed my mind because of actually engaging with the arguments, but I also want to acknowledge that the social incentives were such that I would have been considered less cool and less intelligent among people I respected if I had remained skeptical. Incidentally, this is one reason I am concerned about EA community builders being as aggressive as they currently are in pushing EA cause areas: it makes people like me suspicious of the arguments. In fact, even after I started to believe AI x-risk was a big deal, I felt confused and frustrated because I couldn’t figure out how much of my belief came from being in a cool community of impressive people who believed the same thing. I would like to believe that I have more epistemic defenses against cool communities of impressive people now, but I suspect that, as a teenager thrust into that world for the first time, you could have gotten me to believe some untrue things as well if they were similarly high-status.


I got out of my honeymoon phase with the effective altruism community. After coming across effective altruism, I wanted to tell everyone about it, and I was confused and frustrated when other people seemed less enthusiastic about the idea than I was. Now I am much more chill about talking about effective altruism to new people; I usually prefer to talk about my other interests with cool people at parties. Part of this is that I am less optimistic about my own ability to get random people at parties, or friends who already have strong interests in a particular career path, to switch to a high-impact career. Part of it is that talking about it got boring compared to the more fun and interesting things I enjoy learning about.

I also feel somewhat less optimistic overall about the effective altruism community than I used to. Partly this is because, despite its potential, it is still just a small group of people, and most positive impact will come from people and organisations outside the EA community (though they are people and organisations we can leverage). I also think that where it has the potential to do substantial amounts of good, it has the potential to do substantial amounts of harm, and one thing that will make the difference is whether it continues to care about having good epistemics. So I am no longer excited about just “growing” the movement if that doesn’t also help us think more clearly about how to do good better.


I still care deeply about doing massive amounts of good, even if I don’t personally suffer as deeply as I used to when hearing about some of the world’s biggest problems. Somewhere along the way, I noticed that trying to help had just become a natural part of me, and that a life without trying to do good effectively would feel much less exciting and appealing. I wised up a bit and stopped trying to do good in a naive utilitarian way. I relearned some rules for myself: to be more honest, and to care about epistemics over trying to get people to do things I thought were valuable. I started to think misaligned AI is the biggest problem for me to try to work on. And I stopped feeling like a teenager with a massive, embarrassing crush on the EA movement.


 
