I don’t want to save the world. I don’t want to tile the universe with hedonium. I don’t want to be cuckolded by someone else’s pretty network-TV values. I don’t want to do anything I don’t want to do, and I think that’s what (bad) EAs, Mother Teresa, and proselytizing Christians all get wrong. Doing things because they sound nice and pretty and someone else says they’re morally good suuucks. Who decided that warm fuzzies, QALYs, or shrimp lives saved are even good axes to optimize? Because surely everyone doesn’t arrive at that conclusion independently. Optimizing such universally acceptable, bland metrics makes me feel like one of those blobby, soulless corporate automata in bad tech advertisements.

I don’t see why people obsess over the idea of universal ethics and doing the prosocial thing. There’s no such thing as the Universal Best Thing, and professing the high virtue of maximizing happiness smacks of an over-RLHFed chatbot. Altruism might be a “virtue”, as in most people’s evolved and social environments cause them to value it, but it doesn’t have to be. The cosmos doesn’t care what values you have. Which totally frees you from the weight of “moral imperatives” and social pressures to do the right thing.

There comes a time in most conscientious, top-of-distribution kids’ lives when they decide to Save the World. This is very bad. Unless they really do get a deep, intrinsic satisfaction from maximizing expected global happiness, they’ll be in for a world of pain later on. After years of spinning their wheels, not getting anywhere, they’ll realize that they hate the whole principle they’ve built their life around. That, deep down, their truest passion doesn’t (and doesn’t have to) involve the number of people suffering malaria, the quantity of sentient shrimps being factory farmed, or how many trillion people could be happy in a way they aren’t 1000 years from now. I claim that scope insensitivity isn’t a bug. That there are no bugs when it comes to values. That you should care about exactly what you want to care about. That if you want to team up and save the world from AI or poverty or mortality, you can, but you don’t have to. You have the freedom to care about whatever you want and shouldn’t feel social guilt for not liking the same values everyone else does. Their values are just as meaningful (or meaningless) as yours. Peer pressure is an evolved strategy to elicit collaboration in goofy mesa-optimizers like humans, not an indication of some true higher virtue. 

Life is complex, and I really doubt that what you should care about can be boiled down to something so simple as quality-adjusted life-years. I doubt it can be boiled down at all. You should care about whatever you care about, and that probably won’t fit any neat moral templates an online forum hands you. It’ll probably be complex, confused, and logically inconsistent, and I don’t think that’s a bad thing.

Why do I care about this so much? Because I got stuck in exactly this trap at the ripe old age of 12, and it fucked me up good. I decided I’d save the world, because a lot of very smart people on a very cool site said that I should. That it would make me feel good and be good. That it mattered. The result? Years of guilt, unproductivity, and apathy. Ending up a moral zombie that didn’t know how to care and couldn’t feel emotion. Wondering why enlightenment felt like hell. If some guy promised to send you to secular heaven if you just let him fuck your wife, you’d tell him to hit the road. But people jump straight into the arms of this moral cuckoldry.  Choosing and caring about your values is a very deep part of human nature and identity, and you shouldn’t let someone else do it for you.

This advice probably sounds really obvious. But it wasn’t for me, so I hope it’ll help other people too. Don’t let someone else choose what you care about. Your values probably won’t look exactly like everyone else’s and they certainly shouldn’t feel like a moral imperative. Choose values that sound exciting because life’s short, time’s short, and none of it matters in the end anyway. As an optimizing agent in an incredibly nebulous and dark world, the best you can do is what you think is personally good. There are lots of equally valid goals to choose from. Infinitely many, in fact. For me, it’s curiosity and understanding of the universe. It directs my life not because I think it sounds pretty or prosocial, but because it’s tasty. It feels good to learn more and uncover the truth, and I’m a hell of a lot happier and more effective doing that than puttering around pretending to care about the exact count of humans experiencing bliss. There are lots of other values too. You can optimize anything that speaks to you - relationships, cool trains and fast cars, pure hedonistic pleasure, number of happy people in the world - and you shouldn’t feel bad that it’s not what your culty clique wants from you. This kind of “antisocial” freedom is pretty unfashionable, especially in parts of the alignment/EA community, but I think a lot more people think it than say it explicitly. There’s value in giving explicit permission to confused newcomers to not get trapped in moral chains, because it’s really easy to hurt yourself doing that.

Save the world if you want to, but please don’t if you don’t want to.

Comments (2)

So I think that this post is best seen as an emotional declaration, rather than an argument or reasoned case for/against EA. I'm going to split my response into two comments: one about the emotional side here; I'll leave the more philosophical engagement for a second one.

All this to say, other people have gone through what you've gone through. Some have made a bargain with the totalising aspect of EA, others have just left the whole thing, while a few have become radicalised enemies of EA (it looks like you might be in that camp?).

I hope that part of "Third Wave EA" can be about finally showing how naïve totalism isn't entailed, even instrumentally, by the movement, and also just about being more warm and welcoming and supportive to people, especially when they're struggling in the way that you were.

As for the object-level stuff in this post, I disagree with a lot of it (and I'll write that separately), but I don't think that's as important as what I wrote in this comment.

Regardless of anything else, and most importantly above all, I'm glad you're doing better.

[Note: this is a second response, focusing on the arguments as I see them in the post, a lot of which I think are wrong. This is not as important as my first response on tone/emotion, so please read that first.]

Trying my best to understand the argument in this post, you seem to have become a "dinoman extremist" (as Nick Cammarata would phrase it), where the only moral imperative you accept is that "You should care about whatever you care about," and you reject any other moral imperative or reasoning as valid. You seem to contradict this by saying "There are lots of equally valid goals to choose from. Infinitely many, in fact." That might perhaps be true descriptively, but you aren't making a descriptive claim; you're clearly making a normative one. And if all moral goals are valid, then not caring about what you care about is also a valid thing to pursue, as is "saving the world", even if you don't want to.

Perhaps a better reading of what you're saying is that people's life goals and moral values shouldn't be set externally, but should come from within. I think, again, this is just what descriptively happens in every case? You seem to think, practically, that new EAs just delete their own moral values and Ctrl+C/Ctrl+V some community consensus in their place, but that just doesn't seem empirically true. Not to say there are no problems with deferral in the community, but mostly there's a lot of intra-EA disagreement about what to value: non-human animals, future people, digital sentience, direct work, improving systems and science, and so on. In my own case, meeting EA ideas led me to reflect personally and deeply about what I cared about and whether I was actually living up to those values. So EA definitely doesn't seem incompatible with what you say here.

But going back to your 'all values are valid' claim, you should accept where that goes. If I value causing animals pain and harm, so that I want to work with factory farms to make sure animals penned in cages suffer the most exquisite and heightened form of suffering, cenobite-style, do you really think there are no grounds on which such a value is any worse than caring for other humans regardless of their geographical location, or caring for your own family and friends? I could pick even worse examples than this, and maybe you really are happy to accept everything 'anything goes' implies, but I don't think you are, and that's basically a dispositive result of trying to take that moral rule seriously.

This links to the bit where you say "Choose values that sound exciting because life’s short, time’s short, and none of it matters in the end anyway," where, instead of arguing for choosing values such as individual flourishing, you just shrug your shoulders and say that the fact that life is short makes it not meaningful. Honestly, and I'm sorry if this comes off as rude, it reminds me of a smart-ish teenager who's just discovered nihilism for the first time. The older I get, the more I think all of it matters. Just because the heat death of the universe seems inevitable doesn't make anything we do now less valuable. No matter what happens in the future or has happened in the past, the good I can do now will always have happened, and will always have meaning.

Finally, you cast a ton of aspersions on EA in this piece, which, reading between the lines, seem mostly to be about AI-safety maximalism? But when you write "Doing things because they sound nice and pretty and someone else says they’re morally good" as if that were a description of what's actually going on, again, that just doesn't match up with the world I live in, or the EAs I know or am aware of. It really seems like you're not interested in accuracy so much as getting pain off your chest. In particular, the anger comes across when you use a bunch of nasty phrases - referring to EAs as "cucks", "RLHF-ed chatbots", part of a "culty clique" and so on. Perhaps this is understandable as a reaction against/rejection of something you experienced as a damaging, totalising philosophy, but boy did it leave a really bad impression.

To be very clear about the above point - You leaving/not-liking EA doesn't make me think less of you, but being needlessly cruel to others in such a careless way does.

To be frank, I don't think your empirical points are accurate, and your philosophical points are confused. I think you have a lot of personal healing to do first before you can view these issues accurately, and to that end I hope you find a path in life that allows you to heal and flourish without harming yourself or others.
