I don’t want to save the world. I don’t want to tile the universe with hedonium. I don’t want to be cuckolded by someone else’s pretty network-TV values. I don’t want to do anything I don’t want to do, and I think that’s what (bad) EAs, Mother Teresa, and proselytizing Christians all get wrong. Doing things because they sound nice and pretty and someone else says they’re morally good suuucks. Who decided that warm fuzzies, QALYs, or shrimp lives saved are good axes to optimize? Because surely everyone doesn’t arrive at that conclusion independently. Optimizing such universally acceptable, bland metrics makes me feel like one of those blobby, soulless corporate automata in bad tech advertisements.

I don’t see why people obsess over the idea of universal ethics and doing the prosocial thing. There’s no such thing as the Universal Best Thing, and professing the high virtue of maximizing happiness smacks of an over-RLHFed chatbot. Altruism might be a “virtue”, as in most people’s evolved and social environments cause them to value it, but it doesn’t have to be. The cosmos doesn’t care what values you have. Which totally frees you from the weight of “moral imperatives” and social pressures to do the right thing.

There comes a time in most conscientious, top-of-distribution kids’ lives when they decide to Save the World. This is very bad. Unless they really do get a deep, intrinsic satisfaction from maximizing expected global happiness, they’ll be in for a world of pain later on. After years of spinning their wheels, not getting anywhere, they’ll realize that they hate the whole principle they’ve built their life around. That, deep down, their truest passion doesn’t (and doesn’t have to) involve the number of people suffering from malaria, the quantity of sentient shrimp being factory farmed, or how many trillion people could be happy in a way they aren’t 1000 years from now. I claim that scope insensitivity isn’t a bug. That there are no bugs when it comes to values. That you should care about exactly what you want to care about. That if you want to team up and save the world from AI or poverty or mortality, you can, but you don’t have to. You have the freedom to care about whatever you want and shouldn’t feel social guilt for not liking the same values everyone else does. Their values are just as meaningful (or meaningless) as yours. Peer pressure is an evolved strategy to elicit collaboration in goofy mesa-optimizers like humans, not an indication of some true higher virtue.

Life is complex, and I really doubt that what you should care about can be boiled down to something so simple as quality-adjusted life-years. I doubt it can be boiled down at all. You should care about whatever you care about, and that probably won’t fit any neat moral templates an online forum hands you. It'll probably be complex, confused, and logically inconsistent, and I don't think that's a bad thing.

Why do I care about this so much? Because I got stuck in exactly this trap at the ripe old age of 12, and it fucked me up good. I decided I’d save the world, because a lot of very smart people on a very cool site said that I should. That it would make me feel good and be good. That it mattered. The result? Years of guilt, unproductivity, and apathy. Ending up a moral zombie that didn’t know how to care and couldn’t feel emotion. Wondering why enlightenment felt like hell. If some guy promised to send you to secular heaven if you just let him fuck your wife, you’d tell him to hit the road. But people jump straight into the arms of this moral cuckoldry.  Choosing and caring about your values is a very deep part of human nature and identity, and you shouldn’t let someone else do it for you.

This advice probably sounds really obvious. But it wasn’t for me, so I hope it’ll help other people too. Don’t let someone else choose what you care about. Your values probably won’t look exactly like everyone else’s and they certainly shouldn’t feel like a moral imperative. Choose values that sound exciting because life’s short, time’s short, and none of it matters in the end anyway. As an optimizing agent in an incredibly nebulous and dark world, the best you can do is what you think is personally good. There are lots of equally valid goals to choose from. Infinitely many, in fact. For me, it’s curiosity and understanding of the universe. It directs my life not because I think it sounds pretty or prosocial, but because it’s tasty. It feels good to learn more and uncover the truth, and I’m a hell of a lot happier and more effective doing that than puttering around pretending to care about the exact count of humans experiencing bliss. There are lots of other values too. You can optimize anything that speaks to you - relationships, cool trains and fast cars, pure hedonistic pleasure, number of happy people in the world - and you shouldn’t feel bad that it’s not what your culty clique wants from you. This kind of “antisocial” freedom is pretty unfashionable, especially in parts of the alignment/EA community, but I think a lot more people think it than say it explicitly. There’s value in giving explicit permission to confused newcomers to not get trapped in moral chains, because it’s really easy to hurt yourself doing that.

Save the world if you want to, but please don’t if you don’t want to.






So I think that this post is best seen as an emotional declaration, rather than an argument or reasoned case for/against EA. I'm going to split up my response into two comments, one about the emotional side, I'll leave the more philosophical engagement for a second one.

All this to say, other people have gone through what you've gone through. Some people have made a bargain with the totalising aspect of EA, others have just left the whole thing, while a few have become radicalised enemies of EA (it looks like you might be in that camp?).

I hope that part of "Third Wave EA" can be about finally showing how naïve totalism isn't entailed, even instrumentally, by the movement, and also just about being more warm and welcoming and supportive to people, especially when they're struggling in the way that you were.

As for the object-level stuff in this post, I disagree with a lot of it (and I'll write that separately), but I don't think that's as important as what I wrote in this comment.

Regardless of anything else, and most importantly above all, I'm glad you're doing better.

[note this is a second response focusing on the arguments as I see them in the post, a lot of which I think are wrong. This is not as important as my first response on the tone/emotion, so please read that first]

Trying my best to understand the argument in this post, you seem to have become a "dinoman extremist" (as Nick Cammarata would phrase it), where the only moral imperative you accept is "You should care about whatever you care about", and you reject any other moral imperative or reasoning as invalid. You seem to contradict this by saying "There are lots of equally valid goals to choose from. Infinitely many, in fact." This might be true descriptively, perhaps, but you aren't making a descriptive claim — you're clearly making a normative one. So if all moral goals are valid, then not caring about what you care about is also a valid thing to pursue, as is "saving the world", even if you don't want to.

Perhaps a better understanding of what you're saying is that people's life goals and moral values shouldn't be set externally, but should come from within. I think, again, this is just what descriptively happens in every case? You seem to think, practically, that new EAs just delete their moral values and Ctrl+C/Ctrl+V them with some community consensus, but that just empirically doesn't seem true? Not to say that there are no problems with deferral in the community, but mostly there's a lot of intra-EA disagreement about what to value, from non-human animals, to future people, to digital sentience, to direct work, to improving systems and science, and so on. In my own case, meeting EA ideas led me to reflect personally and deeply about what I cared about and whether I was actually living to uphold those values. So EA definitely doesn't seem incompatible with what you say here.

But going back to your 'all values are valid' claim, you should accept where that goes. If I value causing animals pain and harm, so that I want to work with factory farms to make sure animals penned in cages suffer the most exquisite and heightened form of suffering, cenobite-style, do you really think you have no grounds to say that such a value is worse than caring for other humans regardless of their geographical location, or caring for your own family and friends? I could pick even worse examples than this, and maybe you really are happy to accept what 'anything goes' means, but I don't think you are, and that's basically a dispositive result of trying to take that moral rule seriously.

This links to the bit where you say "Choose values that sound exciting because life’s short, time’s short, and none of it matters in the end anyway", where instead of arguing for choosing values such as individual flourishing, you just shrug your shoulders and say that life being short makes it meaningless. Honestly, and I'm sorry if this comes off as rude, it reminds me of a smart-ish teenager who's just discovered nihilism for the first time. The older I get, the more I think all of it matters. Just because the heat death of the universe seems inevitable, it doesn't make anything we do now less valuable. No matter what happens in the future or has happened in the past, the good I can do now will always have happened, and will always have meaning.

Finally, you cast a ton of aspersions about EA in this piece, which, reading between the lines, seems to be about AI-safety maximalism? But when you write "Doing things because they sound nice and pretty and someone else says they’re morally good" as if that is a description of what's going on, again, that just doesn't match up with the world I live in, or the EAs I know or am aware of. It really seems like you're not interested in accuracy so much as getting pain off your chest. In particular, the anger comes across when you use a bunch of nasty phrases — referring to EAs as "cucks", "RLHF-ed chatbots", part of a "culty clique", and so on. Perhaps this is understandable as a reaction against, or rejection of, something you experienced as a damaging, totalising philosophy, but boy did it leave a really bad impression.

To be very clear about the above point - You leaving/not-liking EA doesn't make me think less of you, but being needlessly cruel to others in such a careless way does.

To be frank, I don't think your empirical points are accurate, and your philosophical points are confused. I think you have a lot of personal healing to do first before you can view these issues accurately, and to that end I hope you find a path in life that allows you to heal and flourish without harming yourself or others.
