
People often cling to their current identity as if the status quo were the best possible version of themselves. For example, they think they are “a person who is just bad at math” but don’t intend to change that because they have formed an identity around it. This goes even further: when offered the hypothetical of gaining 50 IQ points with no strings attached, some people reject it because “they wouldn’t be the same”.
Whenever I notice something like that in myself, I feel caught red-handed and ashamed. Rationally, I don’t want to be a person who forms an identity around a suboptimal version of myself, but realistically I often do.

I think this tendency is suboptimal for future versions of ourselves, our surroundings and society at large. Thus, I want to create a strong mantra and emotional commitment against it. I don’t want to cling to my current identity.

I want to be replaced!

Epistemic status: Motivational (inspired by John Wentworth)

By a better future self

I don’t want to be a person who forms an identity around their current flaws and laughs them off with “I’m just the person who is bad at writing”, as if it were some inherent feature of mine. I want to be replaced by a future, better version of myself, somebody who is similar to me in most aspects but a better writer.

I think this intuition works especially well with a Ship of Theseus-like, continuous, flowing version of identity. The cells our bodies consist of are replaced multiple times within our lifetime, our bodies and brains change pretty drastically throughout childhood, puberty and adulthood, and yet we always build an identity around the current version of ourselves. Why not accept the fact that we change and form an identity that welcomes better versions of the status quo rather than rejecting them?

I want to be replaced by my future better self!

By a better partner

Some relationship decisions are driven by the fear of losing our partners, independent of the consequences for others. For example, if Bob is in a relationship with Alice but Alice realizes that she likes Chakresh better, Bob will traditionally fight for her and try to prevent her from being with Chakresh. After a while, the dust settles and everyone is unhappy. Bob is unhappy because he can’t make Alice as happy as she could be, and Alice+Chakresh are unhappy because they are not with each other. 

The entire situation could be so much better if Bob just allowed Alice to be with Chakresh. Clearly, this goes against most social norms and instincts and yet it seems better in the long term to accept when you are a suboptimal fit. Furthermore, the easier it is for you to accept the situation, the less damage is done to everyone involved. I don’t want to be Bob.

I want to be replaced by a better partner!

By a better employee

People cling to their jobs as if it’s the only thing they can ever do. We hate to be fired, we hate being demoted and we hate it when other people are promoted while we are “the next in line”. For the economy at large, it is good when inefficient jobs are cut, bad employees are demoted and good ones promoted. Even for ourselves, it’s usually not as bad as we assume. If a better person gets promoted, that will likely benefit the company at large and thus us. If we get fired for bad work, we might not have been a perfect fit to begin with*. In the long run, it often means finding a better-fitting position or is better for society.

I want to be replaced by a better employee.

*I know that there are people in less privileged positions for whom this mantra doesn’t make any sense.

By a machine

Historically, people have strongly disliked losing their jobs to automation. The weavers rioted against the introduction of the power loom, and the coach industry lobbied against the car. And yet, from most other perspectives, automation has been a wild success: people do fewer risky and monotonous jobs and are more productive.

So rather than being sad about the short-term loss, I should look forward to humanity’s gain, and possibly to more exciting opportunities, when I’m replaced.

I want my job to be replaced by a machine.

By better humans

When I talk to people about future generations that could be vastly healthier, happier and more intelligent than we are, I often get responses along the lines of “Hmmm not sure, they are so … different”. And if I asked a person in the middle ages about a person in 2020 who lives for 80 years, is healthier and can communicate with someone in Australia in real-time, they would probably respond “Hmmm not sure, they are so … different”. Also, they wouldn’t know what Australia is. 
It is as if people believe that the current state of progress is, by some peculiar coincidence, the optimum.

I look forward to future generations having better lives than we do and I’m happy if we get there faster. 

I want to be replaced by better humans!

By a simulation

The physical world has limits that simulations can ignore. While there is limited space, food, water, metals, etc. on earth, we can basically have as much as we want in a simulation. Furthermore, our bodies have limits. We can’t fly, we can’t breathe underwater and our happiness is limited to a small part of the possible spectrum by our biology. 

When people think of simulations, they often think of rats on heroin or Matrix-like brain-in-a-vat scenarios. I prefer to think of a San Junipero where people have fun, nobody has to fight for resources, diseases don’t exist and no one needs to work if they don’t want to. A good simulation isn’t cold; it should feel like the best imaginable holiday, forever.

I want to be replaced by a simulation!

Last words

Don’t get me wrong, most of these suck in the short term. It sucks to admit your weaknesses, it sucks to lose your partner, it sucks to get fired or demoted and it sucks to be automated away. It is also sad to realize that future sentient beings, whether carbon- or silicon-based, might lead much better lives.
And yet, I don’t want to be bitter about these things. I want to accept that they are better for my future self, people I care about and society at large. 

I want to be replaced!


Comments



I certainly want to be replaced by better AI Safety researchers (or any other workers in an important area) so that I don't have to make the personal sacrifice to work on these problems. I still put a lot of effort into being the best, but secretly wish there is someone better to do the job. Funny. Also, a nice excuse to celebrate rejection if you apply to an EA job.

I don't think it's just a "nice excuse", I think it makes sense to celebrate if you got rejected by an EA org. The work you wanted to do to help the world is being done better than you could have done it (assuming their application system works well enough). And you don't even have to lift a finger. That is not to say that I would predict myself to react in a very positive way immediately after rejection, but it's how I'd want myself to react.

Strongly agree, but I want to emphasize something. The word 'better' is doing a lot of work here.

I want to be replaced by my better future self, but not my future self who is great at rationalizing their decisions.

I want to be replaced by a better partner, but not by someone who is great at manipulating people into a relationship.

I want to be replaced by a better employee, but not by one who is great at getting the favor of the manager.

I want to be replaced by a machine which can do my job better, but not by an unaligned AGI.

I want to be replaced by better humans, but not by richer humans if they are lonely and depressed.

I want to be replaced by a simulation that feels like the best holiday ever, but not by a contract drafting em.

I want to be replaced, if and only if I'm being replaced by something that is, in a very precise sense, better. If the process that will replace me does not share my values, then I want to replace it with one that does.

Hi, Marius. 
If, while writing this post, you had wished that it would deeply influence even one reader – congrats. 
After listening to this post about three times through the Non-Linear Library, and reading it another couple times here, “I want to be replaced” has been on my mind since. 

After a couple months of deliberation, I am finally ready to provide my ten-cent commentary. It will be mostly around the “By a better partner” and “By a better employee” sections. 

First and foremost, I will explain why this has been so influential for me.
As a Buddhism enthusiast, this is on par with one of the most fundamental Buddhist principles – attachment as a leading source of suffering, and often a prerequisite for it.

When a woman whom I am dating breaks up with me, my suffering stems from several attachments: my attachment to this woman, my attachment to my wishful thinking that she loves me, my attachment to the future with her I have already simulated in my thoughts, and more. Obviously, suffering, such as in my example, stems not only from attachment but also from rejection, but there is no reason to dive into that here. In your post, you use the word “cling”, which is very much parallel to the word “attachment” in this context. This text helps me soften my attachment, helps me not cling so hard to my ideal-self beliefs. So thank you for writing it. You expand my perspective about suffering, specifically about its utilitarian benefits. This woman, who would sooner or later date someone else, could be happier with him than she was with me. The man whom she would date could potentially be happier than he is right now. I could be happy for both of them, and could also be happier myself later on with someone who has more affection for me. Plus, if the original woman had stayed with me but been unsatisfied in the relationship, her unhappiness would quickly trickle down to me. In your words, “I want to be replaced by a better partner!”

The problem is that this scenario is the ultimate best-case scenario, which is never guaranteed. In the long run everybody is happy, and that presents little dilemma or conflict. However, in both my and your relationship examples, we could end up with the following scenario: she finds someone else, they are both happy, yet the protagonist remains single. For me, this is the conflict, and I believe it should be emphasized in your post. Considering this, should the protagonist still declare “I want to be replaced!”, even though he could remain single? For me, this is where it gets interesting.

It seems that your resolution for this conflict is that despite the protagonist possibly remaining single, he still ought to declare “I want to be replaced!”. Leaning on utilitarianism, the net happiness of humanity is still expected to be higher, even taking into consideration a possible worst-case scenario for the protagonist: remaining single. Two (and potentially more) happy people are better than one.

However, I view this as an affective forecasting mistake – the attempt to predict our emotions, a game we are awfully flawed at playing.

Despite my Buddhist inspirations, for some things I am heavily attached to, I simply cannot bring myself to declare “I want to be replaced!”. One year ago today, after tons of hard work, I was accepted into a graduate program that is objectively difficult to get into due to a low acceptance rate. For this reason, it is extremely likely that there is one guy who did not get in but has greater potential than me to be better at the profession later. We only need one such guy for this conflict to be relevant. Should I have declared “I want to be replaced!” and offered my spot to this guy, had I the chance? I would say yes only if I could, in turn, replace someone else who has less potential for the profession than I do. This would result in both me and the guy better than me getting in, with the least competent candidate losing his spot (assuming the least competent candidate is not myself). But this, again, is the best-case scenario, with no conflict.

The conflict only comes alive if I do not get to replace the least competent candidate, and this is what my comment is all about. Should I declare “I want to be replaced”, so that the first guy replaces me? While I can argue on utilitarian grounds that humanity is expected to be better off if that one candidate who is better than me were accepted instead of myself, I would never declare “I want to be replaced!”. No way.

I admit this is an egocentric move, almost by definition. Yet it is still crystal clear to me that I would not want to be replaced, and I do not feel guilty about admitting this. My urge to take care of myself and my future self is too strong for me to give up my spot.

On the contrary, I would gladly declare “I want to be replaced!” for things I am less attached to. If I manage to buy a ticket to an oversubscribed concert, but there is a super-fan out there who was less fortunate, I would be happy, thrilled even, to sell him my ticket. But this is only possible because I am less attached.

In conclusion, “I want to be replaced” is a mentality that inspires me, and I would love to be more like this. However, I’m afraid this mentality can only be applied to things to which we have little to moderate attachment. Would love to hear your thoughts.

I think that while this is hard, the person I want to be would want to be replaced in both cases you describe.
a) Even if you stay single, you should want to be replaced because it would be better for all three people involved. Furthermore, you probably won't stay single forever and will find a new (potentially better-fitting) partner.
b) If you had very credible evidence that someone else who is much better than you was not hired, you should want to be replaced, IMO. But I guess it's very implausible that you could make this decision better than the university or employer, since you have way less information than they do. So this case is probably not very applicable in real life.

Why not also strive to be the better replacement?

I think this is a very valuable comment.
As someone who loves sports, I'd say the main reason we have such incredible talent is the players' desire to replace each other all the time. Every minute they compete, another player must sit on the bench. Their desire is so deep that we end up with extremely talented leagues, which makes them so fun to watch. An "I want to be replaced" mindset might not motivate them to wake up at 6:00 to hit the gym. But what is true for professional athletes is also true for us. We also "hit the gym", all the time: to outperform others in exams, in job interviews, in dating, etc. We also strive to replace. Maybe the middle ground is striving to replace, but being willing to let go and be replaced when someone is simply a better fit than us for one particular thing, like a certain job.

This post inspired me a lot. Thank you.
