
I sat in a restaurant in New York, for example, and I looked out at the buildings and I began to think about how much the radius of the Hiroshima bomb damage was. How far from here was 34th Street?... All those buildings, all smashed. And I would go along and I would see people building a bridge, or they'd be making a new road, and I thought, they're crazy, they just don't understand, they don't understand. Why are they making new things? It's so useless.

Richard Feynman

 

Intro

I am a psychotherapist who helps people working on AI safety. In my post on non-obvious mental health issues among AI safety community members, I wrote that some people believe AGI will soon cause either doom or utopia, which makes every action with long-term goals feel useless. So there seems to be no reason to do things like making long-term investments or maintaining good health.

 

Meaninglessness causes depression

Meet Alex. He is an ML researcher at a startup developing anti-aging drugs. Recently Alex got interested in AI safety and realized that we are rapidly approaching AGI. He started thinking, "AGI will either destroy humanity, or it will develop anti-aging drugs far better than we can. In both cases my work is useless." He lost any motivation to work, and after some thought he decided to quit. Fortunately, he had enough investments to maintain his way of life without a salary.

The more Alex thought about AGI, the deeper he sank into existential thoughts about the meaninglessness of his actions.

Before that, he ran regularly. He did it because it felt good, and also to stay healthy. He started thinking more and more about the meaninglessness of maintaining long-term health. As he lost part of his motivation, he had to force himself to run, and he could easily find an excuse to stay at home and watch Netflix instead. Eventually he stopped running completely. While running, Alex also had a habit of listening to educational podcasts; when he stopped running, he stopped listening to them too. At one point, while flossing his teeth, he thought, "Does it even make sense to floss my teeth? Do I need to care about what happens to my teeth in 20 years?"

Alex's life slowly became less and less interesting. He couldn't explain to himself why he should bother doing the things that had previously fulfilled his life, which made him more and more depressed.

 

The universe is meaningless

To be fair, life ultimately has no objective meaning, even without taking AGI into account. People are just products of random mutations and natural selection, which optimize for the propagation of genes through generations, and everything we consider meaningful, like friendship or helping others, is just a proxy goal for propagating genes.

The problem of meaninglessness is not new. 
 

Leo Tolstoy, for example, struggled with this problem so much that it made him suicidal: 

What will be the outcome of what I do today? Of what I shall do tomorrow? What will be the outcome of all my life? Why should I live? Why should I do anything? Is there in life any purpose which the inevitable death which awaits me does not undo and destroy?

These questions are the simplest in the world. They are in the soul of every human being. Without an answer to them, it is impossible for life to go on.

I could give no reasonable meaning to any actions of my life.  And I was surprised that I had not understood this from the very beginning.

Behold me, hiding the rope in order not to hang myself; behold me no longer going shooting, lest I should yield to the too easy temptation of putting an end to myself with my gun.


The good news is that many smart people have come up with decent ideas on how to deal with this existential meaninglessness. The rest of the post is about finding meaning in a meaningless world.

 

Made-up meaning works just fine

Imagine 22 people with a ball on a grass field, and no one has told them what they should do. These people would probably just sit around doing nothing, waiting for it all to end.

Now imagine that someone gives these people instructions to play football and win the match. Suddenly they focus on the result, experience emotional drama, and form bonds with their teammates.

The rules of football are arbitrary. There is no law of nature stating that you can only kick the ball with your feet and not your hands, or that you have to put the ball into a net. Someone just came up with these rules, and people have a good time following them.

Let's describe this situation with fancy words.

 

Nihilism

Nihilism is a philosophy that states there is no objective meaning in life, and there is nothing you can do about it. This might be technically true, but it is a direct path to misery. The guys with a ball and no rules had a bad time, and Alex's life without meaning started falling apart.

 

Existentialism

Existentialism offers a solution to the problem of meaninglessness. Its core idea is that even if there is no objective meaning, made-up meaning works just fine and makes life better, just like people playing football by arbitrary rules have a good time.

The good thing is that our brains are hardwired to create meaning. We also know which things our brains are prone to consider meaningful, so with some effort, people can regain a sense of meaningfulness.

Let's dive deeper into this. 

 

People with terminal illnesses sometimes have surprisingly meaningful lives


At one point, I provided psychological support to people with terminal cancer. They knew they had only pain and death ahead, and that their loved ones suffered too.

Counterintuitively, some of them found a lot of meaning in their situation. As they and their loved ones suffered, it became obvious that reducing this suffering was important. This is a straightforward source of meaning.

  • My clients knew that their loved ones would probably be emotionally, and sometimes financially, devastated after their death. So they found meaning in helping their family members have a good life, and in making sure they would be remembered with a smile.
  • Because people with cancer suffer, they become aware of the suffering of their peers, so they find a lot of meaning in helping others who struggle with similar problems. Cancer survivors often volunteer to help people with cancer live through it, and find this deeply meaningful.
  • As a therapist, I personally experience a stronger sense of meaning when helping people who have a short and painful life ahead. I feel that every moment they don't suffer is exceptionally valuable.

 

So, how do you find meaning in a world where AGI might make everything else meaningless?

Let's return to our hero Alex, who believes that his life has become meaningless in the face of AGI. Let's look at a couple of examples of how he can regain a sense of meaning in his life.

 

Meaning in emotional connections

Alex has a brother, but after a serious conflict they haven't spoken for several years. They were good friends as kids: they grew up together, shared a lot of experiences, and know each other like nobody else. But after their mother died, they had an ugly fight over her inheritance. Alex realizes that he deeply regrets this conflict and decides to reconnect with his brother.

It turns out that his brother also regrets the conflict and is happy to finally meet Alex. Now they are glad to have their emotional bond back, and Alex finds a lot of meaning in investing his time and effort into this relationship.

 

Meaning in purposeful work

Alex has short timelines and believes humanity doesn't have much time left, but he realizes that, regardless, there are people who are suffering right now.
 
Some people are homeless. Some have illnesses. Some are lonely. Even if AGI is near, these people still need help now.

Alex decides to start volunteering as a social worker, helping homeless people get a job, find a place to live, and deal with their health problems. He sees that his work helps people live better lives, and every time he thinks of this work, he feels he is doing something good and meaningful.

 

Epilogue

If you struggle with a sense of meaninglessness due to AGI and believe you might benefit from professional help, I may be able to help as a therapist or suggest other places where you can get professional help.

Check out my profile description to learn more about these options.

Comments (6)



Thanks

I wrote to you a couple of weeks ago about scalable mental health, and then I went silent.

I am sorry about that. I'm kinda in between projects right now and I'm waiting for things to become more certain in order to have meaningful talks about them.

He did it because it felt good, and also to stay healthy. He started thinking more and more about the meaninglessness of maintaining long-term health.

I think it's also helpful to point out that we should be good Bayesians and not believe anything 100%. It seems plausible to me that in 20 years AI may not have changed everything, but maybe we will be able to reverse aging (or maybe AI will change everything and we can upload our brains). With some chance of an indefinite lifespan, I think putting some effort into health over the next 20 years, even if one is relatively young, could have a big expected value.

I appreciate the post, though I think the "The universe is meaningless" section wasn't so convincing. The universe is meaningless because we're the product of natural selection? I would want a better argument than that.

Thanks for writing this! When I read the title I first thought the article would be about arguing that other cause areas are also important despite some people acting as if AI makes other causes unimportant. I'm glad I clicked on it anyway!

Glad that the article was valuable for you.
