Great. Another crucial consideration I missed. I was convinced that working on reducing existential risk to humanity should be a global priority.

Protecting our potential and ensuring that we can create a truly just future seems so wonderful.

Well, recently I was introduced to the idea that this might actually not be the case. 

The argument is rooted in suffering-focused ethics and the concept of complex cluelessness. If we step back and think critically, what predicts suffering more than the mere existence of sentient beings—humans in particular? Our history is littered with pain and exploitation: factory farming, systemic injustices, and wars, to name just a few examples. Even with our best intentions, humanity has perpetuated vast amounts of suffering.

So here’s the kicker: what if reducing existential risks isn’t inherently good? What if keeping humanity alive and flourishing actually risks spreading suffering further and faster—through advanced technologies, colonization of space, or systems we can’t yet foresee? And what if our very efforts to safeguard the future have unintended consequences that exacerbate suffering in ways we can't predict?

I was also struck by the critique of the “time of perils” assumption. The idea that now is a uniquely critical juncture in history, where we can reduce existential risks significantly and set humanity on a stable trajectory, sounds compelling. But the evidence supporting this claim is shaky at best. Why should we believe that reducing risks now will have lasting, positive effects over millennia—or even that we can reduce these risks at all, given the vast uncertainties?

This isn’t to say existential risk reduction is definitively bad—just that our confidence in it being good might be misplaced. A truly suffering-focused view might lean toward seeing existential risk reduction as neutral at best, and possibly harmful at worst.

It’s humbling, honestly. And frustrating. Because I want to believe that by focusing on existential risks, we’re steering humanity toward a better future. But the more I dig, the more I realize how little we truly understand about the long-term consequences of our actions.

So, what now? I’m not sure. 

I am sick of missing crucial considerations. All I want is to make a positive impact. But no. Radical uncertainty it is.

I know that thinking this through fully could cost me hundreds of hours. It is going to take a lot of energy if I pursue it.

Right now I am just considering pursuing earning to give instead and donating a large chunk of my money across different worldviews and cause areas.

Would love to get your thoughts.


Hi there :) I very much sense that a conversation with me last weekend at EAGxVirtual is causally connected to this post, so I thought I'd share some quick thoughts!

First, I apologize if our conversation led you to feel more uncertain about your career in a way that negatively affected your well-being. I know how subjectively "annoying" it can be to question your priorities.

Then, I think your post raises three different potential problems with reducing x-risks (all three of which I know we've talked about) that are worth disentangling:

1. You mention suffering-focused ethics and reasons to believe such views advise against x-risk reduction.

2. You also mention the problem of cluelessness, which I think is worth dissociating. I think motivations for cluelessness vis-a-vis the sign of x-risk reduction are very much orthogonal to suffering-focused ethics. I don't think someone who rejects suffering-focused ethics should be less clueless. In fact, one can argue that they should be more agnostic about this while those endorsing suffering-focused ethics might have good reasons to at least weakly believe x-risk reduction hurts their values, for the "more beings -> more suffering" reason you mention. (I'm however quite uncertain about this and sympathetic to the idea that those endorsing suffering-focused ethics should maybe be just as clueless.)

3. Finally, objections to the 'time of perils' hypothesis can also cast doubt on the value of x-risk reduction (Thorstad 2023), but for very different reasons. It's purely a question of which is more "impactable": x-risks (and maybe other longtermist causes) or shorter-term causes, rather than a question of whether x-risk reduction does more good than harm in the first place (as with 1 and 2).

Discussions regarding the questions raised by these three points seem healthy, indeed.

Hey Jim,

Thanks for chiming in, and you're spot on: our chat at EAGxVirtual definitely got the gears turning! No worries at all about the existential crisis; I see it as part of the journey (and I actively requested it) :) I actually think these moments of doubt are important for making progress on my mission in EA (as JWS similarly laid out in his post). I usually don't do this, but the post was a good way for me to vent and helped me process some of the ideas + get feedback.

You've broken down my jumbled thoughts really well. It is helpful to see the three points laid out like that. They each deserve their own space, and I appreciate you giving them that.

I think you're right that cluelessness is kind of its own beast, regardless of where one stands on suffering-focused ethics. 

Anyway, thanks for the thoughtful response and for helping me untangle my thoughts.

Thanks for sharing your thoughts. I'll respond in turn to what I think are the two main parts of it, since as you said this post seems to be a combination of suffering-focused ethics and complex cluelessness.

On Suffering-focused Ethics: To be honest, I've never felt the intuitive pull of suffering-focused theories, especially since my read of your paragraphs is that they tend towards a lexical view where the amount of suffering is the only thing that matters for moral consideration.[1] 

Such a moral view doesn't really make sense to me, to be honest, so I'm not particularly concerned by it, though of course everyone has different moral intuitions so YMMV.[2] Even if you're convinced of SFE though, the question is how best to reduce suffering which hits into the clueless considerations you point out.

On complex cluelessness: On this side, I think you're right about a lot of things, but that's a good thing not a bad one!

  • I think you're right about the 'time of perils' assumption, and more broadly you should increase your scepticism of any intervention which claims to have "lasting, positive effects over millennia", since we can't get feedback on the millennia-long impact of our interventions.
  • You are right that radical uncertainty is humbling, and it can be frustrating, but it is also the default state everyone is in, and there's no use beating yourself up over it.
  • You can only decide how to steer humanity toward a better future with the knowledge and tools that you have now. It could be something very small, and doesn't have to involve you spending hundreds of hours trying to solve the problems of cluelessness.

I'd argue that reckoning with the radical uncertainty should point towards moral humility and pluralism, but I would say that since that's the perspective in my wheelhouse! I also hinted at such considerations in my last post about a Gradient-Descent approach to doing good, which might be a more cluelessness-friendly attitude to take.

  1. ^

    You seem to be asking, for example, "will lowering existential risk increase the expected amount of future suffering?" rather than "will lowering existential risk increase the amount of total preferences satisfied/non-frustrated?"

  2. ^

    To clarify, this sentence specifically referred to lexical suffering views, not to less strongly formulated forms of SFE.

Thank you so much for posting this. This is something I worry about a lot but I’m terrible at explaining it. The way you explain it makes much more sense. Thank you. ❤️

It actually goes even more giga-brain than this, since aliens are in the picture, and maybe life could even re-evolve on our planet to interstellar intelligence. You might be interested in talking to @Arepo; he's a crucial considerer. I'd especially recommend his post "A proposed hierarchy of longtermist concepts".


Shameless self-plugs that might also lead you to some related readings (I'm narcissistic enough to somehow remember almost all my comments on the subject):


https://forum.effectivealtruism.org/posts/zDJpYMtewowKXkHyG/alien-counterfactuals
https://forum.effectivealtruism.org/posts/zLi3MbMCTtCv9ttyz/formalizing-extinction-risk-reduction-vs-longtermism
https://forum.effectivealtruism.org/posts/zuQeTaqrjveSiSMYo/?commentId=7s2vrDuxonBqoGrou
https://forum.effectivealtruism.org/posts/Pnhjveit55DoqBSAF/?commentId=wTkFestNWNorB5mG4
https://forum.effectivealtruism.org/posts/YnBwoNNqe6knBJH8p/?commentId=HPsgdWEbdEZH3WN6j
https://forum.effectivealtruism.org/posts/WebLP36BYDbMAKoa5/?commentId=cJdqyAAzwrL74x2mG

Either way, don't be down on yourself. I know exactly how you feel. There is way too much stuff to know. The fact that you are writing this and reflecting means you are one of the best humans alive right now, regardless of whether x-risk is important or not. Keep up the good work.

First of all, thank you for engaging with this post! Your kind words and thoughtful pushback mean a lot to me. Over the last couple of weeks, I have been taking a break from many things to help me regain motivation and courage (hence, I am only starting to reply now). Fortunately, I am feeling much better again and ready to tackle the problem I am facing. Thank you once again, and I hope you have a great day!

If you think the expected value is negative regardless of anything you could do or change, you should of course become the existential risk[1].

But actually estimating whether humanity will be net-negative requires you to know what you value, which is something you're probably fuzzy about. We so far lack the technology to extract terminal goals from people, which you would want to have before taking any irrevocable actions.
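As a toy illustration of that sign problem (the numbers below are purely illustrative assumptions, not estimates of anything real):

$$\mathbb{E}[V_{\text{reduce x-risk}}] = p \cdot V_{\text{good future}} + (1 - p) \cdot V_{\text{bad future}}$$

With, say, $p = 0.6$, $V_{\text{good future}} = +100$ and $V_{\text{bad future}} = -100$ (arbitrary value units), the intervention comes out at $+20$; drop $p$ to $0.4$ and the very same intervention comes out at $-20$. Cluelessness is precisely the situation where we can't pin down $p$ or the value terms, so the sign flips on assumptions we can't check.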

  1. ^

    Future-you might resent past-you for publicly doubting the merits of humanity, since I reckon you'd want to be a secret existential risk.

I would like to humbly suggest that people not engage in active plots to destroy humanity based on their personal back-of-the-envelope moral calculations.

I think that the other 8 billion of us might want a say, and I'd guess we'd not be particularly happy if we got collectively eviscerated because some random person made a math error. 

So we get to use cold, hard rationality to tell most people that the stuff they are doing is relatively worthless compared to x-risk reduction, but when that same rationality argues that x-risk reduction is actually incredibly high-variance and may very well be harming trillions of people in the future, we get to be humanists?
