
You can be at peace even when thinking the world is doomed. And while at peace, you can still work against that doom, even while being aware that nothing you do will make a difference. I believe there are states of mind like this that humans can inhabit.

Here I am not going to argue for imminent doom, or that nothing you do matters. Rather, I want to point out that even when you believe in the dire circumstance of imminent, unpreventable doom, it is possible to be at peace, even while working hard against the doom, and even while believing that work to be futile. This is a possible state of mind for a human being.

And if it is possible to be at peace and work hard even in this dire circumstance, it should be possible in any less dire circumstance too.


There are many games about how long you can survive, e.g. Dawn of War 2: The Last Stand, Serious Sam's survival mode, and Project Zomboid. The very nature of these games is that you will soon die, and there is no saving. The difficulty keeps increasing until at some point you get crushed.

But there are loads of people playing these games. Nothing about the impossibility of achieving victory seems to detract from the fun you can have. Would this really change if these games couldn't be restarted?

There is also the game You Only Live Once that you can only play once.

Do people not play these games? Do people not try hard when playing them? Of course they do. To be fair, there is a big difference between AI doom and these games: in these games, you can make visible progress. The natural way to define success is to ask: How long did you survive? Did you survive longer than last time?

This is where death with dignity and Duncan's advice come from, as far as I can tell. It's about redefining success as making as much progress as possible toward a good outcome, instead of directly aiming for the good outcome. Aiming to survive forever in Dawn of War 2: The Last Stand would probably be frustrating; you would be setting out for a goal you know is unachievable.

I think these strategies are valuable, though to me it seems they also miss something very basic.

Maybe this is a fluke and I will feel differently soon, but today I felt like my expectation of doom did not influence me negatively. No negative qualia arose, generated by some heuristic in my brain that "wants" to steer me away from executing a futile plan.

I didn't achieve this by pushing the doominess out of my mind, or by redefining success as getting as far as possible (getting as much dignity as possible). Instead, I was in a state of peace while contemplating the doom, with the relevant considerations plainly laid out in my mind. I think to achieve this you need to stop wanting the doominess to go away. And you need to stop grasping for straws of hope.

This might sound bleak, but the resulting first-person experience is the opposite: no more aversion and craving arise. And giving up these negative emotions doesn't need to imply that you stop working on preventing the doom. Being in a state of frantic, continuous panic isn't actually that great for productivity anyway.

When I talk about giving up hope and giving up the craving for the world to be better, I'm talking about silencing the emotional components of your mind. I am not saying anything about changing your consequentialist, conscious reasoning. Mine is still targeted at making the biggest cumulative contribution I can toward preventing the doom. There is no contradiction here. In my model, the consequentialist-reasoning component of your mind is separate from the heuristic algorithms that compute feelings; those feelings arise in consciousness with a positive or negative valence and steer you in particular ways.

Well, I don't really think I have done a good job (or any job whatsoever) of conveying how I managed to do this. I think the fact that I can do this is related to meditation. For example, in the Waking Up app, Sam Harris sometimes gives explicit instructions to "give up the struggle", and I think I just intuitively managed to successfully apply this learned mental motion here. So my best (and very lazy) recommendation right now is to also learn it from there.

Though it seems worth trying directly first. I expect at least some people might be able to do this given only the following instruction: "Just give up the struggle."

Dirt Wins

All of this applies to the situation where you think that nothing you do actually matters. I want to tell a little story about how I was wrong about the futility of my own actions in the past.

Once upon a time, I played a round of Zero-K. I think it was my first ever match against another player. In the beginning, it seemed like we were evenly matched; maybe I had a slight advantage. But after some time, it became very one-sided. All my troops got decimated and I was pushed back into my base. I thought that I would surely lose. But I was not giving up in the face of that. I wanted to fight it out until the end. I definitely felt a pull toward just calling it GG, but I didn't budge. I still tried to do my best. I had no more resources; all I could build was bags of dirt. But still, I didn't give up. I didn't continue because I thought there was a good chance I could make a comeback. It was simply raw, unfelt, maybe illogical determination not to give up.

After some time defending my base using mainly bags of dirt, I managed to push the enemy back slightly. However, it didn't take long before they reorganized an army and came back, and again I thought I would surely lose. But still, I didn't give up.

And then something unforeseen happened. My enemy got lazy, or careless. Or perhaps they simply got bored by my persistence, by the fact that I was stretching out the game like an old chewing gum? In any case, I soon managed to accumulate a critical mass of dirt bags. I started throwing them at the enemy, slowly but surely pushing them back. That push never ground to a halt for long. Soon I was in the enemy's base, and it was only a matter of time until the dirt prevailed.

Comments

Johannes - thanks for sharing a useful perspective. I think in many cases, you're right that a kind of cool, resigned, mindful, courage in the face of likely doom can be mentally healthy for individuals working on X risk issues. Like the chill of a samurai warrior who tries to face every battle as if his body was already dead -- the principle of hagakure. If our goal is to maximize the amount of X risk reduction research we can do as individuals, it can make sense to find some equanimity while living under the shadow of personal death and species-level extinction.

However, in many contexts, I think that a righteous fury at people who witlessly impose X risks on the rest of us can also be psychologically healthy. As a parent, I'm motivated to protect my kids, by almost any means necessary, against X risks. As a citizen, I feel moral outrage against politicians who ignore X risks. As a researcher, righteous fury against X risks makes me feel more motivated to band together with other, equally infuriated, like-minded researchers, rather than suffering doomy hopelessness alone.

Also, insofar as moral stigma against dangerous technologies (e.g. AI, bioweapons, nukes) might be a powerful way to fight those X risks, righteous anti-doom fury and moral outrage might be more effective than chill resignation. Moral outrage tends to spark moral stigma, which might be exactly what we need to slow the development of dangerous technologies. 

Of course, moral outrage tends to erode epistemic integrity, motivates confirmation bias, reinforces tribalism, can provoke violence (e.g. Butlerian jihads), etc. So there are downsides, but in some contexts, the ethical leadership power and social coordination benefits of moral outrage might outweigh those.

Hagakure is, I think, a useful concept and technique to know. Thank you for telling me about it. I think it is different from what I was describing in this article, but it seems like a technique that you could layer on top. I haven't really done it a lot yet, though I guess there is a good chance that it will work.

I can definitely see that being outraged can be useful on the individual and the societal level. However, I think the major challenge is to steer the outrage correctly. As you say, epistemic integrity can easily suffer. I encourage everybody who draws motivation from outrage to still think carefully through the reasoning for why they are outraged. These should be reasons such that, if you told them to a neutral, curious observer, the reasons alone would be enough to convince them (without the communication being optimized to persuade).

Johannes - I agree that it can be important to try to maintain epistemic integrity even if one feels deep moral outrage about something.

However, there are many circumstances in which people won't take empirically & logically valid arguments about important topics seriously if they're not expressed with an authentic degree of outrage. This is less often the case within EA culture. But it's frequently the case in public discourse. 

It seems that Eliezer Yudkowsky, for example, has often (for over 20 years) tried to express his concerns about AI X-risk fairly dispassionately. But he's often encountered people saying 'If you really took your own arguments seriously, you'd express a lot more moral outrage, and willingness to use traditional human channels for expressing and implementing outrage, such as calls for moral stigmatization of AI, outlawing AI, ostracizing practitioners of AI, etc.' (But then, of course, when he does actually argue that nation-states should be willing to enforce a hypothetical global moratorium on AI using the standard military intervention methods (e.g. drone strikes) that are routinely used to enforce international agreements in every other domain, people act all outraged, as if he's preaching Butlerian Jihad. Sometimes you just can't win....)

Anyway, if normal folks see a disconnect between (1) valid arguments that a certain thing X is really really bad and we should reduce it, and (2) a conspicuous lack of passionate moral outrage about X on the part of the arguer, then they will often infer that the arguer doesn't really believe their own argument, i.e. they're treating it as a purely speculative thought experiment, or they're arguing in bad faith, or they're trolling us, etc.

This is a very difficult issue to resolve, but I expect it to be increasingly important as EAs discuss practical ways to slow down AI capability development relative to AI alignment efforts.

I'm not sure if what you say is correct. Maybe. I think there is one difficulty that needs to be taken into account, which is that I think it is hard to elicit the appropriate reaction. When I see people arguing angrily, I am normally biased against taking what they say to be correct, so I need to make an effort to take them more seriously than I otherwise would. So it is unclear to me what percentage of people moral outrage would even affect in the way we want it to affect them.

There's also another issue. Maybe when you are emotionally outraged, it will induce moral outrage in other people. Would it be a good thing to create lots of people who don't really understand the underlying arguments but are really outraged and vocal about the position that AGI is an existential risk? I expect most of these people will not be very good at arguing correctly for AGI being an existential risk. They will make the position look bad and will make other people less likely to take it seriously in the future. Or at least this is one of many hypothetical risks I see.

Johannes - these are valid concerns, I think. 

One issue is: what's the optimal degree of moral anger/outrage to express about a given issue that one's morally passionate about? It probably depends a lot on the audience. Among Rationalist circles, any degree of anger may be seen as epistemically disqualifying, socially embarrassing, ethically dubious, etc. But among normal folks, if one's arguing for an ethical position that they expect would be associated with a moderate amount of moral outrage (if one really believed what one was saying), then expressing that moderate level of outrage might be most persuasive. For example, a lot of political activism includes a level of expressed moral outrage that would look really silly and irrational to Rationalists, but that looks highly appropriate, persuasive, and legitimate to many onlookers. (For example, in protest marches, people aren't typically acting as cool-headed as they would be at a Bay Area Rationalist meet-up -- and it would look very strange if they were.)

Your second issue is even trickier: is it OK to induce strong moral outrage about an issue in people who don't really understand the issue very deeply at a rational, evidence-based level? Well, that's arguably about 98% of politics and activism and persuasion and public culture. If EA as a movement is going to position itself in an ethical leadership role on certain issues (such as AI risk), then we have to be willing to be leaders. This includes making decisions based on reasons and evidence and values and long-term thinking that most followers can't understand, don't understand, and may never understand.

I don't expect that the majority of humanity will ever be able to understand AI well enough (including deep learning, orthogonality, inner alignment, etc.) to make well-informed decisions about AI X risk. Yet the majority of humanity will be affected by AI, and by any X risks it imposes. So, either we EAs make our own best judgments about AI risk based on our assessments, and then try to persuade people of our conclusions (even if they don't understand our reasoning), or.... what? We try to do cognitive enhancement of humanity until they can understand the issues as well as we do? We hope everybody gets a master's degree in machine learning? I don't think we have the time.

I think we need to get comfortable with being ethical leaders on some of these issues -- and that includes using methods of influence, persuasion, and outreach that might look very different from the kinds of persuasion that we use with each other.

[anonymous]

I rather liked this post (and I'll put it on both the EAF and LW versions):

https://www.lesswrong.com/posts/PQtEqmyqHWDa2vf5H/a-quick-guide-to-confronting-doom

In particular, the comment by Jakob Kraus reminded me that many people have faced imminent doom (not of the human species, but certainly quite terrible experiences).
