You can be at peace even when you think the world is doomed. And while at peace, you can still work against that doom, even while being aware that nothing you do will make a difference. I believe there are states of mind like this that humans can inhabit.
Here I am not going to argue for imminent doom, or that nothing you do matters. Rather, I want to point out that even when you believe you are in the dire circumstance of imminent, unpreventable doom, it is possible to be at peace while working hard against the doom, even while believing that work to be futile. This is a possible state of mind for a human being.
And if it is possible to be at peace and work hard even in this dire circumstance, it should be possible in any less dire circumstance too.
There are many games about how long you can survive, e.g. Dawn of War 2: The Last Stand, Serious Sam's survival mode, and Project Zomboid. The very nature of these games is that you will soon die, and there is no saving. The difficulty increases more and more until, at some point, you get crushed.
But there are loads of people playing these games. Nothing about the impossibility of achieving victory seems to detract from the fun you can have. Would this really change if these games couldn't be restarted?
There is also the game You Only Live Once, which you can only play once.
Do people not play these games? Do people not try hard when playing them? Of course they do. To be fair, there is a big difference between AI doom and these games: in these games, you can make visible progress. The natural way to define success is to ask: How long did you survive? Did you survive longer than last time?
This is where death with dignity and Duncan's advice come from, as far as I can tell. It's about redefining success as making as much progress as possible toward a good outcome, instead of directly aiming for a good outcome. Aiming to survive forever in Dawn of War 2: The Last Stand would probably be frustrating; you would be setting out for a goal that you know is unachievable.
I think these strategies are valuable, though to me it seems they miss something very basic.
Maybe this is a fluke and I will feel different soon, but today my expectation of doom did not seem to influence me negatively. No negative qualia arose from some heuristic in my brain that "wants" to steer me away from executing a futile plan.
I didn't achieve this by pushing the doominess out of my mind, or by redefining success as getting as far as possible (getting as much dignity as possible). Instead, I was in a state of peace while contemplating the doom, with the relevant considerations laid out plainly in my mind. I think that to achieve this, you need to stop wanting the doominess to go away, and you need to stop grasping at straws of hope.
This might sound bleak, but the resulting first-person experience is the opposite: aversion and craving no longer arise. And giving up these negative emotions doesn't imply that you stop working on preventing the doom. Being in a state of frantic, continuous panic isn't actually that great for productivity anyway.
When I talk about giving up hope and giving up the craving for the world to be better, I'm talking about silencing the emotional components of your mind. I am not saying anything about changing your consequentialist, conscious reasoning. Mine is still targeted at making the biggest cumulative contribution I can toward preventing the doom. There is no contradiction here. In my model, the consequentialist-reasoning component of your mind is separate from the heuristic algorithms that compute feelings, which then arise in your consciousness with a positive or negative valence and steer you in particular ways.
Well, I don't think I have done a good job (or any job whatsoever) of conveying how I managed to do this. I think the fact that I can do it is related to meditation. For example, in the Waking Up app, Sam Harris sometimes gives the explicit instruction to "give up the struggle", and I think I intuitively managed to apply this learned mental motion here. So my best (and very lazy) recommendation right now is to learn it from there as well.
Though it also seems worth trying directly. I expect at least some people might be able to do this given only the following instruction: "Just give up the struggle."
Dirt Wins
All of this applies to the situation where you think that nothing you do actually matters. I also want to tell a little story about a time I was wrong about the futility of my own actions.
Once upon a time, I played a round of Zero-K. I think it was my first ever match against another player. In the beginning we seemed evenly matched; maybe I had a slight advantage. But after some time it became very one-sided. All my troops were decimated and I was pushed back into my base. I thought that I would surely lose, but I was not giving up in the face of that. I wanted to fight it out until the end. I definitely felt a pull toward just calling it GG, but I didn't budge. I still tried to do my best. I had no more resources; all I could build was bags of dirt. But still, I didn't give up. I didn't continue because I thought there was a good chance I could make a comeback. It was simply raw, unfelt, maybe illogical determination not to give up.
After some time defending my base using mainly bags of dirt, I managed to push the enemy back slightly. But it didn't take long before they reorganized an army and came back, and again I thought I would surely lose. Still, I didn't give up.
And then something unforeseen happened. My enemy got lazy, or careless. Or perhaps they simply got bored by my persistence, by the fact that I was stretching out the game like old chewing gum? In any case, I soon accumulated a critical mass of dirt bags and started throwing them at the enemy, slowly but surely pushing them back. That push never ground to a halt for long. Soon I was in the enemy's base, and it was only a matter of time until the dirt prevailed.
Johannes - these are valid concerns, I think.
One issue is: what's the optimal degree of moral anger/outrage to express about a given issue that one's morally passionate about? It probably depends a lot on the audience. In Rationalist circles, any degree of anger may be seen as epistemically disqualifying, socially embarrassing, ethically dubious, etc. But among normal folks, if one's arguing for an ethical position that one expects to be associated with a moderate amount of moral outrage (if one really believed what one was saying), then expressing that moderate level of outrage might be most persuasive. For example, a lot of political activism includes a level of expressed moral outrage that would look really silly and irrational to Rationalists, but that looks highly appropriate, persuasive, and legitimate to many onlookers. (In protest marches, for instance, people aren't typically acting as cool-headed as they would be at a Bay Area Rationalist meet-up -- and it would look very strange if they were.)
Your second issue is even trickier: is it OK to induce strong moral outrage about an issue in people who don't really understand the issue very deeply at a rational, evidence-based level? Well, that's arguably about 98% of politics, activism, persuasion, and public culture. If EA as a movement is going to position itself in an ethical leadership role on certain issues (such as AI risk), then we have to be willing to be leaders. This includes making decisions based on reasons, evidence, values, and long-term thinking that most followers can't understand, don't understand, and may never understand.
I don't expect that the majority of humanity will ever understand AI well enough (including deep learning, orthogonality, inner alignment, etc.) to make well-informed decisions about AI X-risk. Yet the majority of humanity will be affected by AI, and by any X-risks it imposes. So, either EA people make our own best judgments about AI risk based on our assessments, and then try to persuade people of our conclusions (even if they don't understand our reasoning), or... what? We try cognitive enhancement of humanity until they can understand the issues as well as we do? We hope everybody gets a master's degree in machine learning? I don't think we have the time.
I think we need to get comfortable with being ethical leaders on some of these issues -- and that includes using methods of influence, persuasion, and outreach that might look very different from the kinds of persuasion that we use with each other.