
For people working on AI safety and other existential risks, what emotionally motivates you to work? What gets you up in the morning?

Here are some possibilities I've thought of: 
1. Curiosity-driven, with curiosity directed towards endorsed sub-areas (e.g. likes coding, math, problem-solving)
2. Deeply internalized consequences (harnessing fear of death, or deeply wanting positive worlds) 
3. Social motivations (wanting status or success, likes working as part of a collective, other)

Follow-up: Say I'm afraid of internalizing responsibility for working on important, large problems. Assuming you've solved this, what kind of narratives do you have or what strategies do you use?

Personally:

  • 5% internalized consequences
  • 45% intellectual curiosity
  • 50% status

I'm sort of joking. Really, I think it's that "motivation" is at least a couple things. In the grand scheme of things, I tell myself "this research is important". Then day to day, I think "I've decided to do this research, so now I should get to work". Then every once in a while, I become very unmotivated, and I think about what's actually at stake here, and also about the fact that some Very Important People I Respect tell me this is important.

Good question. I think I'm maybe a quarter of the way to being internally/emotionally driven to do what I can to prevent the worst possible AI failures. But regarding this:

Say I'm afraid of internalizing responsibility for working on important, large problems

I always thought it would be a great thing if my emotional drives lined up more with the goals I've deliberately reasoned are likely the most important. It would feel more coherent, it would give me more drive and focus on what matters, and it would down-regulate things like social motivations that I don't fully endorse. I suppose one might worry that it would be overwhelming, but that hasn't been a thing for me so far. My spontaneous impression is that humans mostly deal okay with great responsibility.

(btw, I really enjoyed reading your PhD retrospective, nice to see your name pop up here! I'm doing a PhD in CogSci and could relate to a lot of it)

I'm mostly concerned with S-risks, i.e. risks of astronomical suffering. I view it as a more rational form of Pascal's Wager, and as a form of extreme longtermist self-interest. Since there is still a >0% chance of some form of afterlife or a bad form of quantum immortality existing, raising awareness of S-risks and donating to S-risk reduction organizations like the Center on Long-Term Risk and the Center for Reducing Suffering likely reduces my risk of going to "hell". See The Dilemma of Worse Than Death Scenarios.

The dilemma is that it does not seem possible to continue living as normal while taking the prevention of worse-than-death scenarios seriously. If it is agreed that anything should be done to prevent them, then Pascal's Mugging seems inevitable. Suicide speaks for itself, and even the other two options, if taken seriously, would change your life. What I mean is that it would seem rational to devote your life completely to these causes. It would be rational to do anything to obtain money to donate to AI safety, for example, and you would be obliged to sleep exactly nine hours a day to improve your mental condition, increasing the probability that you will find a way to prevent these scenarios. I would be interested in hearing your thoughts on this dilemma and whether you think there are better ways of reducing the probability.

I don't directly work on x-risks at the moment, apart from spending some of my time sharing resources, information, and career advice relevant to x-risks with people in EA Philippines.

But if I did, what would motivate me to work is the importance of making sure the world doesn't go extinct or become dystopian. Also, all the work done by other EAs in other causes would be for naught if we end up becoming extinct or lock in a bad future this century.

all the work done by other EAs in other causes would be for naught if we end up becoming extinct

I've seen this argument elsewhere, and still don't find it convincing. "All" seems hyperbolic. Much longtermist work to improve the quality of posthumans' lives does become irrelevant if there won't be any posthumans. But animal welfare, poverty reduction, mental health, and probably some other causes I'm forgetting will still have made an important (if admittedly smaller-scale) difference by relieving their beneficiaries' suffering.
