For people working on AI safety and other existential risks, what emotionally motivates you to work? What gets you up in the morning?
Here are some possibilities I've thought of:
1. Curiosity-driven, with curiosity directed towards endorsed sub-areas (e.g. likes coding, math, problem-solving)
2. Deeply internalized consequences (harnessing fear of death, or a strong desire to bring about positive worlds)
3. Social motivations (wanting status or success, likes working as part of a collective, other)
Follow-up: Suppose I'm afraid of internalizing responsibility for working on large, important problems. Assuming you've worked through this yourself, what narratives do you hold or what strategies do you use?
I've seen this argument elsewhere, and I still don't find it convincing. "All" seems hyperbolic. Much longtermist work aimed at improving the quality of posthumans' lives does become irrelevant if there won't be any posthumans. But animal welfare, poverty reduction, mental health, and probably other causes I'm forgetting will still have made an important (if admittedly smaller-scale) difference by relieving their beneficiaries' suffering.