I wrote this several months ago for LessWrong, but it seemed useful to crosspost it here.
It's a writeup of several informal conversations I had with Andrew Critch (of the Berkeley Existential Risk Initiative) about which considerations matter for taking AI risk seriously, based on his understanding of the AI landscape. (The landscape has changed slightly in the past year, but I think most of the concerns are still relevant.)
Raymond, do you or Andrew Critch have any concrete possibilities in mind for what "orienting one's life"/"understanding the situation" might look like from a non-altruistic perspective? I'm interested in hearing concrete ideas for what one might do; the only suggestions I can recall seeing so far were in the 80,000 Hours podcast episode with Paul Christiano: to save money and invest in certain companies. Is this the sort of thing you had in mind?
The way I am imagining it, a person thinking about this from a non-altruistic perspective would then think about the problem for several years, narrow this list down (or add new things to it), and act on some subset of it (e.g. maybe they would think about which companies to invest in and decide how much money to save, but not implement some other ideas). Is this an accurate understanding of your view?
(Off-the-cuff thoughts, which are very low confidence. Not attributed to Critch at all.)
So, this depends quite a bit on how you think the world is shaped (a complex enough question that Critch recommended just thinking about it for weeks or months). But the three classes of answers I can think of are:
a) In many possible worlds, the selfish and altruistic answers are just the same. The best way to survive a fast or even moderate takeoff is to ensure a positive singularity, and just pouring your efforts and money into maximizing the c...