I think (as do others) that advanced AI could have very large undesired impacts, up to and including human extinction. I also think, with higher confidence, that advanced AI is likely to have some large impacts on the way people live, without claiming to know exactly what those impacts will be. AI X-risk seems to be regarded as one of the most important potential impacts for AI safety researchers to focus on, particularly by people who think that promoting a long and prosperous future for humans and other living things is a top priority. Considering the amount of work on AI X-risk overall (not just within EA), should AI X-risk receive much more attention? And what other AI impacts should receive much more attention alongside X-risk?
I am interested in impacts that are described nearly concretely enough to be the subject of a prediction tournament or prediction market, though some flaws are acceptable. For example, the impact "AI causes the extinction of people in the next 1000 years" has at least two flaws from the point of view of a prediction tournament: first, establishing that AI was responsible for an extinction event might not be straightforward, and second, if people are extinct there will be no one to resolve the question. Still, it's concrete enough for my purposes.
Please propose impacts as answers to this question, and propose only one potential impact per answer. You can also include reasons why you think the identified impact is a priority. If you want to discuss multiple impacts, or say something other than proposing an impact to consider, please post it as a comment instead. And, to reiterate, I'm interested in impacts you think should receive more attention overall, not just more attention within the EA community.
If you want to reduce the risk of going to some form of hell as much as possible, you ought to determine which sorts of "hells" have the highest probability of existing, and to what extent avoiding them is tractable. As far as I can tell, the most realistic "hells" are those resulting from bad AI alignment and those resulting from living in a simulation. Hells resulting from bad AI alignment can plausibly be avoided by contributing in some way to solving the AI alignment problem. It's not clear how hells resulting from living in a simulation could be avoided, but ways to avoid them might be discovered through further analysis of the different theoretical types of simulations we may be living in, such as in this map. Robin Hanson explored some of the potential utilitarian implications of the simulation hypothesis in his article How To Live In A Simulation. Furthermore, mind enhancement could also reduce S-risks: if you improve your general thinking abilities, you may discover new ways to reduce them.
A Christian or a Muslim could argue that you ought to convert to their religion in order to avoid going to hell. But a problem with Pascal's Wager-type arguments is the issue of tradeoffs. It's not clear that practicing a religion is the most effective way to avoid hell/S-risks. The time spent going to church, praying, and otherwise being devoted to a religion is time not spent thinking about AI safety and strategizing about ways to avoid S-risks. Working on AI safety, strategizing about ways to avoid S-risks, and trying to improve your thinking abilities would probably reduce your risk of going to some sort of hell more than, say, converting to Christianity would.
It mentions finding ways to travel to other universes, sending information to other universes, creating a superintelligence to figure out ways to avoid heat death, convincing the creators of the simulation not to turn it off, and so on. While these hypothetical ways to survive heat death involve a lot of speculative physics, they are more than just "defining survival".
Yet we live in a reality where happiness and suffering exist seemingly by accident: your nervous system is the result of millions of years of evolution, not the work of an intelligent designer.