I think (as do others) that advanced AI could have very large undesired impacts, such as causing the extinction of people. I also think, with higher confidence, that advanced AI is likely to have some large impacts on the way people live, without claiming to know exactly what those impacts will be. AI X-risk seems to be regarded as one of the most important potential impacts for AI safety researchers to focus on, particularly by people who think that promoting a long and prosperous future for humans and other living things is a top priority. Considering the amount of work on AI X-risk overall (not just within EA), should it receive a lot more attention? And what other AI impacts should receive a lot more attention alongside X-risk?
I am interested in impacts described in a manner that is nearly concrete enough to be the subject of a prediction tournament or prediction market; some flaws are acceptable. For example, the impact "AI causes the extinction of people in the next 1000 years" has at least two flaws from the point of view of a prediction tournament: first, establishing that AI was responsible for an extinction event might not be straightforward, and second, if people are extinct then there will be no one to resolve the question. It is nevertheless concrete enough for my purposes.
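To make that bar concrete, here is a minimal sketch in Python of how a proposed impact could be written down together with its resolution criteria and known flaws. The `ImpactClaim` structure and all of its field names are hypothetical, invented for illustration rather than drawn from any real forecasting platform:

```python
from dataclasses import dataclass, field

@dataclass
class ImpactClaim:
    """A proposed AI impact, phrased so a forecasting platform could (almost) score it."""
    statement: str            # the impact, stated as a checkable event
    resolve_by_year: int      # year by which the question should resolve
    resolution_criteria: str  # how a judge would decide YES/NO
    known_flaws: list[str] = field(default_factory=list)  # acceptable gaps in resolvability

# The running example from this post, with its two acknowledged flaws.
example = ImpactClaim(
    statement="AI causes the extinction of people in the next 1000 years",
    resolve_by_year=3025,  # roughly 1000 years out; illustrative only
    resolution_criteria="Resolves YES if people are extinct and AI is judged responsible.",
    known_flaws=[
        "establishing that AI was responsible might not be straightforward",
        "if people are extinct, there is no one left to resolve the question",
    ],
)

if __name__ == "__main__":
    print(example.statement)
    for flaw in example.known_flaws:
        print("  flaw:", flaw)
```

An impact proposed as an answer should, ideally, be stateable in roughly this form, even if some of the flaws listed cannot be fully removed.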
Please propose impacts as answers to this question, one potential impact per answer. You can also include reasons why you think the identified impact is a priority. If you want to discuss multiple impacts, or to say something other than proposing an impact to consider, please post it as a comment instead. And, to reiterate: I'm interested in impacts you think should receive more attention overall, not just more attention within the EA community.
Impact: AI causes the extinction of people in the next 1000 years.
Why is this a priority? Extinction events are very bad from the point of view of people who want the future to be big and utopian. The 1000-year time frame is (I think) long enough to accommodate most timelines for very advanced AI, but short enough that we don't have to worry about "a butterfly flaps its wings and 10 million years later everyone is dead" scenarios. While the event is speculative, it does not seem reasonable, given what we know right now, to assign it a vanishingly low probability. Finally, my impression is that while it is taken seriously in and near the EA community, outside the community it is largely not taken seriously to a degree commensurate with reasonable estimates of its subjective likelihood and severity.