In a presentation I'll be giving soon, I'll spend a bit of time explaining why AI might be an x-risk.
Can anyone point me to existing explanations of this that are as convincing as possible, brief (I won't be spending long on it), and (of course) demonstrate good epistemics?
The audience will be actuaries interested in ESG investing.
If someone fancies writing a brief explanation as an answer, feel free, but I'm mainly expecting links to content that already exists, since I'm sure there's loads of it.
I think it's important to give the audience some sort of analogy they're already familiar with, such as evolution producing humans, humans introducing invasive species into new environments, and viruses. These are all examples of "agents in complex environments that aren't malicious or Machiavellian, but disrupt the original group of agents anyway".
I don't believe these analogies are object-level enough to serve as arguments for AI x-risk in themselves, but they're a good way to help people quickly grasp the danger of a superintelligent, goal-directed agent.