
An AI risk is a catastrophic or existential risk arising from the creation of advanced artificial intelligence (AI).

Developments in AI have the potential to enable people around the world to flourish in hitherto unimagined ways. Such developments might also give humanity tools to address other sources of risk.

Despite this, AI also poses risks of its own. AI systems sometimes behave in ways that surprise people. At present, such systems are usually narrow in their capabilities - they may excel at Go, or at minimizing power consumption in a server facility, but they cannot perform other tasks. If people built a machine intelligence that was a sufficiently good general reasoner, or an even better general reasoner than people are, it might become difficult for humans to interfere with its functioning. If it then behaved in ways that did not reflect human values, it could pose a real danger to humanity. Such a machine intelligence might use its intellectual superiority to gain a decisive strategic advantage, and if its goals were incompatible with human flourishing, it could pose an existential risk.

Note that AI could pose an existential risk without being sentient, gaining consciousness, or having any ill will towards humanity. 
