Artificial intelligence (AI) is the exhibition by machines of the intellectual capacities characteristic of human beings, as well as the field of research aimed at creating machines with these capacities.
The literature on AI risk features several commonly used expressions that refer to various types or forms of artificial intelligence. These notions are not always used consistently.
As noted, artificial intelligence is the exhibition by machines of the intellectual capacities characteristic of human beings. But some authors use the term imprecisely, to refer to human-level AI or even to strong AI (a term that is itself highly imprecise).
Human-level artificial intelligence (HLAI) is AI that is at least as intelligent as the average or typical human. In one sense, human-level AI requires that the AI exhibit human-level ability in each of the capacities that constitute human intelligence. In another, weaker sense, the requirement is that these capacities, assessed in the aggregate, be at least equivalent to the aggregate of human capacities. An AI that is weaker than humans on some dimensions, but stronger than humans on others, may count as human-level in this weaker sense. (However, it is unclear how these different capacities should be traded off against one another, or what would ground such tradeoffs.)
Artificial general intelligence (AGI) is AI that not only exhibits high ability in a wide range of specific domains but also generalizes across these domains and displays other skills that are broad rather than narrow in scope. "Artificial general intelligence" is sometimes also used as a synonym for "human-level artificial intelligence".
High-level machine intelligence (HLMI) is AI that can carry out most human professions at least as well as a typical human. Vincent Müller and Nick Bostrom coined the expression to overcome the perceived deficiencies of existing terminology.
Finally, "strong artificial intelligence" (strong AI) is a multiply ambiguous expression that can mean either "artificial general intelligence", "human-level artificial intelligence" or "superintelligence", among other things.
Today, AI systems are better than even the smartest people at some intellectual tasks, such as chess, but much worse at others, such as writing academic papers. If AI systems eventually become as good as or better than humans at many of these remaining tasks, their impact will likely be transformative. Furthermore, in the extreme case that AI systems eventually become more capable than humans at all intellectual tasks, this would arguably be the most significant development in human history.
Possible impacts of progress in AI include accelerated scientific progress, large-scale unemployment, novel forms of warfare, and risks from unintended behavior in AI systems.