Artificial intelligence (AI) is the set of intellectual capacities characteristic of human beings exhibited by machines, as well as the area of research aimed at creating machines with these capacities.
The literature on AI risk features several commonly used expressions that refer to various types or forms of artificial intelligence. These notions are not always used consistently.
As noted, artificial intelligence is the set of intellectual capacities characteristic of human beings exhibited by machines. But some authors use the term imprecisely, to refer to human-level AI or even to strong AI (a term which itself is very imprecise).
Human-level artificial intelligence (HLAI) is AI that is at least as intelligent as the average or typical human. In one sense, human-level AI requires that the AI exhibit human-level ability in each of the capacities that constitute human intelligence. In another, weaker sense, the requirement is only that these capacities, assessed in the aggregate, are at least equivalent to the aggregate of human capacities. An AI that is weaker than humans on some dimensions, but stronger than humans on others, may count as human-level in this weaker sense. (However, it is unclear how these different capacities should be traded off against one another, or what would ground such tradeoffs.) "Human-level artificial intelligence" is sometimes also used as a synonym for "artificial general intelligence".
Artificial general intelligence (AGI) is AI that not only exhibits high ability in a wide range of specific domains but can also generalize and optimize across those domains, displaying skills that are broad rather than narrow in scope.
High-level machine intelligence (HLMI) is AI that can carry out most human professions at least as well as a typical human. Vincent Müller and Nick Bostrom coined the expression to overcome the perceived deficiencies of existing terminology.
Finally, "strong artificial intelligence" (strong AI) is a multiply ambiguous expression that can mean "artificial general intelligence", "human-level artificial intelligence", or "superintelligence", among other things.
Today, AI systems are better than even the smartest people at some intellectual tasks, such as chess, but much worse at others, such as writing academic papers. If AI systems eventually become as good as or better than humans at many of these remaining tasks, their impact will likely be transformative. Furthermore, in the extreme case that AI systems eventually become more capable than humans at all intellectual tasks, this would arguably be the most significant development in human history.
Possible impacts of progress in AI include accelerated scientific progress, large-scale unemployment, novel forms of warfare, and risks from unintended behavior in AI systems.
Christiano, Paul (2014) Three impacts of machine intelligence, Rational Altruist, August 23.
An example of the latter is Muehlhauser, Luke (2013) When will AI be created?, Machine Intelligence Research Institute, May 16.
AI Impacts (2014) Human-level AI, AI Impacts, January 23.
Muehlhauser, Luke (2013) What is intelligence?, Machine Intelligence Research Institute, June 19.
See Pennachin, Cassio & Ben Goertzel (2007) Contemporary approaches to artificial general intelligence, in Ben Goertzel & Cassio Pennachin (eds.) Artificial General Intelligence, Berlin: Springer, pp. 1–30. The expression was popularized, but not coined, by Cassio Pennachin and Ben Goertzel. See Goertzel, Ben (2011) Who coined the term "AGI"?, Ben Goertzel's Blog, August 28.
Müller, Vincent C. & Nick Bostrom (2016) Future progress in artificial intelligence: a survey of expert opinion, in Vincent C. Müller (ed.) Fundamental Issues of Artificial Intelligence, Cham: Springer International Publishing, pp. 555–572.
Wikipedia (2021) Strong AI, Wikipedia, October 18.