This is a linkpost for https://www.lesswrong.com/posts/amK9EqxALJXyd9Rb2/paths-to-high-level-machine-intelligence
In this post, we map out cruxes of disagreement relevant for AI timelines and paths to high-level machine intelligence (HLMI).
We examine both (1) hardware progression and (2) AI progression and requirements. While (1) is relatively straightforward, the bulk of the post focuses on (2). For (2), we consider three methods of estimating AI timelines: an inside-view "gears-level" model of specific pathways to HLMI, analogies between HLMI and other developments, and extrapolations of automation and progress in AI subfields.
For the inside-view estimate, the pathways we consider include: current deep learning plus "business-as-usual" advances, hybrid statistical-symbolic AI, whole brain emulation, and so on. For each pathway, we consider hardware requirements (compared with the hardware available, as estimated in (1)) and software requirements (which depend on various cruxes, such as the creation of an adequate environment or sufficient brain-scanning technology).
The other two estimation methods are likewise dependent on further cruxes, such as whether or not algorithmic progress is mainly driven by hardware progress.
This post is part of a project in collaboration with David Manheim, Aryeh Englander, Sammy Martin, Issa Rice, Ben Cottier, Jérémy Perret, Ross Gruetzemacher, and Alexis Carlier.
We think three main groups of people would benefit from reading the post:
- Those who don't understand why different smart, knowledgeable people disagree so much on these topics, and would like to understand better
- Those who are trying to form their own views on these topics, and are unsure what factors to consider
- Those who already have a general understanding of most of the key disagreements, but would like to dig deeper into others
Again, here's a link to the post: https://www.lesswrong.com/posts/amK9EqxALJXyd9Rb2/paths-to-high-level-machine-intelligence