This is a linkpost for this LW / alignmentforum post. Summary:

I argue that an entire class of common arguments against short timelines is bogus, and provide weak evidence that anchoring to the human-brain-human-lifetime milestone is reasonable. 

In a sentence, my argument is that the complexity and mysteriousness and efficiency of the human brain (compared to artificial neural nets) is almost zero evidence that building TAI will be difficult, because evolution typically makes things complex and mysterious and efficient, even when there are simple, easily understood, inefficient designs that work almost as well (or even better!) for human purposes.

In slogan form: If all we had to do to get TAI was make a simple neural net 10x the size of my brain, my brain would still look the way it does. 

The case of birds & planes illustrates this point nicely. It is also a precedent for several other short-timelines talking points, such as the human-brain-human-lifetime (HBHL) anchor.





This is really persuasive to me, thanks for posting. Previously I’d heard arguments anchoring AGI timelines to the amount of compute used by the human brain, but I didn’t see much reason at all for our algorithms to use the same amount of compute as the brain. But you point to the example of flight, where all the tricky issues of how to get something to fly were quickly solved almost as soon as we built engines as powerful as birds. Now I’m wondering if this is a pattern we’ve seen many times — if so, I’d be much more open to anchoring AI timelines on the amount of compute used by the human brain (which would mean significantly shorter timelines than I’d currently expect).

So my question going forward would be: What other machines have humans built to mimic the functionality of living organisms? In these cases, do we see a single factor driving most progress, like engine power or computing power? If so, do machines perform as well as living organisms once they reach similar levels of this key variable? Or does the human breakthrough to performing on par with evolution come at a more random point, driven primarily by one-off insights or by a bunch of non-obvious variables?

Within AI, you could examine how much compute it took to mimic certain functions of organic brains. How much compute does it take to build human-level speech recognition or image classification, and how does that compare to the compute used in the corresponding areas of the human brain? (Joseph Carlsmith’s OpenPhil investigation of human level compute covered similar territory and might be helpful here, but I haven’t gone through it in enough detail to know.)

Does transportation offer other examples? Analogues between boats and fish? Land travel and fast mammals?

I’m having trouble thinking of good analogues, but I’m guessing they have to exist. AI Impacts’ discontinuities investigation feels like a similar type of question about examples of historical technological progress, and it seems to have proven tractable to research and useful once answered. I’d be very interested in further research in this vein — anchoring AGI timelines to human compute estimates seems to me like the best argument (even the only good argument?) for short timelines, and this post alone makes those arguments much more convincing to me.

Two good examples mentioned by Ajeya on the 80,000 Hours podcast: eyes vs cameras, and leaves vs solar panels.
