This is the third post about my argument to try to convince the Future Fund Worldview Prize judges that "all of this AI stuff is a misguided sideshow". My first post was an extensive argument that unfortunately confused many people.
(The probability that Artificial General Intelligence will be developed)
My second post was much more straightforward, but ended up focusing mostly on revealing the reaction that some "AI luminaries" have shown to my argument.
(Don't expect AGI anytime soon)
Now, as a result of answering many excellent questions that exposed the confusions caused by my argument, I believe I am in a position to make a very clear and brief summary of the argument in point form.
To set the scene, the Future Fund is interested in predicting when we will have AI systems that can match human-level cognition: "This includes entirely AI-run companies, with AI managers and AI workers and everything being done by AIs." This is a pretty tall order. It means systems with advanced planning and decision-making capabilities. But this is not the first time people have predicted such machines. In my first article I referenced a 1960 paper which states that the US Air Force predicted such a machine by 1980. That prediction was based on the same "look how much progress we have made, so AGI can't be too far away" argument we see today. There must be a new argument or belief if today's AGI predictions are to bear more fruit than they did in 1960. My argument identifies this new belief, and then shows why the belief is wrong.
Part 1
1. Most of the prevailing cognitive theories involve classical symbol processing systems (with a combinatorial syntax and semantics, like formal logic). For example, theories of reasoning and planning involve logic-like processes, and natural language is thought by many to be governed by phrase structure grammars, just as Python is. (A toy illustration of such a grammar follows this list.)
2. Good old-fashioned AI was (largely) based on the same assumption: that classical symbol systems are necessary for AI.
3. Good old-fashioned AI failed, showing the limitations of classical symbol systems.
4. Deep Learning (DL) is an alternative form of computation that does not involve classical symbol systems, and its amazing success shows that human intelligence is not based on classical symbol systems. In fact, Geoff Hinton in his Turing Award speech proclaimed that "the success of machine translation is the last nail in the coffin of symbolic AI".
5. DL will be much more successful than symbolic AI because it is based on a better model of cognition: the brain. That is, the brain is a neural network, so clearly neural networks are going to be better models.
6. But hang on. DL is now very good at producing syntactically correct Python programs. By the logic of point 4, we should conclude that Python does not involve a classical symbol system, because a non-symbolic DL model can write Python. That conclusion is patently false, so the argument becomes a reductio ad absurdum: one of its steps must be wrong, and the obvious candidate is point 4. Rejecting it gives us point 7.
7. The success of DL in performing some human task tells us nothing about the underlying human competence needed for the task. For example, natural language might well be the product of a generative grammar, even though statistical methods currently outperform methods based on parsing.
8. Point 7 defeats point 5. There is no scientific reason to believe DL will be much more successful than symbolic AI was in attaining some kind of general intelligence.
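To make the phrase-structure-grammar idea in point 1 concrete, here is a minimal sketch of what "combinatorial syntax" means. The rules and vocabulary are invented purely for illustration: a handful of rewrite rules combine a small set of symbols into an open-ended set of well-formed strings, which is the sense in which both English syntax and Python's syntax are claimed to be classical symbol systems.

```python
import random

# A toy phrase structure grammar: each non-terminal rewrites as a
# sequence of non-terminals and/or terminal words.
GRAMMAR = {
    "S":   [["NP", "VP"]],
    "NP":  [["Det", "N"], ["Det", "Adj", "N"]],
    "VP":  [["V", "NP"], ["V"]],
    "Det": [["the"], ["a"]],
    "Adj": [["small"], ["symbolic"]],
    "N":   [["model"], ["grammar"], ["argument"]],
    "V":   [["generates"], ["defeats"]],
}

def generate(symbol="S"):
    """Recursively expand a symbol into a list of terminal words."""
    if symbol not in GRAMMAR:          # terminal word: nothing left to rewrite
        return [symbol]
    words = []
    for part in random.choice(GRAMMAR[symbol]):
        words.extend(generate(part))
    return words

for _ in range(3):
    print(" ".join(generate()))
# e.g. "the small model defeats a grammar"
```

The output strings are "grammatical" only because each one can be derived from the rules; that derivational structure, not any statistics over examples, is what the classical picture takes to underlie the competence.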
Part 2
1. In fact, some of my work is already done for me, as many of the top experts concede that DL alone is not enough for "AGI". They propose a symbolic system to supplement DL, in order to be able to do planning, high-level reasoning, abductive reasoning, and so on.
2. The symbolic system should be non-classical, because of Part 1, points 2 and 3. That is, we need something better than classical systems, because good old-fashioned AI failed as a result of its assumptions about symbol systems.
3. DL-symbol systems (whatever those are) will be much better, because DL has already shown that classical symbol systems are not the right way to model cognitive abilities.
4. But Part 1, point 7 defeats Part 2, point 3. We don't know that DL-symbol systems (whatever those are) will be much better than classical AI, because DL has not shown anything about the nature of human cognition.
5. We have no good reason, only faith and marketing, to believe that we will accomplish AGI by pursuing the DL-based AI route. The fact that DL can do Python shows that it is good at mimicking symbolic systems when lots of example productions are available, as they are for natural language and Python (see the sketch after this list). But it struggles in tasks like planning, where such examples aren't there.
6. We should instead focus our attention on human-machine symbiosis, which explicitly designs systems that supplement rather than replace human intelligence.
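As an aside on the Python claim in point 5: "syntactically correct" is not a fuzzy notion here. Python's syntax is fixed by a classical phrase structure grammar, and the language's own parser decides mechanically whether a string belongs to it, which is what makes DL output gradable against it at all. A minimal illustration (the two candidate strings are made up):

```python
import ast

# Two candidate "model outputs": one well-formed, one not.
candidates = [
    "def add(a, b):\n    return a + b\n",
    "def add(a, b):\n    return a +\n",
]

for source in candidates:
    try:
        ast.parse(source)          # Python's own grammar decides membership
        print("syntactically valid:", repr(source))
    except SyntaxError as err:
        print("syntax error:", err.msg)
```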
Thanks Noah for your really interesting piece. I actually think we agree on most things. I certainly agree that AI can produce powerful systems without enlightening us about human cognition, or following the same principles. I think chess-playing programs were among the first to demonstrate that, because they used massive search trees and lookahead algorithms that no human could carry out.
Where we diverge, I think, is when we talk about more general skills like what people envision when they talk about "AGI". Here I think the purely engineering approach won't work, because it won't find the solution by learning from observation. Consider, for example, abductive reasoning: inferring the best explanation for something you observe. For example: "Walking along the beach, you see what looks like a picture of Winston Churchill in the sand. It could be that, as in the opening pages of Hilary Putnam's (1981), what you see is actually the trace of an ant crawling on the beach. The much simpler, and therefore (you think) much better, explanation is that someone intentionally drew a picture of Churchill in the sand. That, in any case, is what you come away believing." (https://stanford.library.sydney.edu.au/archives/spr2013/entries/abduction/)
To be sure, no symbol-based theory can answer the question of how we perform abductive reasoning. But, as Jerry Fodor argues in his book "The Mind Doesn't Work That Way", connectionist theories can't even ask the question.
Another example follows from the logic example in my first post. That is, we can have complex formulas of propositional logic whose truth values are determined by the truth values of their constituents. The question of satisfiability is whether there is any assignment of truth values to the constituents that renders the whole formula true. This is another case where DL can't even ask the question.
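For concreteness, here is a minimal sketch of what the satisfiability question asks, written as a brute-force check over all truth assignments (the example formula is invented for illustration; real SAT solvers are far more sophisticated):

```python
from itertools import product

def satisfiable(formula, variables):
    """Return a satisfying assignment for a propositional formula, or None.

    `formula` is a function from {variable: bool} to bool;
    `variables` lists the propositional constituents it mentions.
    """
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

# (p or q) and (not p or r) and (not r)
f = lambda v: (v["p"] or v["q"]) and (not v["p"] or v["r"]) and (not v["r"])
print(satisfiable(f, ["p", "q", "r"]))
# {'p': False, 'q': True, 'r': False}
```

The question itself is posed over discrete constituents and their combinations, which is exactly the kind of structure I am arguing a purely connectionist model has no native vocabulary for.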
For these examples I really do think we need machines which, to some extent, rely on similar principles to the human mind. I think this is also true for complex planning, etc.
As for the last part, I am a little sad about the economic motives of AI. At the very beginning the biggest use of the technology was to figure out which link people would click; advertising was the biggest initial driver of this magic technology. Fortunately we have since found more important uses for it in fields like medical technology, farming, and a few other applications I have heard of, mainly where image recognition is important. That was a significant step forward. Self-driving cars are a telling story: very good in conditions where image recognition is all you need, but a total failure in more complex situations where, for example, abductive reasoning is needed.
But still, a lot of the monetary drivers are companies like Facebook and Google, who want to support their advertising revenue in one way or another.