This is the third post in my attempt to convince the Future Fund Worldview Prize judges that "all of this AI stuff is a misguided sideshow". My first post was an extensive argument that unfortunately confused many people.
(The probability that Artificial General Intelligence will be developed)
My second post was much more straightforward, but it ended up focusing mostly on revealing the reaction that some "AI luminaries" have shown to my argument.
(Don't expect AGI anytime soon)
Now, as a result of answering many excellent questions that exposed where my argument caused confusion, I believe I am in a position to give a very clear and brief summary of the argument in point form.
To set the scene, the Future Fund is interested in predicting when we will have AI systems that can match human-level cognition: "This includes entirely AI-run companies, with AI managers and AI workers and everything being done by AIs." This is a pretty tall order. It means systems with advanced planning and decision-making capabilities. But this is not the first time people have predicted that we will have such machines. In my first article I reference a 1960 paper which states that the US Air Force predicted such a machine by 1980. That prediction was based on the same "look how much progress we have made, so AGI can't be too far away" argument we see today. There must be a new argument or belief if today's AGI predictions are to bear more fruit than the 1960 one did. My argument identifies this new belief, and then shows why the belief is wrong.
Part 1
1. Most of the prevailing cognitive theories involve classical symbol processing systems, with a combinatorial syntax and semantics, like formal logic. For example, theories of reasoning and planning involve logic-like processes, and natural language is widely thought to involve phrase structure grammars, just as programming languages such as Python do. (A toy illustration of such a symbol system follows this list.)
2. Good old-fashioned AI was (largely) based on the same assumption: that classical symbol systems are necessary for AI.
3. Good old-fashioned AI failed, showing the limitations of classical symbol systems.
4. Deep Learning (DL) is an alternative form of computation that does not involve classical symbol systems, and its amazing success shows that human intelligence is not based on classical symbol systems. Indeed, Geoff Hinton in his Turing Award speech proclaimed that "the success of machine translation is the last nail in the coffin of symbolic AI".
5. DL will be much more successful than symbolic AI because it is based on a better model of cognition: the brain. That is, the brain is a neural network, so clearly neural networks are going to be better models.
6. But hang on. DL is now very good at producing syntactically correct Python programs. By the reasoning of point 4, we should conclude that Python does not involve a classical symbol system, because a non-symbolic DL model can write Python. That conclusion is patently false, so the argument becomes a reductio ad absurdum: one of its steps must be wrong, and the obvious candidate is point 4, which gives us point 7.
7. The success of DL in performing some human task tells us nothing about the underlying human competence needed for the task. For example, natural language might well be produced by a generative grammar despite the fact that statistical methods currently outperform methods based on parsing.
8. Point 7 defeats point 5. There is no scientific reason to believe DL will be much more successful than symbolic AI was in attaining some kind of general intelligence.
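To make point 1 concrete, here is a minimal sketch of what a classical symbol system with a combinatorial syntax and semantics looks like: a toy propositional logic whose formulas are generated by phrase structure rules and whose meaning is computed compositionally from the meanings of the parts. The grammar, the token names, and the example formula are mine and purely illustrative; nothing in the argument depends on these particular choices.

```python
# Toy classical symbol system: a combinatorial syntax (phrase structure
# rules) plus a compositional semantics (the meaning of a formula is
# built from the meanings of its parts). Illustrative only.
#
# Grammar:
#   formula -> term ("or" term)*
#   term    -> factor ("and" factor)*
#   factor  -> "not" factor | "(" formula ")" | variable

def tokenize(text):
    return text.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Recursive-descent parser returning a nested-tuple syntax tree."""
    def formula(i):
        node, i = term(i)
        while i < len(tokens) and tokens[i] == "or":
            right, i = term(i + 1)
            node = ("or", node, right)
        return node, i

    def term(i):
        node, i = factor(i)
        while i < len(tokens) and tokens[i] == "and":
            right, i = factor(i + 1)
            node = ("and", node, right)
        return node, i

    def factor(i):
        if tokens[i] == "not":
            node, i = factor(i + 1)
            return ("not", node), i
        if tokens[i] == "(":
            node, i = formula(i + 1)
            return node, i + 1          # skip the closing ")"
        return ("var", tokens[i]), i + 1

    tree, _ = formula(0)
    return tree

def evaluate(tree, assignment):
    """Compositional semantics: a node's meaning is a function of its parts."""
    op = tree[0]
    if op == "var":
        return assignment[tree[1]]
    if op == "not":
        return not evaluate(tree[1], assignment)
    if op == "and":
        return evaluate(tree[1], assignment) and evaluate(tree[2], assignment)
    if op == "or":
        return evaluate(tree[1], assignment) or evaluate(tree[2], assignment)
    raise ValueError(f"unknown node: {op}")

if __name__ == "__main__":
    tree = parse(tokenize("not ( p and q ) or r"))
    print(tree)
    print(evaluate(tree, {"p": True, "q": False, "r": False}))  # True
```

The details are unimportant; what matters is that every well-formed expression is built by the rules from a finite stock of symbols, and its meaning is determined by how it was built. That is what "combinatorial syntax and semantics" means in point 1.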
Part 2
1. In fact, some of my work is already done for me, as many of the top experts concede that DL alone is not enough for "AGI". They propose a need for a symbolic system to supplement DL, in order to be able to do planning, high-level reasoning, abductive reasoning, and so on.
2. The symbolic system should be non-classical, because of Part 1, points 2 and 3. That is, we need something better than classical systems because good old-fashioned AI failed as a result of its assumptions about symbol systems.
3. DL-symbol systems (whatever those are) will be much better, because DL has already shown that classical symbol systems are not the right way to model cognitive abilities.
4. But Part 1, point 7 defeats Part 2, point 3. We don't know that DL-symbol systems (whatever those are) will be much better than classical AI, because DL has not shown anything about the nature of human cognition.
5. We have no good reason, only faith and marketing, to believe that we will accomplish AGI by pursuing the DL-based AI route. The fact that DL can do Python shows that it is good at mimicking symbolic systems when lots of example productions are available, as they are for natural language and Python. But it struggles in tasks like planning, where such examples are not available. (A toy sketch of classical symbolic planning follows this list.)
6. We should instead focus our attention on human-machine symbiosis, which explicitly designs systems that supplement rather than replace human intelligence.
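To make the planning point concrete, here is a minimal sketch of the classical, symbolic approach to planning: an explicit search over states described by logical facts, with actions given as preconditions and effects (STRIPS-style). The toy domain, the action names, and the goal are mine and purely illustrative, not anyone's proposed architecture.

```python
# Minimal sketch of classical symbolic planning (STRIPS-style):
# breadth-first search over sets of ground facts. Illustrative only.
from collections import deque

# Each action: (name, preconditions, facts added, facts deleted)
ACTIONS = [
    ("move(A,B)",    {"at(A)"},                  {"at(B)"},        {"at(A)"}),
    ("move(B,A)",    {"at(B)"},                  {"at(A)"},        {"at(B)"}),
    ("pick(key,B)",  {"at(B)", "key-at(B)"},     {"holding(key)"}, {"key-at(B)"}),
    ("unlock(door)", {"at(B)", "holding(key)"},  {"door-open"},    set()),
]

def plan(initial, goal):
    """Return a list of action names reaching a state that satisfies
    the goal, or None if no plan exists."""
    start = frozenset(initial)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, steps = queue.popleft()
        if goal <= state:                      # all goal facts hold
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:                   # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, steps + [name]))
    return None

if __name__ == "__main__":
    print(plan({"at(A)", "key-at(B)"}, {"door-open"}))
    # -> ['move(A,B)', 'pick(key,B)', 'unlock(door)']
```

The point is not that this particular planner is any good; it is that planning here is explicit symbol manipulation over facts and rules, with no training examples involved, which is exactly the kind of capability the experts say DL needs a symbolic supplement for.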
Well, I meant to communicate what I took you to mean: that no software model (or set of principles from a cognitive theory that has a programmatic representation) so far used in AI software represents actual human cognition, and that therefore, as I understood you, there is no reason to believe that AI software in the future will achieve a resemblance to human cognitive capabilities. If you meant something far different, that's OK.
AI researchers typically satisfy themselves with creating the functional equivalent of a human cognitive capability. They might not care that the models of information processing that they employ to design their software don't answer questions about how human cognition works.
Let's keep on agreeing that there is not an isomorphism between:
* software/hardware models/designs that enable AI capabilities.
* theoretical models of human cognitive processes, human biology, or human behaviors.
In other words, AI researchers don't have to theorize about processes like human language acquisition. All that matters is that the AI capabilities that they develop meet or exceed some standard of intelligence. That standard might be set by human performance or by some other set of metrics for intelligence. Either way, nobody expects that reaching that standard first requires that AI researchers understand how humans perform cognitive tasks (like human language acquisition).
It is obvious to me that advances in AI will continue and that contributions will come from theoretical models as well as hardware designs. Regardless of whether anyone ever explains the processes behind human cognitive capabilities, humans could make some suspiciously intelligent machines.
I follow one timeline for the development of world events. I can look at it over long or short time spans or block out some parts while concentrating on others, but the reality is that other issues than AGI shape my big picture view of what's important in future. Yes, I do wait and see what happens in AGI development, but that's because I have so little influence on the outcomes involved, and no real interest in producing AGI myself. If I ever make robots or write software agents, they will be simple, reliable, and do the minimum that I need them to do.
I have a hunch that humanoid robot development will be a faster approach to developing recognizably-intelligent AGI as opposed to concentrating on purely software models for AGI.
Improving humanoid robot designs could leapfrog issues potentially involved in:
but I think people already know that.
Conversely, concentrating on AI with narrow functionality serves purposes like:
and people already know that too.
AI with narrow functionality will cause technological unemployment just like AGI could. I think there's a future for AI with either specific or more general capabilities, but only because powerful financial interests will support that future.
And look, this contest that FTX is putting on has a big pot, most of which might not get awarded. Not only that, but not all submissions will even be "engaged with". That means that no one will necessarily read them. They will be filtered out. The FTX effort is a clever way to crowd-source, but it is counterproductive.
If you review the specifications and the types of questions, you don't see questions like:
Those questions are less sexy than questions about chances of doom or great wealth. So the FTX contest questions are about the timing of capability development. Now, if I want to be paid to time that development, I could go study engineering for a while and then try for a good job at the right company where I also drive that development. The fact is, you have to know something about capability development in order to time it. And that's what throws this whole contest off.
If you look at the supposed upside of AGI, underpaid AGI ($25/hr? Really?) doing work in an AI company, you don't see any discussion of AGI rights or autonomy. The whole scenario is implausible from several perspectives.
All to say, if you think that AGI fears are a distracting sideshow, or based on overblown marketing, then I am curious about your thoughts on the economic goals behind AGI pursuit. Are those goals also hype? Are they overblown? Are they appropriate? I wonder what you think.