I have learned a lot since my posts on the EA and LessWrong forums responding to the call from the Future Fund Worldview Prize. I received many insightful comments from people who just didn't want to give up on the idea of AGI. I decided to briefly summarise the main arguments from those comments, as well as from some related posts, because I feel the discussion has really clarified the essential points.

The announcement is quite specific that they want to call into question the "Future Fund's fundamental assumptions about the future of AI", namely that "artificial intelligence (AI) is the development most likely to dramatically alter the trajectory of humanity this century". These are bold beliefs, considering the possibility that "all of this AI stuff is a misguided sideshow". It was my intention to show that in fact it IS a misguided sideshow, which calls into question the size of the impact AI will have on society. But this led many people to misinterpret my views on whether or not AI might pose dangers for us in the future. My feeling on this is exactly what is stated in the call: "AI is already posing serious challenges: transparency, interpretability, algorithmic bias ....". I agree. The real question I was concerned with is whether or not a more profound version of AI is coming soon, one that could be called AGI or, even more dramatically, Superintelligence (Nick Bostrom). This is where I think the claims and predictions are unrealistic.

There are many arguments people advance in support of their belief that there will be such a change. As far as I can tell, most of these boil down to a kind of extrapolation, a view that is eloquently captured by another post in this stream: "We've already captured way too much of intelligence with way too little effort. Everything points towards us capturing way more of intelligence with very little additional effort." The problem with this argument is that it is very similar to arguments made in the late 1950s, in a previous wave of AI optimism. For example, Licklider (1960) points out that "Even now, Gelernter's IBM-704 program for proving theorems in plane geometry proceeds at about the same pace as Brooklyn high school students ..... there are, in fact, several theorem-proving, problem-solving, chess-playing, and pattern-recognizing programs ... capable of rivaling human intellectual performance in restricted areas..", and the U.S. Air Force "estimated that it would be 1980 before developments in artificial intelligence make it possible for machines alone to do much thinking or problem solving of military significance." We all know where those estimates ended up. One possible saving grace lies in Nick Bostrom's argument that the computational power available in those days was not sufficient, so we should not have expected the algorithms to really achieve these goals, and more powerful hardware might reverse the failure. Nevertheless, Bostrom also concedes that advances in an alternative "bottom-up" form of learning and cognition are probably also required.

This is the second prong in the argument for AGI: that we are now firmly entrenched in a "bottom-up" paradigm that will overcome the limitations of past approaches. The argument is that Deep Learning (DL) has not only given us models that can perform impressive tasks in visual perception and language, but has also given us a new scientific paradigm that better approximates the properties of human thinking and learning. In fact, the need for a paradigm shift is now acknowledged by both critics and advocates of AI. In this exchange, Gary Marcus and Yann LeCun both paraphrase the old parable: "Okay, we built this ladder, but we want to go to the moon, and there's no way this ladder is going to get us there" (LeCun). What we need is a rocket ship, and Browning and LeCun argue that this rocket ship must jettison old ideas like Newell and Simon's "Physical Symbol System Hypothesis", which states that "A physical symbol system has the necessary and sufficient means for intelligent action." This, then, is the often unstated reason for the optimism that this time the current successes will continue all the way to AGI.

My argument defeats the Browning and LeCun position that DL and gradient descent learning supply an alternative that can dispense with the Physical Symbol System Hypothesis. This undermines the main reason to believe that DL approaches will deliver on their promises any more than the systems of the 1950s did. Many people misunderstood the argument and thought I was claiming to prove that AGI is impossible. But this is not true. The argument undermines the claim that the DL systems that are currently the object of research are sufficient to achieve AGI. It leaves open the possibility that in 500 years we might finally get there using some new advances that we cannot yet imagine.

Some people thought my argument was that AI had to discover exactly the same algorithms that the human mind uses, but this is not the case. We know even from computer programming that similar functions can be implemented in many different ways, and even in different programming languages. The point is that there must be certain fundamental properties of any implementation, such as the use of variables and control structures, a syntax of expressions that can be used by a compiler/interpreter, and so on. This is an analogy of course, but my claim is that some cognitive processes seem to involve symbol systems, that DL has given us no reason to believe this is false, and that DL therefore cannot eliminate the need for these systems. Another misunderstanding is the idea that neural networks can't implement symbol systems. In fact they can, as the classic paper "A Logical Calculus of the Ideas Immanent in Nervous Activity" by McCulloch and Pitts shows. The point is that such neural networks are severely limited, and there is at the moment no serious effort to implement systems with complex symbol-manipulating abilities. In fact, this is precisely the kind of effort that Browning and LeCun discourage.
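To make the McCulloch and Pitts point concrete, here is a minimal sketch (my own illustration, not code from their paper) of how threshold neurons with fixed weights can compute Boolean functions and be composed into a small logical circuit:

```python
# Minimal sketch of a McCulloch-Pitts style threshold unit: binary inputs,
# fixed weights, and the unit fires iff the weighted sum reaches a threshold.
# Illustrative only; it does not follow the notation of the 1943 paper.

def mp_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of binary inputs meets the threshold, else 0."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

def AND(x1, x2):
    return mp_neuron([x1, x2], weights=[1, 1], threshold=2)

def OR(x1, x2):
    return mp_neuron([x1, x2], weights=[1, 1], threshold=1)

def NOT(x):
    # Inhibition modelled as a negative weight with a threshold of 0.
    return mp_neuron([x], weights=[-1], threshold=0)

def XOR(x1, x2):
    # A tiny "logical circuit" built by composing threshold neurons.
    return AND(OR(x1, x2), NOT(AND(x1, x2)))

if __name__ == "__main__":
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", XOR(a, b))
```

The same example also illustrates the limitation mentioned above: the weights and thresholds are wired by hand, and nothing in such a circuit provides variables, bindings, or compositional structure on its own.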

So there it is. People who believe that AGI is imminent do so because the prevailing winds are saying that we are finally onto something that is a closer match to human cognition than anything we have ever tried. The winds sound like a storm, but they are really just a whimper.

Comments

My argument defeats the Browning and LeCun position that DL and gradient descent learning supply an alternative that can dispense with the Physical Symbol System Hypothesis. This undermines the main reason to believe that DL approaches will deliver on their promises any more than the systems of the 50s did.

The argument was hard for me to follow, and at the end your conclusion was hard to determine. You discussed it with several people in the comments, but even after some changes, I think it could use a redo.

So there it is. People who believe that AGI is imminent, do so because the prevailing winds are saying that we are finally onto something that is a closer match to human cognition than anything we have ever tried. The winds sound like a storm, but they are really just a whimper. 

I understand what you mean, but the FTX fund is serious, and so are the industry giants interested in this last mile of automation. They don't have to listen to the wind; they are throwing money at this stuff in one way or another. While I understand your argument against the hype, I doubt it makes a satisfying answer to the overall question of timing and dangers. Machine learning technology is a trend in the software industry, where it has immediate applications. However, research organizations are savvy. They will look at other ideas, and probably are right now.

I would like to see your thinking put in writing about the dangers of AI, particularly if you can provide historical context for a convincing argument about simpler AI still leading to existential crises.

Hi Noah, thanks for the comment. I think there are a lot of possible questions that I did not tackle. My main interest was to show people an argument that AI won't proceed past the pattern recognition stage in the foreseeable future, no matter how much money is thrown at it by serious people. As I showed in another post, I have good reason to believe that the argument is solid.

The dangers of current AI are real, but I am not really involved in trying to estimate that risk.
