
cveres

48 karma · Joined Sep 2022

Comments (38)

"The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information" is one of the classic papers of Cognitive Psychology. Is it clickbait?
What about 

The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology.

Fodor’s Guide to Mental Representation: The Intelligent Auntie’s Vade-Mecum.

What Darwin Got Wrong.

Tom Swift and His Procedural Grandmother.

 

Clever titles aren't always clickbait. 

I also wrote a commentary which was downvoted without any comments. It later turned out that many people simply didn't like my title, which was negative on AI and, I admit, a little flamboyant. I changed the title and some people withdrew their downvotes.

This makes the voting system rather dubious in my opinion.

Hi Noah, thanks for the comment. I think there are a lot of possible questions that I did not tackle. My main interest was to show people an argument that AI won't proceed past the pattern recognition stage in the foreseeable future, no matter how much money is thrown at it by serious people. As I showed in another post, I have good reason to believe that the argument is solid.

The dangers of current AI are real but I am not really involved in trying to estimate that risk.

Hmmm. I hope we are not talking past each other here. I realise that the AI winter will be a failure of AGI specifically. But DL as an analysis tool is so useful that "AI" won't completely disappear. Nor will funding, of course, though I suspect it will be reduced once the enthusiasm dies down.

So I hope my current submission is not missing the mark on this, as I don't see any contradiction in my view regarding an "AI winter".

Thanks for the comments, Noah.

I also agree that the "AI winter" will be different this time, simply because the current AI summer has provided useful tools for dealing with big data, which will always find uses. Expert systems of old had very limited uses and a high cost of entry. DL models have a relatively low cost of entry, and most businesses have some problems that could benefit from analysis.

Sorry, I think you are misunderstanding the reductio argument. That argument simply undermines the claim that natural language is not based on a generative phrase structure grammar, i.e. the claim that non-symbolic DL is the "proper" model of language. In fact they are called "language models". I claim they are not models of language, and therefore there is no reason to discard symbolic models ... which is where the need for symbol manipulation comes from. Hence a very different sort of architecture from current DL.

And of course we can point to differences between artificial and biological networks. I didn't, because there are too many! One of the big ones is back propagation, THE major reason we have ANNs in the first place, and it is completely biologically implausible: there is no back propagation in the brain.

Thanks Noah for your really interesting piece. I actually think we agree on most things. I certainly agree that AI can produce powerful systems without enlightening us about human cognition, or following the same principles. I think chess-playing programs were among the first to demonstrate that, because they used massive search trees and lookahead algorithms which no human could carry out.
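To make concrete what I mean by lookahead that no human could carry out, here is a minimal Python sketch of depth-limited minimax search. The `game` interface (legal_moves, apply, evaluate, is_terminal) is hypothetical and purely for illustration; a real chess engine adds alpha-beta pruning, move ordering and much more, but the brute-force shape of the algorithm is the point.

```python
# Minimal sketch of depth-limited minimax lookahead (illustrative only).
# The `game` object and its methods are hypothetical placeholders.

def minimax(state, game, depth, maximizing=True):
    """Best evaluation reachable for the side to move, searching `depth` plies ahead."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)              # static evaluation of the position
    children = (
        minimax(game.apply(state, move), game, depth - 1, not maximizing)
        for move in game.legal_moves(state)
    )
    return max(children) if maximizing else min(children)
```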

Where we diverge I think is when we talk about more general skills like what people envision when they talk about "AGI". Here I think the purely engineering approach won't work, because it won't find the solution by learning from observation. For example, consider abductive reasoning: inferring the best explanation for some things you observe. For example: "Walking along the beach, you see what looks like a picture of Winston Churchill in the sand. It could be that, as in the opening pages of Hilary Putnam's (1981), what you see is actually the trace of an ant crawling on the beach. The much simpler, and therefore (you think) much better, explanation is that someone intentionally drew a picture of Churchill in the sand. That, in any case, is what you come away believing." (https://stanford.library.sydney.edu.au/archives/spr2013/entries/abduction/)

To be sure, no symbol-based theory can answer the question of how we perform abductive reasoning. But, as Jerry Fodor argues in his book "The mind doesn't work that way", connectionist theories can't even ask the question.

Another example follows from the logic example in my first post. That is, we can have complex formulas of propositional logic, whose truth values are determined by the truth values of their constituents. The question of satisfiability is whether there is any assignment of truth values to the constituents which renders the whole formula true. This is another case where DL can't even ask the question.
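To illustrate what asking the satisfiability question looks like, here is a minimal Python sketch (purely illustrative, with the formula written as an ordinary Python function): brute-force enumeration of every assignment of truth values to the constituents.

```python
from itertools import product

def satisfiable(formula, variables):
    """Try every assignment of truth values; return a satisfying one, or None."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment          # the whole formula comes out true
    return None                        # no assignment works: unsatisfiable

# Example: (p or q) and (not p or r) is satisfied by p=False, q=True, r=False.
print(satisfiable(lambda a: (a["p"] or a["q"]) and (not a["p"] or a["r"]),
                  ["p", "q", "r"]))
```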

For these examples I really do think we have to have machines which, to some extent, rely on similar principles to the human mind. I think this is also true for complex planning, etc.

As for the last part, I am a little sad about the economic motives of AI. At the very beginning the biggest use of the technology was to figure out which link people would click; advertising was the biggest initial driver of this magic technology. Fortunately we have since had more important uses for it in fields like medical technology, farming, and a few other applications I have heard of, mainly where image recognition is important. That was a significant step forward. Self-driving cars are a telling story: very good in conditions where image recognition is all you need, but they fail totally in more complex situations where, for example, abductive reasoning is needed.

But still, a lot of the monetary drive comes from companies like Facebook and Google, who want to support their advertising revenue in one way or another.

I think I may have said something to confuse the issue. Artificial neural networks certainly ARE capable of representing classical symbolic computations. In fact the first neural networks (e.g. the perceptron) did just that. They typically do it with local representations, where individual nodes assume the role of representing a given variable. But these were not very good at other tasks, like generalisation.
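As a minimal illustration of that kind of local representation, here is a sketch (in Python, with hand-chosen weights purely for illustration) of a single threshold unit whose input nodes each stand for one propositional variable, computing AND or OR depending only on the threshold.

```python
def perceptron(inputs, weights, threshold):
    """Classic threshold unit: output 1 if the weighted sum of inputs reaches the threshold."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Each input node locally represents one variable (p, q); the weights are hand-chosen.
AND = lambda p, q: perceptron([p, q], weights=[1, 1], threshold=2)
OR  = lambda p, q: perceptron([p, q], weights=[1, 1], threshold=1)

print([AND(p, q) for p in (0, 1) for q in (0, 1)])   # [0, 0, 0, 1]
print([OR(p, q)  for p in (0, 1) for q in (0, 1)])   # [0, 1, 1, 1]
```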

More advanced distributed networks emerged, with DL being the newest incarnation. These have representations which make it very difficult (if not impossible) to dedicate nodes to variables. That does not worry the architects, because they specifically believe that the non-localised representation is what makes these networks so powerful (see Bengio, LeCun and Hinton's article for their Turing Award).

Turning to real neurons, the fact is that we really don't know all that much about how they represent knowledge. We know where they tend to fire in response to given stimuli, we know how they are connected, and we know that they have some hierarchical representations. So I can't give you a biological explanation of how neural ensembles can represent variables. All I can do is give you arguments that humans DO perform symbolic manipulation on variables, so somehow the brain has to be able to encode this.

If you can make an artificial network somehow do this eventually then fine. I will support those efforts. But we are nowhere near that, and the main actors are not even pushing in that direction. 

I'm not sure what the Future Fund cares about, but they do go to some length defining what they mean by AGI, and they do care about when this AGI will be achieved. This is what I am responding to.
