I have generally been quite skeptical of the view that we are on the cusp of a revolution that will lead us to artificial general intelligence in the next 50 years or so.
Aside from the fundamental limitations of current AI systems, and the flaws of extrapolating their remarkable ability at narrow tasks towards more general learning by appealing to "exponential" growth, there is another issue with the discourse on AI that I want to highlight.
One of the primary reasons to believe that AGI will arrive in the near- to mid-term future comes from the predictions of experts working in the field, the majority of whom seem to think that we will have AGI by 2100 at the latest.
While there is every reason to attach credence to their perspective, it should be noted that deep learning, the framework that underpins most recent developments in AI, including language models like BERT and GPT-3 and strategy-game champions like AlphaGo, is notoriously hard to decipher from a theoretical perspective.
It would be a mistake to assume that the people who design, develop, and deploy these models necessarily understand why they are as successful as they are. This may sound like a strange statement to make, but the reality is that despite the incredible pace of progress across various frontiers of AI with deep learning, our knowledge of why it works -- the mathematical theory of it -- lags behind immensely.
To be clear, I am not at all suggesting that research scientists at Google or DeepMind have no knowledge at all of why the models they design and deploy work. They are certainly guided by various ideas and heuristics when deciding on the loss function, the type of attention mechanism to use, the iterative update to the reward, the overall architecture of the network, etc. However, there are two things to note here: first, a lot of the design is based on experimenting with various functional forms, wiring combinations, convolution structures, and parameter choices; second, the existence of heuristics and a high-level understanding of what is happening does not imply that there is a first-principles mathematical explanation for it.
There are people who study the theoretical side of deep learning, working towards establishing exact results and aiming to understand why the model training process is so incredibly successful. Progress there has been rather limited, and certainly well behind the state of the art in terms of performance. There are a lot of unusual things about deep learning, among them the fact that core concepts of conventional machine learning, such as overfitting, simply do not seem to apply. For a more technical view on this, watch this amazing talk by Sanjeev Arora where he explains how intriguing deep learning models and their training are.
This should be contrasted with physics, where our understanding of theories is much deeper and more fundamental. There is a very precise mathematical framework to characterize the physics of, say, electrons or quarks, and, at the other end of the spectrum, a model to understand cosmology. There is nothing even remotely comparable to that in deep learning.
Given all this, one should be more skeptical about predicted timelines for a qualitatively superior intelligence from experts in this field. The considerable gaps in our understanding suggest that expert opinion is perhaps guided less by some deeper insight into the learning and generalization process of AI models and more by a high-level examination of the rapid progress of AI; i.e., their views may be relatively closer to those of a lay person. Couple this with the fact that we have a very limited understanding of human consciousness and how it relates to the electrophysiological properties of the brain. Such limitations make it very hard to predict timelines with any degree of certainty.
If we don't have convincing evidence in favor of a timeline <50 years, and we also don't have convincing evidence in favor of a timeline ≥50 years, then we just have to say that this is a question on which we don't have convincing evidence of anything in particular. But we still have to take whatever evidence we have and make the best decisions we can. ¯\_(ツ)_/¯
(You don't say this explicitly but your wording kinda implies that ≥50 years is the default, and we need convincing evidence to change our mind away from that default. If so, I would ask why we should take ≥50 years to be the default. Or sorry if I'm putting words in your mouth.)
Lots of ingredients go into AGI, including (1) algorithms, (2) lots of inexpensive chips that can do lots of calculations per second, (3) technology for fast communication between these chips, (4) infrastructure for managing large jobs on compute clusters, (5) frameworks and expertise in parallelizing algorithms, (6) general willingness to spend millions of dollars and roll custom ASICs to run a learning algorithm, (7) coding and debugging tools and optimizing compilers, etc. Even if you believe that we've made no progress whatsoever on algorithms since the 1950s, we've made massive progress in the other categories. I think that alone puts us "significantly closer to AGI today than we were in the 1950s": once we get the algorithms, at least everything else will be ready to go, and that wasn't true in the 1950s, right?
But I would also strongly disagree with the idea that we've made no progress whatsoever on algorithms since the 1950s. Even if you think that GPT-3 and AlphaGo have absolutely nothing whatsoever to do with AGI algorithms (which strikes me as an implausibly strong statement, although I would endorse much weaker versions of that statement), that's far from the only strand of research in AI, let alone neuroscience. For example, there's a (IMO plausible) argument that PGMs and causal diagrams will be more important to AGI than deep neural networks are. But that would still imply that we've learned AGI-relevant things about algorithms since the 1950s. Or as another example, there's a (IMO misleading) argument that the brain is horrifically complicated and we still have centuries of work ahead of us in understanding how it works. But even people who strongly endorse that claim wouldn't also say that we've made "no progress whatsoever" in understanding brain algorithms since the 1950s.
Sorry if I'm misunderstanding.
I'm a bit confused by this; are you saying that the only possible AGI algorithm is "the exact algorithm that the human brain runs"? The brain is wired up by a finite number of genes, right?