Hi Harrison, thanks for stating what I guess a few people are thinking: it's a bit of a clickbait title. I do think, though, that non-exponential growth is much more likely than exponential growth, simply because an exponential takeoff would require that no constraint binds at all, while it's enough for a single constraint to kick in (maybe even one I didn't consider here) to stop exponential growth.
I'd be curious about the methodological overhang, though. Are you aware of any posts or articles discussing this further?
Thanks for this, Thomas! See my answer to titotal, which addresses the algorithmic-efficiency question in general. Note that if we followed the hand-wavy "evolutionary transfer learning" argument, it would weaken the existence proof for the sample efficiency of the human brain: the brain isn't a general-purpose tabula rasa. But I do agree with you that we'll probably find a better algorithm that doesn't scale this badly with data and can extract knowledge more efficiently.
However, I'd argue that, as before, even if we find a much, much more efficient algorithm, we are ultimately limited by the growth of knowledge and the predictability of our world. Epoch estimates that we'll run out of high-quality text data next year, and I would argue that this is the most knowledge-dense data we have. So once an AI has learnt all this text, however efficient its algorithms, it'll have to start generating new knowledge itself, which is much more cumbersome than "just" absorbing existing knowledge.
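For intuition, here's a toy back-of-envelope of the exhaustion logic; all numbers are my own illustrative placeholders, not Epoch's actual estimates:

```python
# Toy sketch: if dataset demand grows geometrically while the stock of
# high-quality text is fixed, demand overtakes the stock within a few years.
# All numbers below are made-up placeholders, not Epoch's figures.

stock_tokens = 1e13      # assumed fixed stock of high-quality text tokens
tokens_used = 2e12       # assumed tokens consumed by a frontier run today
annual_growth = 2.0      # assumed yearly growth factor of dataset size

years = 0
while tokens_used < stock_tokens:
    tokens_used *= annual_growth
    years += 1

print(f"Demand exceeds the stock after ~{years} years under these assumptions.")
```

The point isn't the specific year; it's that any fixed stock gets exhausted quickly under geometric growth in data demand.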
Hey Steve, thanks for those thoughts! I don't think I'm more qualified than the Wikipedia community to argue for or against Moore's law, which is why I just quoted them, so unfortunately I can't offer more thoughts on that.
But even if Moore's law continued forever, I think the data argument would still kick in. With infinite compute but limited information to learn from, you still end up with a limited model. Applying infinite compute to the MNIST dataset will give you a model that isn't much better than the latest Kaggle leader on that dataset.
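A minimal sketch of that plateau, using a small synthetic dataset as a stand-in for MNIST (the dataset, model family, and widths are illustrative assumptions, not a real benchmark):

```python
# Growing model capacity on a small, fixed dataset: test accuracy
# saturates once the data, not the model, is the binding constraint.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# A fixed, modest dataset as a stand-in for "limited information".
X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for width in [4, 16, 64, 256, 1024]:   # stand-in for "more compute"
    clf = MLPClassifier(hidden_layer_sizes=(width,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
    print(f"width={width:4d}  test accuracy={clf.score(X_te, y_te):.3f}")
```

Past a modest width, the extra capacity stops buying accuracy on the fixed dataset, which is the infinite-compute-on-MNIST intuition in miniature.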
So we end up again at the more hand-wavy arguments about limits to the growth of knowledge and to the predictability of our world in general. I'd be curious where I'm losing you there.
Thanks for your thoughts! When writing this up I also felt that the algorithm argument was the weakest, so let me answer from two perspectives:
Thanks for taking the time to formalize this a bit more. I think you're capturing my ideas quite well, and indeed I can't think of a way this could scale exponentially. Your point about "let's remove the human bottleneck" goes in the direction of the last paragraph on simulation, where I suggest that you could parallelize knowledge acquisition. But as I argue there, I don't think that can realistically scale exponentially.
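To make that concrete, here's a toy simulation of the bottleneck (my own illustrative formalization with made-up parameters, not your model): compute doubles every step, but knowledge only arrives through real-world experiments with fixed throughput, so knowledge growth turns linear as soon as the bottleneck binds.

```python
# Toy bottleneck model: exponential compute vs. fixed experiment throughput.
# Parameters are made-up placeholders for illustration only.

compute = 1.0
experiments_per_step = 8.0   # assumed fixed real-world experiment throughput
knowledge = 0.0

for t in range(15):
    compute *= 2                                   # exponential compute growth
    # New knowledge is capped by experiment throughput, no matter how much
    # compute is available to design and analyze the experiments.
    knowledge += min(compute, experiments_per_step)
    print(f"t={t:2d}  compute={compute:8.0f}  knowledge={knowledge:6.1f}")
```

For the first few steps compute is the binding constraint and knowledge grows quickly; after that, the fixed experiment throughput takes over and knowledge grows only linearly, however fast compute keeps doubling.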
In general, I think I focused too much on the robotics examples when trying to illustrate that generating new knowledge is difficult and takes time. The same of course applies to any other kind of experiment an AI would have to run, such as generating knowledge about human psychology by experimenting on us, testing new training algorithms, or performing quantum-physics experiments for chip research.