Christoph Hartmann

Thanks for taking the time to formalize this a bit more. I think you're capturing my ideas quite well, and indeed I can't think of ways this would scale exponentially. Your point about "let's remove the human bottleneck" goes in the direction of the last simulation paragraph, where I suggest that you could parallelize knowledge acquisition. But as I argue there, I think it's unrealistic for that to scale exponentially.

In general, I think I focused too much on the robotics examples when trying to illustrate that generating new knowledge is slow and difficult. The same of course applies to any other kind of experiment an AI would have to run, such as generating knowledge about human psychology by experimenting on us, testing new training algorithms, or performing quantum physics experiments for chip research.

Hi Harrison, thanks for stating what I guess a few people are thinking - it is a bit of a clickbait title. I do think, though, that non-exponential growth is much more likely than exponential growth, simply because an exponential takeoff would require that no constraint on growth ever binds, whereas it's enough for a single constraint to kick in (maybe even one I didn't consider here) to stop exponential growth.
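To make that concrete, here's a toy sketch in Python (purely illustrative numbers, not a forecast): the same growth process run with and without a single capacity constraint. One binding constraint is enough to turn the exponential curve into an S-curve that flattens out.

```python
# Toy illustration, not a forecast: identical growth rate, with and without
# a single capacity constraint. The constants (rate, capacity) are made up.

def grow(x, rate, capacity=None):
    """One growth step; `capacity` is the single optional constraint."""
    if capacity is None:
        return x * (1 + rate)                    # unconstrained: exponential
    return x + rate * x * (1 - x / capacity)     # logistic: slows near the cap

x_exp, x_log = 1.0, 1.0
for _ in range(60):
    x_exp = grow(x_exp, rate=0.2)
    x_log = grow(x_log, rate=0.2, capacity=100.0)

print(f"after 60 steps: unconstrained {x_exp:,.0f}, one constraint {x_log:.1f}")
# The unconstrained run ends up around 56,000x the start; the constrained run
# saturates near the capacity of 100.
```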

I'd be curious about the methodological overhang, though. Are you aware of any posts or articles discussing this further?

Thanks for this, Thomas! See my answer to titotal addressing the algorithm efficiency question in general. Note that if we follow the hand-wavy "evolutionary transfer learning" argument, that would weaken the existence proof for the sample efficiency of the human brain - the brain isn't a general-purpose tabula rasa. But I do agree with you that we'll probably find a better algorithm that doesn't scale this badly with data and can extract knowledge more efficiently.

However, I'd argue that, as before, even if we find a much, much more efficient algorithm, we are ultimately limited by the growth of knowledge and the predictability of our world. Epoch estimates that we'll run out of high-quality text data next year, and I would argue that text is the most knowledge-dense data we have. Even with more efficient algorithms, once an AI has learnt all this text it will have to start generating new knowledge itself, which is much more cumbersome than "just" absorbing existing knowledge.
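As a rough sketch of why data becomes the binding constraint, assume a Chinchilla-style parametric loss L(N, D) = E + A/N^α + B/D^β (the functional form from Hoffmann et al. 2022; the constants below are purely illustrative, not the paper's fitted values). With the token budget D held fixed, scaling up parameters N (and hence compute) stops helping:

```python
# Chinchilla-style loss with illustrative constants (not fitted values):
# L(N, D) = E + A / N**alpha + B / D**beta
E, A, B, alpha, beta = 1.7, 400.0, 400.0, 0.34, 0.28

def loss(params_n, tokens_d):
    return E + A / params_n**alpha + B / tokens_d**beta

tokens = 10e12  # token budget held fixed at an assumed ~10T high-quality tokens
for params in (1e9, 1e11, 1e13, 1e15):
    print(f"N = {params:.0e}: L = {loss(params, tokens):.3f}")

# The output flattens out: the loss is pinned above E + B / tokens**beta,
# the irreducible floor set by the fixed dataset, no matter how large N gets.
```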

Hey Steve, thanks for those thoughts! I don't think I'm more qualified than the Wikipedia community to argue for or against Moore's law, which is why I just quoted them - so unfortunately I can't offer more thoughts on that.

But even if Moore's law continued forever, I think the data argument would kick in. If we have infinite compute but limited information to learn from, we still end up with a limited model. Applying infinite compute to the MNIST dataset will give you a model that isn't much better than the latest Kaggle competitor on that dataset.

So we end up again at the more hand-wavy arguments about limits to the growth of knowledge and the predictability of our world in general. I'd be curious where I'm losing you there.

Thanks for your thoughts! When writing this up I also felt that the algorithm argument was the weakest one, so let me answer from two perspectives:

  • From the room to invent new algorithms: Convolutional neural networks have been around since the 80s, and we've been running them on GPUs for about 10 years. If there really were huge potential left, I'd be a bit surprised that we didn't find it in the last 40 years - we certainly had incentives, because hardware was slow and people had to optimize - but of course you never know. I tried to find a paper reviewing efficiency improvements of non-negative matrix factorization over time, which I think could be a fun guide, but couldn't find one.
  • From the brain perspective: Yes, it's puzzling that the brain can do all this on 12 watts of power while OpenAI uses server farms that consume much, much more than that, so somewhere there must be huge efficiency gains to be had. Note that this is mostly on the training side - "evaluating" a network is pretty efficient as far as I know. For training, there could be different reasons:
    • Transfer learning: Maybe the "computation of evolution" just "pre-programmed" our brain, similar to how we use transfer learning: it's already pretty close to where we want it, and we just need to fine-tune. Transfer learning on neural networks is already pretty cheap today (see the sketch after this list). One argument supporting this is that many animals are perfectly functional from day 1 of their lives without much learning - of course not at the same level of intelligence, but still.
    • Hardware: The brain doesn't run on silicon. We use a very, very abstracted version of our brain, and there is much more going on biologically. Some people argue that a lot of computation already happens in the dendrites; maybe the morphology of neurons affects computation; maybe the specific nonlinearity applied by the neurons is more relevant than we think. One way to try to address this would be to build chips that are more similar to the brain ("neuromorphic"), but I haven't seen much progress there.
    • Architecture: The brain isn't a CNN. That might be a decent approximation for our sensory cortices, but even there it's not the same. The brain is very recurrent, not feed-forward, and it can't send signals back through its synapses, so it can't implement backpropagation. Maybe we're just using the wrong architecture, and if we find the right one things will go much faster. I did my PhD on something related to this and gave up, haha - but of course, I'm sure there are lots of things still to be discovered here.
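To illustrate how cheap the transfer-learning story in the first sub-point can be in practice, here's a minimal PyTorch sketch with toy layer sizes and dummy data (the specific dimensions are arbitrary): the pretrained backbone is frozen, and only a small task head is trained.

```python
# Minimal transfer-learning sketch (toy sizes, dummy data).
import torch
import torch.nn as nn

# Stand-in for a large pretrained backbone; in practice you'd load real
# pretrained weights (e.g. a torchvision model).
backbone = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 512), nn.ReLU())
for p in backbone.parameters():
    p.requires_grad = False            # the "pre-programmed" part stays frozen

head = nn.Linear(512, 10)              # only this small head gets fine-tuned
optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 1, 28, 28)         # dummy batch standing in for real data
y = torch.randint(0, 10, (32,))

loss = loss_fn(head(backbone(x)), y)
loss.backward()                        # gradients only reach the head
optimizer.step()

trainable = sum(p.numel() for p in head.parameters())
total = trainable + sum(p.numel() for p in backbone.parameters())
print(f"trainable parameters: {trainable:,} of {total:,}")
```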