Matrice Jacobine

Student in fundamental and applied mathematics
665 karma · Joined · Pursuing a graduate degree (e.g. Master's) · France

Bio

Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist

Comments
108

Topic contributions
1

Yes, those quotes do refer to the need for a model to develop heterogeneous skills based on private information and to adapt to changing situations in real life with very little data. I don't see your problem.

How is "heterogeneous skills" based on private information and "adapting to changing situation in real time with very little data" not what continual learning mean?

1) physical limits to scaling, 2) the inability to learn from video data, 3) the lack of abundant human examples for most human skills, 4) data inefficiency, and 5) poor generalization

All of those except 2) boil down to "foundation models have to learn once and for all through training on collected datasets instead of continually learning for each instantiation". See also AGI's Last Bottlenecks.

But the environment (and animal welfare) is still worse off in post-industrial societies than in pre-industrial societies, so you cannot credibly claim that going from pre-industrial to industrial (which is what we generally mean by global health and development) is an environmental cause (or an animal welfare cause). It's unclear whether helping societies go from industrial to post-industrial is tractable, but that would typically fall under progress studies, not global health and development.

I don't think Karpathy would describe his view as involving any sort of discontinuity in AI development. If anything, his view is the most central no-discontinuity, straight-lines-on-graphs view (no intelligence explosion accelerating the trends, no winter decelerating them). And if you think the mean date for AGI is 2035, then it would take extreme confidence (on the order of a variance of less than a year) to claim AGI is less than 0.1% likely by 2032!
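To unpack that: suppose, purely for illustration, that the AGI arrival date is normally distributed with mean 2035 and standard deviation $\sigma$. Then

$$P(\text{AGI by } 2032) = \Phi\!\left(\frac{2032 - 2035}{\sigma}\right) < 0.001 \iff \sigma < \frac{3}{z_{0.999}} \approx \frac{3}{3.09} \approx 0.97 \text{ years},$$

i.e. claiming less than 0.1% by 2032 requires a standard deviation of under a year around the 2035 mean.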

It's called online learning in AI 2027 and human-like long-term memory in IABIED.

I think it is bad faith to pretend that those who argue for near-term AGI have no idea about any of this when all the well-known cases for near-term AGI (including both AI 2027 and IABIED) name continual learning as the major breakthrough required.

I am skeptical that socioeconomic development increases animal welfare at any point. This is a bit like saying that there weren't any environmentalists before the Industrial Revolution and there have been a lot of environmentalists since, so clearly this whole industry thing must be really good for the environment.

This seems clearly wrong. If you believe that it would take a literal Manhattan Project for AI safety ($26 billion, adjusted for inflation) to reduce existential risk by a mere 1%, and you only care about the current 8 billion people dying, then you can save a present person's life for $325 in expectation, swamping any GiveWell-recommended charity.
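Spelled out, assuming the $26 billion buys exactly a one-percentage-point reduction in risk to 8 billion present people:

$$\frac{\$26 \times 10^9}{0.01 \times 8 \times 10^9 \text{ lives}} = \frac{\$26 \times 10^9}{8 \times 10^7 \text{ lives}} = \$325 \text{ per expected life saved.}$$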

The press on personalities like SBF and Rob Granieri certainly didn’t help

(As a data point, I had no idea who Rob Granieri was before reading this post, and I'm probably not the only one, because he doesn't seem to have ever been mentioned here before.)
