I found this interview with Francois Chollet fascinating, and would be curious to hear what other people make of it.
I think it is impressive that he's managed to devise a benchmark of tasks which are mostly pretty easy for most humans, but which LLMs have so far not been able to make much progress with.
If you don't have time to watch the video, then I think these tweets of his sum up his views quite well:
The point of general intelligence is to make it possible to deal with novelty and uncertainty, which is what our lives are made of. Intelligence is the ability to improvise and adapt in the face of situations you weren't prepared for (either by your evolutionary history or by your past experience) -- to efficiently acquire skills at novel tasks, on the fly.
Meanwhile what the AI of today does is to combine extremely weak generalization power (i.e. ability to deal with novelty and uncertainty) with a dense sampling of everything it might ever be faced with -- essentially, use brute-force scale to *by-pass* the problem of intelligence entirely.
If intelligence is the ability to deal with what you weren't prepared for, then the modern AI strategy is to prepare for everything, so you never need intelligence. This is of course a terrible strategy, because it is impossible to prepare for everything. The problem isn't just scale, the problem is the fact that the real world isn't sampled from a static distribution -- it is ever changing and ever novel.
If his take on things is correct, I am not sure exactly what this implies for AGI timelines. Maybe it would mean that AGI is much further off than we think, because the impressive feats of LLMs that have led us to think it might be close have been overinterpreted. But it seems like it could also mean that AGI will arrive much sooner? Maybe we already have more than enough compute and training data for superhuman AGI, and we are just waiting on that one clever idea. Maybe that could happen tomorrow?
The ARC Prize website takes this definitional stance on AGI: "Consensus but wrong: AGI is a system that can automate the majority of economically valuable work. Correct: AGI is a system that can efficiently acquire new skills and solve open-ended problems."
Something like the former definition, which is central to reports like Tom Davidson's CCF-based takeoff speeds report for Open Phil, basically drops out of (the first half of the reasoning behind) the big-picture view summarized in Holden Karnofsky's most important century series. To quote him, the long-run future would be radically unfamiliar and could come much faster than we think, simply because standard economic growth models imply that any technology that could fully automate innovation would cause an "economic singularity"; one such technology could be what Holden calls PASTA ("Process for Automating Scientific and Technological Advancement"), which he elaborates on in What kind of AI?
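To make the growth-model step concrete, here is a minimal worked sketch of the kind of argument those reports gesture at. The functional forms below are my own toy illustration (a simple semi-endogenous growth model), not the specific model in Davidson's report or Karnofsky's posts.

```latex
% Toy illustration (my own assumption, not Davidson's exact model):
% output is Y = A L, and the idea stock A grows with research effort R.
\[
\dot{A} = \delta A^{\phi} R, \qquad \phi > 0 \ \text{(ideas help produce ideas)}.
\]
% With fixed human research effort (R constant), growth in A is at most
% roughly exponential. If innovation is fully automated, research effort
% can itself be bought with output, R = s Y = s A L, giving
\[
\dot{A} = \delta s L \, A^{1+\phi}.
\]
% Any equation of the form dA/dt = c A^k with k > 1 blows up in finite time:
\[
A(t) = \left[ A_0^{\,1-k} - c (k-1) t \right]^{\frac{1}{1-k}},
\]
% which diverges as t approaches A_0^{1-k} / (c (k-1)). That finite-time
% blow-up is what the "economic singularity" refers to: once innovation is
% an output you can reinvest in more innovation, growth is hyperbolic
% rather than exponential (absent other bottlenecks).
```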
This is why I think it's basically justified to care about economy-growing automation of innovation as "the right working definition" from the x-risk reduction perspective for a funder like Open Phil in particular, which isn't what an AI researcher like Francois Chollet cares about. Which is fine; different folks care about different things. But calling the first definition "wrong" feels like the sort of mistake you make when you haven't at least made a good-faith effort to do what Scott suggested here with the first definition.
Note also that PASTA is definitionally a lot looser than the AGI defined in Metaculus' When will the first general AI system be devised, tested, and publicly announced? (2031 as of time of writing), which requires the sort of properties Chollet would probably approve of (a single unified software system, not a cobbled-together set of task-specialized subsystems). Yet if the PASTA collective functionally completes the innovation -> resources -> PASTA -> innovation -> ... economic growth loop, that would already be x-risk relevant. The argument would then need to be "something like Chollet's / Metaculus' definition is necessary to complete the growth loop", which would be a testable hypothesis.
This is a really interesting way of looking at the issue!
But is PASTA really equivalent to "a system that can automate the majority of economically valuable work"? If it is specifically supposed to mean the automation of innovation, then that sounds closer to Chollet's definition of AGI to me: "a system that can efficiently acquire new skills and solve open-ended problems".