I found this interview with Francois Chollet fascinating, and would be curious to hear what other people make of it.

I think it is impressive that he's managed to devise a benchmark of tasks which are mostly pretty easy for most humans, but which LLMs have so far not been able to make much progress with.
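
For anyone who hasn't seen the benchmark: each ARC task is a small JSON file containing a few example input/output grids (integers 0-9 standing for colours) plus a test input, and the solver has to infer the transformation from the examples and produce the test output. A minimal illustration of the structure (the grids here are invented for the example, not taken from the benchmark):

```python
# Illustrative ARC-style task in the dataset's JSON structure.
# The grids are made up for this example; real tasks are typically larger,
# and each task uses a different, previously unseen transformation.
task = {
    "train": [
        {"input":  [[0, 1], [1, 0]],
         "output": [[1, 0], [0, 1]]},   # rule: swap the two colours present
        {"input":  [[2, 0], [0, 2]],
         "output": [[0, 2], [2, 0]]},
    ],
    "test": [
        {"input": [[3, 0], [0, 3]]}     # expected answer: [[0, 3], [3, 0]]
    ],
}
```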

If you don't have time to watch the video, then I think these tweets of his sum up his views quite well:

The point of general intelligence is to make it possible to deal with novelty and uncertainty, which is what our lives are made of. Intelligence is the ability to improvise and adapt in the face of situations you weren't prepared for (either by your evolutionary history or by your past experience) -- to efficiently acquire skills at novel tasks, on the fly.

Meanwhile what the AI of today does is to combine extremely weak generalization power (i.e. ability to deal with novelty and uncertainty) with a dense sampling of everything it might ever be faced with -- essentially, use brute-force scale to *by-pass* the problem of intelligence entirely.

If intelligence is the ability to deal with what you weren't prepared for, then the modern AI strategy is to prepare for everything, so you never need intelligence. This is of course a terrible strategy, because it is impossible to prepare for everything. The problem isn't just scale, the problem is the fact that the real world isn't sampled from a static distribution -- it is ever changing and ever novel.

If his take on things is correct, I am not sure exactly what this implies for AGI timelines. Maybe it would mean that AGI is much further off than we think, because the impressive feats of LLMs that have led us to think it might be close have been overinterpreted. But it seems like it could also mean that AGI will arrive much sooner? Maybe we already have more than enough compute and training data for superhuman AGI, and we are just waiting on that one clever idea. Maybe that could happen tomorrow?


Thanks for sharing, toby. I had just finished listening to the podcast and was about to share it here, but it turns out you beat me to it! I think I'll do a post going into the interview (Zvi-style[1]), bringing up the most interesting points and cruxes and why the ARC Challenge matters. To quickly give my thoughts on some of the things you bring up:

  • The ARC Challenge is the best benchmark out there imo, and it's telling that labs don't release their scores on it. Chollet says in the interview that they test on it, but because they score badly, they don't release the results.
  • On timelines, Chollet says that OpenAI's success led the field to 1) stop sharing frontier research and 2) focus on LLMs alone, thereby setting back timelines to AGI. I'd also suggest that the 'AGI in 2-3 years' claims don't make much sense to me unless you take an LLMs+scaling maximalist perspective.

And to respond to some other comments here:

  • To huw, I think the AI Safety field is mixed. The original perspective was that ASI would be like an AIXI model, but the success of transformers has changed that. Existing models and their descendants could be economically damaging, but taking away the existential risk undermines the astronomical value of AI Safety from an EA perspective.
  • To OCB, I think we just disagree about how far away LLMs are from this. I think less that ARC is 'neat' and more that it shows a critical failure mode in the LLM paradigm. In the interview Chollet argues that the 'scaffolding' is actually the hard part of reasoning, and I agree with him.
  • To Mo, I guess Chollet's perspective would be that you need 'open-endedness' to be able to automate many/most kinds of work? A big crux here, I think, is whether 'PASTA' is possible at all, or at least whether it can be used as a way to bootstrap everything else. I'm more of the perspective that science is probably the last thing that can possibly be automated, but that might depend on your definition of science.
    • I'm quite sceptical of Davidson's work, and probably Karnofsky's, but I'll need to revisit them in detail to treat them fairly. 
    • The Metaculus AGI markets are, to me, crazy low. In both cases the resolution criteria are somewhat LLM-unfriendly; it seems that people are going more off 'vibes' than reading the fine print. Right now, for instance, any OpenAI model would be easily discovered in a proper imitation game by asking it to do something that violates the terms of service.

I'll go into more depth in my follow-up post, and I'll edit this bit of my comment with a link once I'm done.

  1. ^

    In style only, I make no claims as to quality

A big crux here, I think, is whether 'PASTA' is possible at all, or at least whether it can be used as a way to bootstrap everything else.

Do you mean “possible at all using LLM technology” or do you mean “possible at all using any possible AI algorithm that will ever be invented”?

As for the latter, I think (or at least, I hope!) that there’s wide consensus that whatever human brains do (individually and collectively), it is possible in principle for algorithms-running-on-chips to do those same things too. Brains are not magic, right?

As for the latter, I think (or at least, I hope!) that there’s wide consensus that whatever human brains do (individually and collectively), it is possible in principle for algorithms-running-on-chips to do those same things too. Brains are not magic, right?

I think this is probably true, but I wouldn't be 100% certain about it. Brains may not be magic, but they are also very different physical entities from silicon chips, so there is no guarantee that the function of one could be efficiently emulated by the other. There could be some crucial aspect of the mind relying on a physical process which would be computationally infeasible to simulate using binary silicon transistors.

If there are any neuroscientists who have investigated this I would be interested!

OK yeah, “AGI is possible on chips but only if you have 1e100 of them or whatever” is certainly a conceivable possibility. :) For example, here’s me responding to someone arguing along those lines.

If there are any neuroscientists who have investigated this I would be interested!

There is never a neuroscience consensus, but fwiw I fancy myself a neuroscientist and have some thoughts at: Thoughts on hardware / compute requirements for AGI.

One of various points I bring up is that:

  • (1) if you look at how human brains, say, go to the moon, or invent quantum mechanics, and you think about what algorithms could underlie that, then you would start talking about algorithms that entail building generative models, and editing them, and querying them, and searching through them, and composing them, blah blah.
  • (2) if you look at a biological brain’s low-level affordances, it’s a bunch of things related to somatic spikes and dendritic spikes and protein cascades and releasing and detecting neuropeptides etc.
  • (3) if you look at silicon chip’s low-level affordances, it’s a bunch of things related to switching transistors and currents going down wires and charging up capacitors and so on.

My view is: implementing (1) via (3) would involve a lot of inefficient bottlenecks where there’s no low-level affordance that’s a good match to the algorithmic operation we want … but the same is true of implementing (1) via (2). Indeed, I think the human brain does what it does via some atrociously inefficient workarounds to the limitations of biological neurons, limitations which would not be applicable to silicon chips.

By contrast, many people thinking about this problem are often thinking about “how hard is it to use (3) to precisely emulate (2)?”, rather than “what’s the comparison between (1)←(3) versus (1)←(2)?”. (If you’re still not following, see my discussion here—search for “transistor-by-transistor simulation of a pocket calculator microcontroller chip”.)

Another thing is that, if you look at what a single consumer GPU can do when it runs an LLM or diffusion model… well it’s not doing human-level AGI, but it’s sure doing something, and I think it’s a sound intuition (albeit hard to formalize) to say “well it kinda seems implausible that the brain is doing something that’s >1000× harder to calculate than that”.
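
To put very rough numbers on that intuition (these are commonly cited ballpark figures, not anything from the interview, and each could easily be off by an order of magnitude or more):

```python
# Back-of-envelope only; both numbers below are assumed ballpark estimates.
gpu_flops = 1e14  # rough sustained throughput of one consumer GPU running an LLM

# Published estimates of the brain's "equivalent compute" span several orders
# of magnitude; compare a few points in that range against the single GPU.
for brain_flops in (1e13, 1e15, 1e17):
    ratio = brain_flops / gpu_flops
    print(f"brain ~ {brain_flops:.0e} FLOP/s  ->  ratio to one GPU ~ {ratio:g}x")
```

On these (assumed) numbers the ">1000x" threshold is only reached at the very top of the published range, which is one way of framing where the disagreement lies.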

Thanks for those links, this is an interesting topic I may look into more in the future. 

Another thing is that, if you look at what a single consumer GPU can do when it runs an LLM or diffusion model… well it’s not doing human-level AGI, but it’s sure doing something, and I think it’s a sound intuition (albeit hard to formalize) to say “well it kinda seems implausible that the brain is doing something that’s >1000× harder to calculate than that”.

It doesn't seem that implausible to me. In general I find the computational power required for different tasks (such as what I do in computational physics) frequently varies by many orders of magnitude. LLMs get to their level of performance by sifting through all the data on the internet, something we can't do, and yet still perform worse than a regular human on many tasks, so clearly there's a lot of extra something going on here. It actually seems kind of likely to me that what the brain is doing is more than 3 orders of magnitude more difficult.

I don't know enough to be confident on any of this, but if AGI turns out to be impossible on silicon chips with Earth's resources, I would be surprised but not totally shocked.

Yeah, I definitely don't mean 'brains are magic'. Humans are generally intelligent by any meaningful definition of the words, so we have an existence proof that general intelligence can be instantiated in some form.

I'm more sceptical of thinking science can be 'automated' though - I think progressing scientific understanding of the world is in many ways quite a creative and open-ended endeavour. It requires forming beliefs about the world, updating them due to evidence, and sometimes making radical new shifts. It's essentially the epistemological frame problem, and I think we're way off a solution there.

I think I have a similar big crux with Aschenbrenner when he says things like "automating AI research is all it takes" - like, I think I disagree with that anyway, but automating AI research is really, really hard! It might be 'all it takes' because that problem is already AGI-complete!

I’m confused what you’re trying to say… Supposing we do in fact invent AGI someday, do you think this AGI won’t be able to do science? Or that it will be able to do science, but that wouldn’t count as “automating science”?

Or maybe when you said “whether 'PASTA' is possible at all”, you meant “whether 'PASTA' is possible at all via future LLMs”?

Maybe you’re assuming that everyone here has a shared assumption that we’re just talking about LLMs, and that if someone says “AI will never do X” they obviously mean “LLMs will never do X”? If so, I think that’s wrong (or at least I hope it’s wrong), and I think we should be more careful with our terminology. AI is broader than LLMs. …Well maybe Aschenbrenner is thinking that way, but I bet that if you were to ask a typical senior person in AI x-risk (e.g. Karnofsky) whether it’s possible that there will be some big AI paradigm shift (away from LLMs) between now and TAI, they would say “Well yeah duh of course that’s possible,” and then they would say that they would still absolutely want to talk about and prepare for TAI, in whatever algorithmic form it might take.

Apologies for not being clear! I'll try and be a bit more clear here, but there's probably a lot of inferential distance here and we're covering some quite deep topics:

Supposing we do in fact invent AGI someday, do you think this AGI won’t be able to do science? Or that it will be able to do science, but that wouldn’t count as “automating science”?

Or maybe when you said “whether 'PASTA' is possible at all”, you meant “whether 'PASTA' is possible at all via future LLMs”?

So on the first section, I'm going for the latter and taking issue with the term 'automation', which I think speaks to a mindless, automatic process of achieving some output. But if digital functionalism were true, and we successfully made a digital emulation of a human who contributed to scientific research, I wouldn't call that 'automating science'; instead we would have created a being that can do science. That being would be creative and agentic, with the ability to formulate its own novel ideas and hypotheses about the world. It'd be limited by its ability to sample from the world, design experiments, practice good epistemology, wait for physical results, etc. It might be the case that some scientific research happens quickly, and then subsequent breakthroughs happen more slowly, etc.

My opinions on this are also highly influenced by the works of Deutsch and Popper, who essentially argue that the growth of knowledge cannot be predicted; since science is (in some sense) the stock of human knowledge, and what cannot be predicted cannot be automated, scientific 'automation' is in some sense impossible.

Maybe you’re assuming that everyone here has a shared assumption that we’re just talking about LLMs...but I bet that if you were to ask a typical senior person in AI x-risk (e.g. Karnofsky) whether it’s possible that there will be some big AI paradigm shift (away from LLMs) between now and TAI, they would say “Well yeah duh of course that’s possible,” and then they would say that they would still absolutely want to talk about and prepare for TAI, in whatever algorithmic form it might take.

Agreed, AI systems are broader than LLMs, and maybe I was being a bit loose with language. On the whole though, I think much of the case by proponents for the importance of working on AI Safety does assume that the current paradigm + scale is all you need, or rests on work that assumes it. For instance, Davidson's Compute-Centric Framework model for OpenPhil states right on the opening page:

In this framework, AGI is developed by improving and scaling up approaches within the current ML paradigm, not by discovering new algorithmic paradigms. 

And I get off the bus with this approach immediately because I don't think that's plausible.

As I said in my original comment, I'm working on a full post on the discussion between Chollet and Dwarkesh, which will hopefully make the AGI-sceptical position I'm coming from a bit more clear. If you end up reading it, I'd be really interested in your thoughts! :)

On the whole though, I think much of the case by proponents for the importance of working on AI Safety does assume that the current paradigm + scale is all you need, or rests on work that assumes it.

Yeah this is more true than I would like. I try to push back on it where possible, e.g. my post AI doom from an LLM-plateau-ist perspective.

There were, however, plenty of people who were loudly arguing that it was important to work on AI x-risk before “the current paradigm” was much of a thing (or in some cases long before “the current paradigm” existed at all), and I think their arguments were sound at the time and remain sound today. (E.g. Alan Turing, Norbert Wiener, Yudkowsky, Bostrom, Stuart Russell, Tegmark…) (OpenPhil seems to have started working seriously on AI in 2016, which was 3 years before GPT-2.)

Thanks for your interesting thoughts on this!

On the timelines question, I know Chollet argues AGI is further off than a lot of people think, and maybe his views do imply that in expectation, but it also seems to me like his views introduce higher variance into the prediction, and so would also allow for the possibility of much more rapid AGI advancement than the conventional narrative does.

If you think we just need to scale LLMs to get to AGI, then you expect things to happen fast, but probably not that fast. Progress is limited by compute and by data availability.

But if there is some crucial set of ideas yet to be discovered, then that's something that could change extremely quickly. We're potentially just waiting for someone to have a eureka moment. And we'd be much less certain what exactly was possible with current hardware and data once that moment happens. Maybe we could have superhuman AGI almost overnight?

I do think this set of benchmarks is neat.

My best guess, however, is that we're within spitting distance[1] of scaffolded LLMs being able to solve these. (Unscaffolded LLMs would I think be way off.)
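
Concretely, one natural shape for that scaffolding (this is just an illustrative sketch, not a description of any existing system) is a propose-and-verify loop: have the model generate candidate transformation programs from the training pairs, keep only the candidates that reproduce every training pair exactly, and apply a surviving candidate to the test input. In the sketch below, `propose_programs` is a stand-in for whatever LLM sampling strategy you like, not a real API:

```python
from typing import Callable, List, Optional

Grid = List[List[int]]  # ARC grids are 2D lists of small integers

def solve_arc_task(task: dict,
                   propose_programs: Callable[[dict], List[Callable[[Grid], Grid]]]
                   ) -> Optional[Grid]:
    """Propose-and-verify scaffolding sketch (illustrative only).

    `propose_programs(task)` is assumed to return candidate Grid -> Grid
    functions, e.g. generated by an LLM from the training pairs; it is a
    placeholder for the hard part, not a real library call.
    """
    for program in propose_programs(task):
        try:
            ok = all(program(pair["input"]) == pair["output"]
                     for pair in task["train"])
        except Exception:
            ok = False  # discard candidates that crash on the examples
        if ok:
            return program(task["test"][0]["input"])
    return None  # no candidate reproduced all the training pairs
```

The verification half is trivial; all of the difficulty hides inside the proposer, i.e. getting the model to come up with the right program for a transformation it has never seen before.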

  1. ^

    What I really mean by this is something like "gee it really seems like you should be able to do this already with good enough scaffolding". Then my actual timeline for that to turn into a real system that someone's built which has done it is uncertain, and plausible values range from "it's already happened" to "it takes two or three more years".

The ARC Prize website takes this definitional stance on AGI:

Consensus but wrong:

AGI is a system that can automate the majority of economically valuable work.

Correct:

AGI is a system that can efficiently acquire new skills and solve open-ended problems.

Something like the former definition, central to reports like Tom Davidson's CCF-based takeoff speeds for Open Phil, basically drops out of (the first half of the reasoning behind) the big-picture view summarized in Holden Karnofsky's most important century series: to quote him, the long-run future would be radically unfamiliar and could come much faster than we think, simply because standard economic growth models imply that any technology that could fully automate innovation would cause an "economic singularity"; one such technology could be what Holden calls PASTA ("Process for Automating Scientific and Technological Advancement"). In What kind of AI? he elaborates (emphasis mine):

I mean PASTA to refer to either a single system or a collection of systems that can collectively do this sort of automation. ...

By talking about PASTA, I'm partly trying to get rid of some unnecessary baggage in the debate over "artificial general intelligence." I don't think we need artificial general intelligence in order for this century to be the most important in history. Something narrower - as PASTA might be - would be plenty for that. ...

I don't particularly expect all of this to happen as part of a single, deliberate development process. Over time, I expect different AI systems to be used for different and increasingly broad tasks, including and especially tasks that help complement human activities on scientific and technological advancement. There could be many different types of AI systems, each with its own revenue model and feedback loop, and their collective abilities could grow to the point where at some point, some set of them is able to do everything (with respect to scientific and technological advancement) that formerly required a human.

This is why I think it's basically justified to care about economy-growing automation of innovation as "the right working definition" from the x-risk reduction perspective for a funder like Open Phil in particular, which isn't what an AI researcher like Francois Chollet cares about. Which is fine, different folks care about different things. But calling the first definition "wrong" feels like the sort of mistake you make when you haven't at least made a good-faith effort to do what Scott suggested here with the first definition:

... if you're looking into something controversial, you might have to just read the biased sources on both sides, then try to reconcile them.

Success often feels like realizing that a topic you thought would have one clear answer actually has a million different answers depending on how you ask the question. You start with something like "did the economy do better or worse this year?", you find that it's actually a thousand different questions like "did unemployment get better or worse this year?" vs. "did the stock market get better or worse this year?" and end up with things even more complicated like "did employment as measured in percentage of job-seekers finding a job within six months get better" vs. "did employment as measured in total percent of workforce working get better?". Then finally once you've disentangled all that and realized that the people saying "employment is getting better" or "employment is getting worse" are using statistics about subtly different things and talking past each other, you use all of the specific things you've discovered to reconstruct a picture of whether, in the ways important to you, the economy really is getting better or worse.

Note also that PASTA is a lot looser definitionally than the AGI defined in Metaculus' When will the first general AI system be devised, tested, and publicly announced? (2031 as of the time of writing), which requires the sort of properties Chollet would probably approve of (a single unified software system, not a cobbled-together set of task-specialized subsystems), yet if the PASTA collective functionally completes the innovation -> resources -> PASTA -> innovation -> ... economic growth loop, that would already be x-risk relevant. The argument would then need to be "something like Chollet's / Metaculus' definition is necessary to complete the growth loop", which would be a testable hypothesis.
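
To make the loop concrete, here is a deliberately toy discrete-time sketch (my own illustration with arbitrary parameters, not Davidson's CCF model or anything from Karnofsky's series): a slice of each period's output is reinvested in automated researchers, more automated researchers speed up productivity growth, and the feedback makes the growth rate itself accelerate.

```python
# Toy illustration of the innovation -> resources -> innovation feedback loop.
# All parameter values are arbitrary; this is not Davidson's model.
output, researchers, productivity = 1.0, 1.0, 1.0
for year in range(1, 11):
    researchers += 0.3 * output             # reinvest part of output in automated researchers
    productivity *= 1 + 0.1 * researchers   # more researchers -> faster productivity growth
    output = productivity                   # output tracks productivity (other inputs held fixed)
    print(f"year {year:2d}: output = {output:6.1f}")
```

The doubling time of output shrinks every cycle, which is the "economic singularity" behaviour the growth-model argument points at; nothing in the loop cares whether the automated researchers meet Chollet's definition of general intelligence, only whether the loop closes.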

This is a really interesting way of looking at the issue!

But is PASTA really equivalent to "a system that can automate the majority of economically valuable work"? If it specifically is supposed to mean the automation of innovation, then that sounds closer to Chollet's definition of AGI to me: "a system that can efficiently acquire new skills and solve open-ended problems"

I was under the impression that most people in AI safety felt this way—that transformers (or diffusion models) weren't going to be the major underpinning of AGI. As has been noted a lot, they're really good at achieving human-level performance in most tasks, particularly with more data & training, but they can't generalise well and are hence unlikely to be the 'G' in AGI. Rather:

  1. Existing models will be economically devastating for large sections of the economy anyway
  2. The rate of progress across multiple domains of AI is concerning, and the increased funding to AI more generally will flow back to new development domains
  3. Even if neither of these things are true, we still want to advocate for increased controls around the development of future architectures

But please forgive me if I had the wrong impression here.

I was under the impression that most people in AI safety felt this way—that transformers (or diffusion models) weren't going to be the major underpinning of AGI.

I haven’t done any surveys or anything, but that seems very inaccurate to me. I would have guessed that >90% of “people in AI safety” are either strongly expecting that transformers (or diffusion models) will be the major underpinning of AGI, or at least they’re acting as if they strongly expect that. (I’m including LLMs + scaffolding and so on in this category.)

For example: people seem very happy to make guesses about what tasks the first AGIs will be better and worse at doing based on current LLM capabilities; and people seem very happy to make guesses about how much compute the first AGIs will require based on current LLM compute requirements; and people seem very happy to make guesses about which companies are likely to develop AGIs based on which companies are best at training LLMs today; and people seem very happy to make guesses about AGI UIs based on the particular LLM interface of “context window → output token”; etc. etc. This kind of thing happens constantly, and sometimes I feel like I’m the only one who even notices. It drives me nuts.

Is that just a kind of availability bias—in the 'marketplace of ideas' (scare quotes) they're competing against pure speculation about architecture & compute requirements, which is much harder to make estimates around & generally feels less concrete?

Yeah sure, here are two reasonable positions:

  • (A) “We should plan for the contingency where LLMs (or scaffolded LLMs etc.) scale to AGI, because this contingency is very likely what’s gonna happen.”
  • (B) “We should plan for the contingency where LLMs (or scaffolded LLMs etc.) scale to AGI, because this contingency is more tractable and urgent than the contingency where they don’t, and hence worth working on regardless of its exact probability.”

I think plenty of AI safety people are in (A), which is at least internally-consistent even if I happen to think they’re wrong. I also think there are lots of AI safety people who would say that they’re in (B) if pressed, but where they long ago lost track of the fact that that’s what they were doing and instead they’ve started treating the contingency as a definite expectation, and thus they say things that omit essential caveats, or are wrong or misleading in other ways. ¯\_(ツ)_/¯

I'm not sure how widespread that view is, but I think that it is likely mistaken. (And I guess the view is most likely linked to thinking about language models without scaffolding built on top of them?)

Moreover, from a safety perspective, I think it's pretty important that agents built via scaffolding on top of language models have a strong natural transparency, which may make this one of the most desirable possible regimes in which to obtain general intelligence.

(I wrote more about this here.)

Thanks for the post! My quick thoughts:

  • I'm not hugely worried about today's LLMs causing x-risk.
  • I do think they could cause catastrophic harm in the hands of bioterrorists, but that's about it.
  • I am going to basically shit my pants when an AI agent can:
    1. take a brief from me for a brand new TV,
    2. have it be delivered to my home on time, on spec and on budget,
    3. have also organised installation by a technician,
    4. all while I'm out of the loop after step 1.

Seems doable most of the time in the best future, but the failure rate will likely be high enough that people wouldn’t want to use it for a while.

The tasks thematically resemble Raven's Progressive Matrices (a non-verbal test of intelligence where a pattern is established and you have to apply it to a new exemplar); link here.

There's a big issue with semantics.

Most people would agree that an AI that automates ~99% of remote jobs is AGI, but Chollet disagrees because the AI might be "just memorizing".

The key thing is whether scaling will be enough to automate AI research, but unfortunately, they don't discuss this in the podcast.

I've played/designed a lot of induction puzzles, and I think that the thing Chollet dismissively calls "memorization" might actually be all the human brain is doing when we develop the capacity to solve them. If so, there's some possibility that the first real-world transformative AGI will be ineligible for the prize.
