
Rohin Shah

3753 karma · Joined May 2015

Bio

Hi, I'm Rohin Shah! I work as a Research Scientist on the technical AGI safety team at DeepMind. I completed my PhD at the Center for Human-Compatible AI at UC Berkeley, where I worked on building AI systems that can learn to assist a human user, even if they don't initially know what the user wants.

I'm particularly interested in big picture questions about artificial intelligence. What techniques will we use to build human-level AI systems? How will their deployment affect the world? What can we do to make this deployment go better? I write up summaries and thoughts about recent work tackling these questions in the Alignment Newsletter.

In the past, I ran the EA groups at UC Berkeley and the University of Washington.

http://rohinshah.com

Comments (437)

> Wait, you think the reason we can't do brain improvement is because we can't change the weights of individual neurons?
>
> That seems wrong to me. I think it's because we don't know how the neurons work.

Did you read the link to Cold Takes above? If so, where do you disagree with it?

(I agree that we'd be able to do even better if we knew how the neurons work.)

> Similarly I'd be surprised if you thought that beings as intelligent as humans could recursively improve NNs. Cos currently we can't do that, right?

Humans can improve NNs? That's what AI capabilities research is?

(It's not "recursive" improvement but I assume you don't care about the "recursive" part here.)

I think it's within the power of beings equally as intelligent as us (similarly, as mentioned above, I think recursive improvement in humans would accelerate if we had similar abilities).

I thought yes, but I'm a bit unhappy about that assumption (I forgot it was there). If you go by the intended spirit of the assumption (see the footnote) I'm probably on board, but it seems ripe for misinterpretation ("well if you had just deployed GPT-5 it really could have run an automated company, even though in practice we didn't do that because we were worried about safety and/or legal liability and/or we didn't know how to prompt it etc").

You could look at these older conversations. There's also Where I agree and disagree with Eliezer (see also my comment) though I suspect that won't be what you're looking for.

Mostly though I think you aren't going to get what you're looking for because it's a complicated question that doesn't have a simple answer.

(I think this regardless of whether you frame the question as "do we die?" or "do we live?"; if you think the case for doom is straightforward, I think you are mistaken. All the doom arguments I know of seem to me to establish plausibility, not near-certainty, though I'm not going to defend that here.)

> Would you be willing to put this in numerical form (% chance) as a rough expectation?

Idk, I don't really want to make claims about GPT-5 / GPT-6, since that depends on OpenAI's naming decisions. But I'm at < 5% (probably < 1%, but I'd want to think about it) on "the world will be transformed" (in the TAI sense) within the next 3 years.


First off, let me say that I'm not accusing you specifically of "hype", except inasmuch as I'm saying that for any AI-risk-worrier who has ever argued for shorter timelines (a class which includes me), if you know nothing else about that person, there's a decent chance their claims are partly "hype". Let me also say that I don't believe you are deliberately benefiting yourself at others' expense.

That being said, accusations of "hype" usually mean an expectation that the claims are overstated due to bias. I don't really see why it matters if the bias is survival motivated vs finance motivated vs status motivated. The point is that there is bias and so as an observer you should discount the claims somewhat (which is exactly how it was used in the original comment).

> what do you make of Connor Leahy's take that LLMs are basically "general cognition engines" and will scale to full AGI in a generation or two (and with the addition of various plugins etc to aid "System 2" type thinking, which are freely being offered by the AutoGPT crowd)?

Could happen, probably won't, though it depends what is meant by "a generation or two", and what is meant by "full AGI" (I'm thinking of a bar like transformative AI).

(I haven't listened to the podcast but have thought about this idea before. I do agree it's good to think of LLMs as general cognition engines, and that plugins / other similar approaches will be a big deal.)

> I don't yet understand why you believe that hardware scaling would come to grow at much higher rates than it has in the past.

If we assume innovations decline, then it is primarily because future AI and robots will be able to automate far more tasks than current AI and robots (and we will get them quickly, not slowly).

Imagine that currently technology A that automates area X gains capabilities at a rate of 5% per year, which ends up leading to a growth rate of 10% per year.

Imagine technology B that also aims to automate area X gains capabilities at a rate of 20% per year, but is currently behind technology A.

Generally, at the point when B exceeds A, I'd expect the growth rate of X-automating technologies to rise from 10% to >20% (though not necessarily immediately; it can take time to build the capacity for that growth).

For AI, the area X is "cognitive labor", technology A is "the current suite of productivity tools", and technology B is "AI".

For robots, the area X is "physical labor", technology A is "classical robotics", and technology B is "robotics based on foundation models".


That was just assuming hardware scaling, and it justifies an increase in some particular growth rates, but not a growth explosion. If you add in the software efficiency, then I think you are just straightforwardly generating lots of innovations (what else is leading to the improved software efficiency?), and that's how you get the growth explosion, at least until you run out of software efficiency improvements to make.

I don't disagree with any of the above (which is why I emphasized that I don't think the scaling argument is sufficient to justify a growth explosion). I'm confused why you think the rate of growth of robots is at all relevant, when (general-purpose) robotics seem mostly like a research technology right now. It feels kind of like looking at the current rate of growth of fusion plants as a prediction of the rate of growth of fusion plants after the point where fusion is cheaper than other sources of energy.

(If you were talking about the rate of growth of machines in general I'd find that more relevant.)

I am confused by your argument against scaling.

My understanding of the scale-up argument is:

  1. Currently humans are state-of-the-art at various tasks relevant to growth.
  2. We are bottlenecked on scaling up humans by a variety of things (e.g. it takes ~20 years to train up a new human, you can't invest money into the creation of new humans with the hope of getting a return on it, humans only work ~8 hours a day)
  3. At some point AI / robots will be able to match human performance at these tasks.
  4. AI / robots will not be bottlenecked on those things.

In some sense I agree with you that you have to see efficiency improvements, but the efficiency improvements are things like "you can create new skilled robots in days, compared to the previous SOTA of 20 years". So if you accept (3), then I think you are already accepting massive efficiency improvements.
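As a back-of-the-envelope illustration of how big those efficiency improvements are (the 20-year and 8-hour figures are from point (2) above; the robot-side numbers are my own hypothetical assumptions):

```python
# Rough comparison of how quickly a new skilled "worker" can be stood up.
# Human-side numbers are from the point above; robot-side numbers are assumptions
# for illustration, not claims about current robotics.

human_training_years = 20      # roughly how long it takes to train up a new human
robot_build_days = 30          # assumption: a new skilled robot can be built/copied in ~a month
human_hours_per_day = 8        # humans work ~8 hours a day
robot_hours_per_day = 24       # robots don't need sleep or weekends

creation_speedup = human_training_years * 365 / robot_build_days
hours_speedup = robot_hours_per_day / human_hours_per_day

print(f"New skilled worker available ~{creation_speedup:.0f}x faster")
print(f"~{hours_speedup:.0f}x more working hours per worker per day")

# On top of this, point (2) above notes you can invest money into creating more robots
# (unlike humans), so the stock of workers can compound on timescales of months rather
# than decades, which is the massive efficiency improvement implied by accepting (3).
```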

I don't see why current robot growth rates are relevant. When you have two different technologies A and B where A works better now, but B is getting better faster than A, then there will predictably be a big jump in the use of B once it exceeds A, and extrapolating the growth rates of B before it exceeds A is going to predictably mislead you.

(For example, I'd guess that in 1975, you would have done better thinking about how / when the personal computer would overtake other office productivity technologies, perhaps based on Moore's law, rather than trying to extrapolate the growth rate of personal computers. Indeed, according to a random website I just found, it looks like the growth rate accelerated till the EDIT: 1980s, though it's hard to tell from the graph.)

(To be clear, this argument doesn't necessarily get you to "transformative impact on growth comparable to the industrial revolution", I'd guess you do need to talk about innovations to get that conclusion. But I'm just not seeing why you don't expect a ton of scaling even if innovations are rarer, unless you deny (3), but it mostly seems like you don't deny (3).)

> “Hype” typically means Person X is promoting a product, that they benefit from the success of that product, and that they are probably exaggerating the impressiveness of that product in bad faith (or at least, with a self-serving bias).

All of this seems to apply to AI-risk-worriers?

  • AI-risk-worriers are promoting a narrative that powerful AI will come soon
  • AI-risk-worriers are taken more seriously, have more job opportunities, get more status, get more of their policy proposals adopted, etc, to the extent that this narrative is successful
  • My experience is that AI products are less impressive than the impression I would get from listening to AI-risk-worriers, and self-serving bias seems like an obvious explanation for this.

I generally agree that as a discourse norm you don't want to go around accusing people of bad faith, but as a matter of truth-seeking my best guess is that a substantial fraction of short-timelines claims amongst AI-risk-worriers are in fact "hype", as you've defined it.

Thanks for this, it's helpful. I do agree that declining growth rates is significant evidence for your view.

I disagree with your other arguments:

> For one, an AI-driven explosion of this kind would most likely involve a corresponding explosion in hardware (e.g. for reasons gestured at here and here), and there are both theoretical and empirical reasons to doubt that we will see such an explosion.

I don't have a strong take on whether we'll see an explosion in hardware efficiency; it's plausible to me that there won't be much change there (and also plausible that there will be significant advances, e.g. getting 3D chips to work -- I just don't know much about this area).

But the central thing I imagine in an AI-driven explosion is an explosion in the amount of hardware (i.e. way more factories producing chips, way more mining operations getting the raw materials, etc), and an explosion in software efficiency (see e.g. here and here). So it just doesn't seem to matter that much if we're at the limits of hardware efficiency.

> it is worth noting that the decline in growth rates that we have seen since the 1960s is not only due to decreasing population growth, as there are also other factors that have contributed, such as certain growth potentials that have been exhausted, and the consequent decline in innovations per capita.

I realize that I said the opposite in a previous comment thread, but right now it feels consistent with explosive growth to say that innovations per capita are going to decline; indeed I agree with Scott Alexander that it's hard to imagine it being any other way. The feedback loop for explosive growth is output -> people / AI -> ideas -> output; a core part of that feedback loop is increasing the "population".

(Though the fact that I said the opposite in a previous comment thread suggests I should really delve into the math to check my understanding.)
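Concretely, the rough shape of the math I have in mind is a standard semi-endogenous growth setup (this is my own sketch rather than anything from the post, and I haven't checked the exact parameter conditions):

```latex
% Sketch of the feedback loop (semi-endogenous growth form; parameter conditions unchecked)
\begin{align*}
  \dot{A} &= \delta\, A^{\phi} L^{\lambda}, \quad \phi < 1
      && \text{(ideas get harder to find: innovations per researcher decline)} \\
  Y &= A^{\sigma} L
      && \text{(output produced from ideas and labor)} \\
  \dot{L} &= s\, Y
      && \text{(the AI case: a fraction of output is reinvested in more ``population'')}
\end{align*}
```

With the "population" fixed or growing slowly, the diminishing returns to ideas (phi < 1) mean growth slows over time; but once the population is something you can accumulate out of output (the AI case in the last line), the output -> population -> ideas -> output loop can make growth accelerate for a range of parameter values, even while innovations per capita keep declining.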

(The rest of this is nitpicky details.)

Incidentally, the graphs you show for the decline in innovations per capita start dropping around 1900 (I think; I'm guessing for the one that has "% of people" as its x-axis), which is pretty different from the 1960s.

Also, I'm a bit skeptical of the graph showing a 5x drop. It's based on an analysis of a book written by Asimov in 1993 presenting a history of science and discovery. I'm pretty worried that this will tend to disadvantage the latest years, because (1) there may have been discoveries that weren't recognized as "meeting the bar", since their importance hadn't yet been understood, (2) Asimov might not have been aware of the most recent discoveries (though this point could also go the other way), and (3) as time goes on, discoveries become more and more diverse (there are way more fields) and hard to understand without expertise, and so Asimov might not have noticed them.
