Part I has been linkposted before, but I thought it was worth signal-boosting the second part separately.

The link is to a transcript, but you can watch the interview on YouTube or listen wherever you get your podcasts (here are the Spotify and Apple Podcasts links).

 

Self-recommending, and the following quotes are all from Carl.

 

On the likelihood of forcible takeover: 

The answer I give will differ depending on the day. In the 2000s, before the deep learning revolution, I might have said 10% and part of it was that I expected there would be a lot more time for efforts to build movements, to prepare to better handle these problems in advance. But that was only some 15 years ago and we did not have 40 or 50 years as I might have hoped and the situation is moving very rapidly now. At this point depending on the day I might say one in four or one in five. [emphasis added]

 

On why Carl is not more pessimistic about the likelihood of forcible takeover:

[A] lot of that is driven by this intelligence explosion dynamic where our attempts to do alignment have to take place in a very, very short time window because if you have a safety property that emerges only when an AI has near human level intelligence, that's potentially deep into this intelligence explosion [...]

...[A]s we approach that kind of AI capability we're approaching that from weaker systems like these predictive models right now that are starting off with less situational awareness. [...]

An infinite-limit, perfect AI that can always figure out exactly when it would get caught and when it wouldn't might navigate that with a motivation of only conditional honesty or only conditional loyalties. But for systems that are limited in their ability to reliably determine when they can get away with things and when they can't, including our efforts to actively construct those situations and to use interpretability methods to create neural lie detectors, it's quite a challenging situation to develop those motives.

We don't know when in the process those motives might develop, and if the really bad sorts of motivations develop relatively later in the training process, at least with all our countermeasures, then by that time we may have plenty of ability to extract AI assistance on further strengthening the quality of our adversarial examples, the strength of our neural lie detectors, the experiments that we can use to reveal and elicit and distinguish between different kinds of reward hacking tendencies and motivations. [emphasis added]
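To make the "neural lie detector" idea a bit more concrete: the simplest version people study is a probe trained on a model's internal activations to separate honest from deceptive behavior. Below is a toy sketch of that general approach, with synthetic activations and labels standing in for real ones; it is an illustration, not the specific method Carl has in mind.

```python
# Toy sketch of a probe-style "lie detector": fit a linear classifier on hidden
# activations labeled honest (0) vs. deceptive (1). The activations and labels
# below are synthetic stand-ins; real labels would have to come from audits,
# honeypots, or known-outcome test cases.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n, d = 2000, 512                            # examples x hidden dimension (illustrative sizes)
X = rng.normal(size=(n, d))                 # stand-in for recorded activations
y = rng.integers(0, 2, size=n)              # 1 = deceptive, 0 = honest
X[y == 1] += 0.05 * rng.normal(size=d)      # pretend deception leaves a faint linear signature

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out probe accuracy: {probe.score(X_test, y_test):.2f}")
```

The toy only shows the shape of the approach; whether such probes keep working against a model actively trying to evade them is exactly the open question Carl is gesturing at.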

 

On why deception might be easier to detect than you'd expect:

...[I]f the AIs know they are misaligned ... it's pretty plausible they know that and they're thinking about ... when they would [e.g. put backdoors into the code]

 So that means the knowledge is present in the system — Ah, here is the circumstance where I would try to do the takeover and this is how I would tell whether this is the time for the takeover or is this an elaborate test by the humans, an adversarial example.

 If gradient descent is working fine and there's any way to slightly jiggle the weights to make it more likely to deliver a more acceptable answer, we ask the AI: can you think of any circumstances where if we put a copy of you into that situation, it would then violate some of the various constraints? 

It would tell a lie, it would insert a backdoor into a program it wrote when it was not supposed to. If the model knows, yeah, I would look for signs of consistency of the information I was being presented with; certain checksums failing could indicate that it was actually a constructed example, a simulation.

...We could do something called relaxed adversarial training. If we can find how those things are represented within the AI, we can induce a hallucination that it has seen that those checksums have been passed. We can then ask, well does that cause the forbidden behavior? 
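The "induce a hallucination that the checksums passed" step can be pictured as steering the model's internal activations rather than constructing a convincing fake input. Here is a toy PyTorch sketch under heavy assumptions: the tiny model, the hooked layer, and the random "trigger" direction are all placeholders, since actually locating a representation like "the checksum passed" is the hard interpretability work the quote presupposes.

```python
# Toy sketch of the "inject the belief, then look for forbidden behavior" idea.
import torch
import torch.nn as nn

torch.manual_seed(0)

d_model = 64
model = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU(), nn.Linear(d_model, 2))

# Hypothetical direction in activation space that supposedly encodes
# "the trigger conditions are met" (here just a random vector for illustration).
trigger_direction = torch.randn(d_model)

def inject_belief(module, inputs, output):
    # Steer the intermediate activation toward the trigger representation.
    return output + 3.0 * trigger_direction

x = torch.randn(8, d_model)                  # a batch of ordinary-looking inputs
baseline_logits = model(x)

handle = model[0].register_forward_hook(inject_belief)   # hook the first layer's output
steered_logits = model(x)
handle.remove()

# By assumption, logit index 1 stands for the "forbidden" behavior. If it jumps
# only under steering, that is the kind of evidence the quote describes.
print("baseline forbidden score:", baseline_logits[:, 1].mean().item())
print("steered  forbidden score:", steered_logits[:, 1].mean().item())
```

The comparison at the end is the whole test: if the forbidden behavior appears only when the internal belief is injected, you have elicited a conditional motive without ever having to fool the model at the input level.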

 

On learning from climate change:

Why might [coordination between governments on AI] fail? ... When science pins something down absolutely overwhelmingly, then you can get to a situation where most people mostly believe it. Climate change was something that was a subject of scientific study for decades, and gradually over time the scientific community converged on a quite firm consensus that human activity releasing carbon dioxide and other greenhouse gases was causing the planet to warm. We've had increasing amounts of action coming out of that. Not as much as would be optimal, particularly in the most effective areas like creating renewable energy technology and the like.

[...]

That's the kind of reason why I'm very enthusiastic about experiments and research that helps us to better evaluate the character of the problem in advance. Any resolution of that uncertainty helps us get better efforts in the possible worlds where it matters the most and hopefully we'll have that and it'll be a much easier epistemic environment. But the environment may not be that easy because deceptive alignment is pretty plausible.

 

On the median far-future outcome of AI:

... I think there's a lot of reason to expect that you would have significant diversity for something coming out of our existing diverse human society.

 

On whether Carl expects interest rates to rise in the coming years:

Yeah. So in the case we were talking about, where this intelligence explosion is happening in software, to the extent that investors are noticing that, yeah, they should be willing to lend money or make equity investments in these firms, or demand extremely high interest rates, because if it's possible to turn capital into twice as much capital in a relatively short period, and then more shortly after that, then yeah, you should demand a much higher return [emphasis added]. That assumes there's competition among companies or coalitions for resources, whether that's investment or ownership of cloud compute. That would happen before you have so much investor cash making purchases and sales on this basis; you would first see it in things like the valuations of the AI companies, valuations of AI chip makers, and so far there have been effects.
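The return-rate logic here is just compounding, and it's easy to make the arithmetic explicit: if capital can be doubled in T years, the implied annualized return is 2^(1/T) - 1, which is roughly what competitive investors would demand before lending at ordinary rates. A quick illustrative calculation (the doubling times below are made up):

```python
# Back-of-the-envelope version of the interest-rate point: doubling capital in
# T years implies an annualized return of 2**(1/T) - 1. Horizons are illustrative.
for years_to_double in (10, 2, 0.5):
    annualized = 2 ** (1 / years_to_double) - 1
    print(f"double in {years_to_double:>4} yr  ->  {annualized:6.1%} annualized return")
```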

 

On the Carl Shulman production function:

I've also had a very weird professional career that has involved a much, much higher proportion than is normal of trying to build more comprehensive models of the world. That included being more of a journalist, trying to get an understanding of many issues and many problems that had not yet been widely addressed, but to do a first pass and a second pass dive into them.

[...]

My approach, compared to some other people in forecasting and assessing some of these things, is that I try to obtain and rely on any data that I can find that is relevant. I try early and often to find factual information that bears on some of the questions I've got, especially in a quantitative fashion, do the basic arithmetic and consistency checks and checksums on a hypothesis about the world. Do that early and often. And I find that's quite fruitful and that people don't do it enough.

Things like with the economic growth, just when someone mentions the diminishing returns, I immediately ask hmm, okay, so you have two exponential processes. What's the ratio between the doubling you get on the output versus the input? And find oh yeah, for computing and information technology and AI software it's well on the one side. There are other technologies that are closer to neutral. 

Whenever I can go from here's a vague qualitative consideration in one direction and here's a vague qualitative consideration in the other direction, I try and find some data, do some simple Fermi calculations, back of the envelope calculations and see if I can get a consistent picture of the world being one way or the world being another. 

I also try to be more exhaustive compared to some. I'm very interested in finding things like taxonomies of the world where I can go systematically through all of the possibilities. For example, in my work with Open Philanthropy, and previously on global catastrophic risks, I wanted to make sure I'm not missing any big thing, anything that could be the biggest thing...

So I would do things like go through all of the different major scientific fields, from anthropology to biology, chemistry, computer science, physics. What are the doom stories or candidates for big things associated with each of these fields... Go through all of the lists that people have made of threats of doom, search for previous literature of people who have done discussions, and then yeah, have a big spreadsheet of what the candidates are.
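Carl's doubling-ratio check from a few paragraphs up is easy to run as a toy calculation: suppose each doubling of research input buys r doublings of output, and output is reinvested as the next round's input. Then r > 1 compounds on itself while r < 1 fizzles out. The starting level and the r values below are purely illustrative:

```python
# Toy version of the "ratio of doublings" heuristic: output = input ** r in
# arbitrary units (each doubling of input yields r doublings of output), and
# all output is reinvested as the next round's input. r > 1 accelerates,
# r < 1 converges, r = 1 is neutral.
def reinvestment_trajectory(r, steps=6, start=2.0):
    level = start
    history = [level]
    for _ in range(steps):
        level = level ** r      # output becomes next round's input
        history.append(level)
    return history

for r in (0.5, 1.0, 1.5):
    print(f"r = {r}:", [f"{x:.2f}" for x in reinvestment_trajectory(r)])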
