
We just published an interview: Lennart Heim on the compute governance era and what has to come after. You can click through for the audio, a full transcript, and related links. Below are the episode summary and some key excerpts.

Episode summary

People think [these controls are] about chips which go into tanks or rockets. No, those are not the chips which go on tanks or rockets. These chips in the tanks and rockets are closer to one in your washing machine; it’s not that sophisticated. It’s different if you try to calculate a trajectory of a missile or something: then you do it on supercomputers, and maybe the chips are closer.

We’re talking about the chips which are used in data centres for AI training.

Lennart Heim

As AI advances ever more quickly, concerns about potential misuse of highly capable models are growing. From hostile foreign governments and terrorists to reckless entrepreneurs, the threat of AI falling into the wrong hands is top of mind for the national security community.

With growing concerns about the use of AI in military applications, the US has banned the export of certain types of chips to China.

But unlike the uranium required to make nuclear weapons, or the material inputs to a bioweapons programme, computer chips and machine learning models are absolutely everywhere. So is it actually possible to keep dangerous capabilities out of the wrong hands?

In today’s interview, Lennart Heim — who researches compute governance at the Centre for the Governance of AI — explains why limiting access to supercomputers may represent our best shot.

As Lennart explains, an AI research project requires many inputs, including the classic triad of compute, algorithms, and data.

If we want to limit access to the most advanced AI models, focusing on access to supercomputing resources — usually called ‘compute’ — might be the way to go. Both algorithms and data are hard to control because they live on hard drives and can be easily copied. By contrast, advanced chips are physical items that can’t be used by multiple people at once and come from a small number of sources.

According to Lennart, the hope would be to enforce AI safety regulations by controlling access to the most advanced chips specialised for AI applications. For instance, projects training ‘frontier’ AI models — the newest and most capable models — might only gain access to the supercomputers they need if they obtain a licence and follow industry best practices.

We have similar safety rules for companies that fly planes or manufacture volatile chemicals — so why not for people producing the most powerful and perhaps the most dangerous technology humanity has ever played with?

But Lennart is quick to note that the approach faces many practical challenges. Currently, AI chips are readily available and untracked. Changing that will require the collaboration of many actors, which might be difficult, especially given that some of them aren’t convinced of the seriousness of the problem.

Host Rob Wiblin is particularly concerned about a different challenge: the increasing efficiency of AI training algorithms. As these algorithms become more efficient, what once required a specialised AI supercomputer to train might soon be achievable with a home computer.

By that point, tracking every aggregation of compute that could prove to be very dangerous would be both impractical and invasive.

With only a decade or two left before that becomes a reality, the window during which compute governance is a viable solution may be a brief one. Top AI labs have already stopped publishing their latest algorithms, which might extend this ‘compute governance era’, but not for very long.

If compute governance is only a temporary phase between the era of difficult-to-train superhuman AI models and the time when such models are widely accessible, what can we do to prevent misuse of AI systems after that point?

Lennart and Rob both think the only enduring approach requires taking advantage of the AI capabilities that should be in the hands of police and governments — which will hopefully remain superior to those held by criminals, terrorists, or fools. But as they describe, this means maintaining a peaceful standoff between AI models with conflicting goals that can act and fight with one another on the microsecond timescale. Being far too slow to follow what’s happening — let alone participate — humans would have to be cut out of any defensive decision-making.

Both agree that while this may be our best option, such a vision of the future is more terrifying than reassuring.

Lennart and Rob discuss the above as well as:

  • How can we best categorise all the ways AI could go wrong?
  • Why did the US restrict the export of some chips to China and what impact has that had?
  • Is the US in an ‘arms race’ with China or is that more an illusion?
  • What is the deal with chips specialised for AI applications?
  • How is the ‘compute’ industry organised?
  • Downsides of using compute as a target for regulations
  • Could safety mechanisms be built into computer chips themselves?
  • Who would have the legal authority to govern compute if some disaster made it seem necessary?
  • The reasons Rob doubts that any of this stuff will work
  • Could AI be trained to operate as a far more severe computer worm than any we’ve seen before?
  • What does the world look like when sluggish human reaction times leave us completely outclassed?
  • And plenty more

Get this episode by subscribing to our podcast on the world’s most pressing problems and how to solve them: type ‘80,000 Hours’ into your podcasting app. Or read the transcript below.

Producer: Keiran Harris
Audio mastering: Milo McGuire, Dominic Armstrong, and Ben Cordell
Transcriptions: Katy Moore

Highlights

Is it possible to enforce compute regulations?

Rob Wiblin: Do you think it would be practical to restrain people from aggregating this amount of compute before they do it?

Lennart Heim: Um… Open question. Put it this way: I think what makes me optimistic is that we’re talking about ordering more than 1,000 chips for a couple of months. That’s fewer than 100 actors, probably fewer than 30 actors, in the world who are doing this, right? We’re talking about training runs whose costs are in the single-digit millions here.

And is it then possible to enforce this? I think eventually we’d maybe just start it voluntarily — and I think a bunch of AGI labs would eventually sign up to this, because they have currently shown some interest in it, like, “Hey, we want to be responsible; here’s a good way of doing it.” And one way of enforcing it is via the cloud providers — then I don’t need to talk to all the AGI labs; I only need to talk to all the compute providers. To some degree, I want a registry of everybody who has more than 5,000 chips sitting somewhere, and then to know who’s using a lot of these chips, and maybe for what. You could imagine this being voluntary in the beginning, and maybe later enforced by these cloud providers.

But of course, there are many open questions regarding how you eventually enforce this, in particular getting these insights. Whole cloud providers are built around the notion that they don’t know what you’re doing on their compute. That’s the reason why, for example, Netflix doesn’t have its own data centres; it uses Amazon Web Services — even though Amazon, with Amazon Prime, is a direct competitor. But they’re just like, “Yeah, we can do this because you don’t have any insight into this workload anyway, because we believe in encryption.” And Amazon’s like, “Yeah, seems good. We have this economy of scale. Please use our compute.” Same with Apple: they just use a bunch of data centres from Amazon and others, even though they’re in direct competition with them.

So there’s little insight there. The only insight you eventually have is how many chips for how many hours, because that’s how you bill them. And I think this already gets you a long way.
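
To make that “how many chips for how many hours” accounting concrete, here is a minimal sketch of the kind of reporting threshold a registry might use. It is illustrative only: the 5,000-chip figure and the “more than 1,000 chips for a couple of months” figure are taken from Lennart’s examples, and every name in the code is hypothetical.

```python
from dataclasses import dataclass

# Illustrative thresholds only, loosely based on the figures Lennart mentions.
CHIP_COUNT_THRESHOLD = 5_000            # "more than 5,000 chips sitting somewhere"
CHIP_HOUR_THRESHOLD = 1_000 * 24 * 60   # ~1,000 chips running around the clock for ~2 months

@dataclass
class ComputeJob:
    customer: str
    chips: int
    hours: float

    @property
    def chip_hours(self) -> float:
        # The one signal a cloud provider has by default: chips times hours billed.
        return self.chips * self.hours

def needs_reporting(job: ComputeJob) -> bool:
    """Flag jobs that a hypothetical registry might ask providers to report."""
    return job.chips >= CHIP_COUNT_THRESHOLD or job.chip_hours >= CHIP_HOUR_THRESHOLD

jobs = [
    ComputeJob("small-startup", chips=64, hours=720),
    ComputeJob("frontier-lab", chips=8_000, hours=1_440),
]
for job in jobs:
    print(job.customer, "report" if needs_reporting(job) else "ignore")
```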

Rob Wiblin: How big an issue would it be that, if the US puts in place rules like this, you just go overseas and train your model somewhere else?

Lennart Heim: Yeah, but maybe the export controls could just make sure that these chips never go overseas, or don’t go to countries where we don’t trust that people will enforce this. This is another way we can think about it: maybe the chips only go to allies of the US, where they can eventually also enforce these controls. And then the US can enforce this, particularly with all of the allies within the semiconductor supply chain, to just make sure, like, “We have these new rules; how are we going to use AI chips responsibly?” — and you only get these chips if you follow these rules. That’s one way of going about it. Otherwise the chips are not going to go there.

Safe havens of AI compute

Lennart Heim: I think this is a key thing I find myself doing when I talk to policymakers: they just always love hardware. It’s like, “Great, it’s just going to work.” And I’m like, “No, actually, stuff is not secure. Have you seen the state of cybersecurity? It’s terrible.” The fact that an iPhone is as secure as it is right now — that’s years of work. And I think there’s probably an exploit out there right now for listening in on iPhone conversations, but it costs you $100 million — so you only use it on literally really big targets; it’s not something a random hacker on the street does.

I think it’s really important that whenever you’d need to reinvent the wheel regarding security, you just don’t do it. There’s this saying — “never roll your own crypto” — just don’t do it; use existing libraries to do this kind of stuff. I think it’s the same here: I mostly want to use existing mechanisms. I think there’s still some research to be done, and this is part of the reason why we want to roll it out as fast as possible.

Another way to think about this is that a lot of these mechanisms rely on the fact that you can hack them if you have physical access to the compute. We have not talked about it a lot yet, but compute does not need to sit under your desk for you to use it: you can just use cloud computing. I can right now access any compute in the world if people wanted me to. This is useful if I implement something in hardware that you’d need to physically tamper with to defeat: you can’t, because you’re only accessing it virtually, right? And even if you could tamper with it via software, guess what? After your rental contract runs out, we’re just going to reset the whole hardware. We reflash the firmware, we run some integrity checks, and here we go, here we are again.

So maybe to build on top of this: we previously talked about the semiconductor supply chain. People should think about the compute supply chain, which goes one step further. At some point your chips go somewhere, and most of the time the chips sit in large data centres owned by big cloud providers. So we definitely see that most AI labs right now are either a cloud provider or they partner with a cloud provider. So if we then think about choke points, guess what? Cloud is another choke point. This is a really nice way to restrict access, because right now I can give you access and you can use it — and if you start doing dangerous shit, or I’m getting worried about it, I can just cut it off at any time.

This is not the same with hardware. Once the chip has left the country and gone somewhere, I have a way harder time. So maybe the thing you want to build is some safe havens of AI compute — where you enable these mechanisms we just talked about, where you can be way more sure they actually work; and even if somebody misuses the compute, at a minimum you can then cut off the access. So the general move towards cloud computing — which I think is happening anyway because of the economy of scale — is probably favourable from a governance point of view, where you can just intervene and make sure it’s used in a more responsible manner.
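
As a rough picture of the “reflash the firmware, run some integrity checks” step Lennart mentions above, here is a minimal sketch that compares a device’s firmware against a known-good image by hash. The file paths and helper names are invented for illustration; a real provider’s attestation stack would be far more involved.

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a firmware image in chunks so large files don't need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical paths: the provider's known-good ("golden") image, and the image
# read back from an accelerator after a customer's rental contract ends.
GOLDEN_IMAGE = Path("/opt/provider/firmware/accelerator-v1.2.bin")
READ_BACK = Path("/tmp/readback/accelerator-slot-17.bin")

if GOLDEN_IMAGE.exists() and READ_BACK.exists():
    if sha256_of(READ_BACK) != sha256_of(GOLDEN_IMAGE):
        # In Lennart's framing: reflash from the golden image and investigate
        # before handing the hardware to the next tenant.
        print("Integrity check failed: reflash firmware and quarantine the device.")
```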

Rob Wiblin: Yeah, OK. So this is kind of an exciting and interesting point, that people or many organisations currently have physical custody of the chips that they’re using for computing purposes. If we came to think that any aggregation of significant amounts of compute was just inherently very dangerous for humanity, then you could have a central repository, where only an extremely trusted group had custody. I guess it probably would be some combination of a company that’s running this and the government also overseeing it — as you might get with, I suppose, private contractors who are producing nuclear missiles or something like that — and then you could basically provide compute to everyone who wants it.

And I suppose for civil liberties reasons, you would maybe want to have some restrictions on the amount of oversight that you’re getting. You’d have some balancing act here between wanting to not intervene on what people can do on computers, but also needing to monitor to detect dangerous things. That could be quite a challenging balancing act. But basically it is, in principle, possible in the long term to prevent large numbers of groups from having physical custody of enormous numbers of chips — and indeed it might be more economical for most people, for most good actors, to not have physical custody anyway; that they would rather do it through a cloud computing provider. Which then creates a very clear node where probably these hardware-enabled mechanisms really could flourish, because it would be so much harder to tamper with them.

Lennart Heim: Yeah, maybe you don’t even need them there, because you just have somebody who’s running it. And we definitely see a strong pivot towards the cloud. No AI lab keeps the AI servers they eventually need to run these systems in a basement. They’re sitting elsewhere, somewhere close to a bunch of power and a bunch of water. And if you can just make these facilities more secure and run them responsibly, I think this might be a pretty exciting place to get to.

You could even think about the most extreme example: a compute bank, right? We had a similar idea with nuclear fuel: just build a nuclear fuel bank. And here we’d just have a compute bank: there’s a bunch of data centres in the world, we manage them internationally via a new agency, and we manage access to them. And again, here we mostly want to talk about the frontier AI systems — the big, big systems — you just want to make sure they’re developed in a responsible and safe manner there.

Are we doomed to become irrelevant?

Rob Wiblin: OK, so: what are we going to do? You were starting to raise this issue of offence/defence balance, where you’re saying that maybe this compute stuff is not going to cut it forever; now we need to start thinking about a different approach. And that approach might be that, sure, the amateur on their home computer, or the small business, might be able to train quite powerful models, but we should still expect that enormous internet giants like Google or authorities like the US government should have substantially better models. Even if it’s impressive what I can access on my home computer, there’s no way that I’m going to have access to the best, by any stretch of the imagination.

So how might we make things safer on a more sustainable basis? Perhaps what we need is to use that advantage that the large players — hopefully the more legitimate and hopefully the well-intentioned players — have, in order to monitor what everyone else is doing or find some way to protect against the harmful effects that you might get from mass proliferation to everyone.

Maybe this does sound crazy to people, or maybe it doesn’t, but I feel like what we’re really talking about here is having models that are constantly vigilant. I’ve been using the term “sentinel AIs”: models that are monitoring everything that’s happening on the internet and can spring into action whenever they notice that someone — whether it be an idiot or a joker or a terrorist or a hostile state — is beginning to do something really bad with their AIs, and prevent it. Hopefully that relies on the fact that the cutting-edge model the US government has is far above whatever it’s going to be competing with.

But this is a world, Lennart, in which humans are these kind of irrelevant, fleshy things that can’t possibly comprehend the speed at which these AI combatants are acting. They just have this autonomous standoff/war with one another across the Earth… while we watch on and hope that the good guys win.

Sorry, that was an extremely long comment for me, but am I understanding this right?

Lennart Heim: I mean, we are speculating here about the future. So are we right? I don’t know. I think we’re pointing to a scenario which we can eventually imagine, right? And I’m having a hard time giving you exact answers — particularly if the bar for AI governance, or for my research, is something like “stop access forever.” I think that’s a really high burden to eventually fulfil.

What I’m pointing out is that we have this access effect, but we need to think about the defence capabilities here, in particular if you think about regulating the frontier. And I think this is part of what makes me a bit more optimistic. You’ve just described one scenario where we have these AI defender systems and they’re fighting — they’re just doing everything. Maybe this works well and we can just enjoy ourselves, right? Like, having a good time, and it seems great. But maybe it’s also just more manual; I think it’s not really clear to me.

But I think the other aspect to keep in mind here is that we’re talking about — let’s just say this is a GPT-6 system, everybody can train it, whatever — this future system; maybe the system is, again, not dangerous. Maybe there’s going to be a change in the game — again, where we go from this 89% to this 90% or something along these lines — which makes a big difference in capabilities, right? This dynamic gives the defender a big advantage there. Maybe people don’t even have an interest in using all of these systems, because the other systems are just way better.

We’re now thinking about malicious actors who are trying to do this. I would expect the majority of people not to want to do this. Those are problems you already have right now, where people can just buy guns. And this goes wrong a lot of the time, but it’s not like every second person in the world wants to buy guns and do terrible things with them. Maybe that’s the same with these kinds of futures. Maybe these defender systems are just sufficient to eventually fight these kinds of things off, in particular if you have good compute monitoring and general data centre monitoring regimes in place.

What’s important to think about here is that compute has been doubling every six months. This might not continue forever, or it might continue for a long time. And all the other aspects which reduce the compute threshold have not been growing that fast. So again, all I’m saying is that it buys us a couple more years, right? More than the 10, 20, 30. Maybe that’s what I’m pointing to.
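
To put the “doubling every six months” figure in perspective, a six-month doubling time compounds to roughly a quadrupling every year. The snippet below is just that arithmetic, not a forecast of how long the trend will hold.

```python
# Training compute doubling every ~6 months compounds to roughly 4x per year.
DOUBLING_TIME_YEARS = 0.5

def growth_factor(years: float, doubling_time: float = DOUBLING_TIME_YEARS) -> float:
    """How much larger compute gets over `years` if the doubling time holds."""
    return 2 ** (years / doubling_time)

for years in (5, 10, 20):
    print(f"{years:>2} years -> ~{growth_factor(years):,.0f}x more compute")
# Prints roughly 1,024x, 1,048,576x, and 1,099,511,627,776x respectively.
```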

But overall, what we try to do with AI governance is like: yeah, AI is coming, this might be a really really big deal. It will probably be a really big deal. And we need to go there in a sane, sensible, well-managed way with these institutions. And there are many open questions, as you just outlined, where we don’t have the answers yet — we don’t even know if this is going to be the case, but we can imagine this being the case. And we just need the systems in place to deal with this.

Rob's computer security bad dreams

Rob Wiblin: OK, so we’ve been talking a little bit about my nightmares and my bad dreams and where Rob’s imagination goes when he imagines how this is all going to play out. Maybe let’s talk about another one of these that I’ve been mulling over recently as I’ve been reading a lot about AI and seeing what capabilities are coming online, this time a bit more related to computer security specifically.

So the question is: if GPT-6 (or some future model that’s more agentic than GPT-4) were instructed to hack into as many servers as possible, and then use the new compute available to it from having done that to run more copies of itself — copies that then also need to hack into other computer systems, and maybe train themselves in one direction or another, or use that compute to find new security vulnerabilities they can then use to break into other sources of compute, and so on, on and on — how much success do you think it might have?

Lennart Heim: I think it’s definitely a worry which a bunch of people talk about. As I just hinted at before, I think computer security is definitely not great. I do think computer security at data centres is probably better than at other places. And I feel optimistic about detecting it: it might be a cat-and-mouse game here, but eventually you can detect it.

Why is this the case? Well, every server only has finite throughput. That’s just the case, as we just talked about: there are only so many FLOPS that can be run. So there’s a limited number of copies that can run there, and data centres are trying to utilise their compute as efficiently as possible. Right now you can expect most data centres to run at at least 80% utilisation or something, because otherwise they’re just throwing money out of the window — and nobody wants to do this.

So if the GPT-6 system — if this bad worm — comes along and hacks into the system, there’s only so much compute available which it can eventually use. Then it’s a bit tricky, because there’s a bit of spare capacity here and a bit there — it’s kind of a scheduling problem. And if it were to kick the other workloads out, somebody would notice: “I was running this science experiment and it never really finished. What’s going on there?” And data centres are already doing this kind of monitoring.

I think the best example we’ve already seen in the real world is the malware where people’s personal computers were used for crypto mining. The malware is just running on your computer, trying to use your processor to mine crypto for the hacker’s personal wallet. And people started noticing this, mostly like, “My computer is a bit slower than normal.” So the attackers modified the algorithm so it only used 20% of the processing performance, so you wouldn’t detect it. But if you actually go full throttle, literally, your laptop fan turns on, and it’s like, what’s going on there? If people see their laptop’s utilisation going up to 100% without them doing anything, be suspicious. You should probably reset the thing.

And I think it’s the same for data centres. Where it’s like, “Oh, there is a computer worm here. It’s doing something. Let’s try to kick it out.” And then you can imagine a cat-and-mouse game which might be a bit more complicated. And maybe this is part of the reason why I’m advocating for the thing which no data centre provider wants, which is a big red off switch. Maybe I actually want this.

Because normally you’re trying to optimise uptime — that’s what you want to go for as a data centre provider. They definitely have different tiers there: you have the highest uptime, you’re the coolest data centre out there. And here we just want, “Gosh, we literally lost control here. Let me just turn off all of the things.” Maybe doing it on a virtual or software level — turning off virtual machines — is not sufficient, because it’s a really sophisticated computer worm that’s already trying to escape. You’d literally just turn off the compute, do forensics on what’s been going on there, and try to defend against it.

What the worm eventually exploits there are existing security bugs and holes, and usually we fix them once we figure out what they are. This takes a little bit of time, but at least, compared to AI systems, we have some clue: we at least develop these systems in a way we understand, so we can try to fix them.
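
The detection logic Lennart sketches — data centres run close to capacity, so cycles consumed by work the scheduler never placed there stand out — can be pictured as a toy utilisation check. The 80% baseline is his ballpark figure; everything else in this sketch is invented for illustration.

```python
# Toy anomaly check in the spirit of Lennart's example: a cluster normally runs
# near its expected utilisation, so compute consumed by unscheduled work stands out.
EXPECTED_UTILISATION = 0.80   # Lennart's ballpark for a well-run data centre
TOLERANCE = 0.05              # invented slack for normal fluctuation

def unexplained_load(measured_utilisation: float, scheduled_utilisation: float) -> float:
    """Fraction of the cluster busy with work the scheduler didn't place there."""
    return max(0.0, measured_utilisation - scheduled_utilisation)

def looks_suspicious(measured: float, scheduled: float) -> bool:
    return unexplained_load(measured, scheduled) > TOLERANCE

# Example: the scheduler accounts for 80% load but the hardware reports 97%.
print(looks_suspicious(measured=0.97, scheduled=EXPECTED_UTILISATION))  # True
```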

Rob Wiblin: I probably misspoke when I said if GPT-6 were instructed to do this — because it would be much more sensible to have a model that’s extremely specialised at hacking into all kinds of computer systems, which is a much narrower task than being able to deal with any input and output of language whatsoever. So it probably would be quite specialised.

Why do you need such small transistors?

Rob Wiblin: One thing I don’t quite understand is: A chip on a phone, you really need it to be very small — to have a lot of transistors in a tiny amount of space, and to use very little power. But if you’re running a supercomputer, you don’t really care about the physical footprint that much: you can stick it out of town, you can stick it in a basement, you can spread it out. So if you’re trying to create a lot of compute in order to train these models, why do you need the transistors to be so small? Why can’t you make them bigger, but just make a hell of a lot of them?

Lennart Heim: Smaller transistors are just more energy efficient. So basically the energy per FLOP goes down over time. And energy costs are a big part of the cost, so this enables you to eventually go cheaper. You can also make these chips go faster, because you produce less heat. Cooling is a big thing: when we talk about chips, the reason why your smartphone is not running that fast is that it’s only passively cooled, right? And performance eventually takes a big hit from that.
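
Lennart’s efficiency point is easiest to see with a rough energy-cost calculation: energy use is total FLOP divided by FLOP per joule, and better transistors raise that denominator. Every number below is a placeholder chosen only to show the arithmetic, not an estimate for any real chip or training run.

```python
# Rough energy-cost arithmetic for a hypothetical training run.
# Every number here is a placeholder, not a real estimate.
TOTAL_TRAINING_FLOP = 1e24        # total FLOP for the run (assumed)
EFFICIENCY_FLOP_PER_JOULE = 1e11  # chip efficiency (assumed); higher is better
ELECTRICITY_USD_PER_KWH = 0.10    # assumed electricity price

energy_joules = TOTAL_TRAINING_FLOP / EFFICIENCY_FLOP_PER_JOULE
energy_kwh = energy_joules / 3.6e6   # 1 kWh = 3.6 million joules
cost_usd = energy_kwh * ELECTRICITY_USD_PER_KWH

print(f"Energy: {energy_kwh:,.0f} kWh, cost: ${cost_usd:,.0f}")
# Doubling FLOP-per-joule efficiency halves both numbers, which is why smaller,
# more efficient transistors matter even when floor space doesn't.
```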

Another thing to think about is that when we then have all of these chips and we want to hook them up, it just matters how long the cables are. We’re not talking about hooking something up to do your home internet — one gigabit or something — we’re talking about how we want high interconnect bandwidth, we want high-bandwidth zones. We literally want these things as close as possible to each other, and there are just limits to how long you can run these cables.

This is part of the reason why people are really interested in optical fibre, because you don’t have that much loss over longer cables. But then you have all of these other notions, like you need to turn the optical signal into an electronic signal. It’s an ongoing research domain, but people are definitely interested in this — just building a bigger footprint — because then you also have less heat per area.

This whole notion about data centres is really important to think about also, from a governance angle. I think that’s a big topic in the future. People should think carefully about this and see what we can do there and also how we can detect it. If we talk about advanced AI systems, we’re not talking about your GPU at home — we’re talking about supercomputers, we’re talking about facilities like AI production labs, whatever you want to call them. And there’s lots to learn there.

The field of compute governance needs people with technical expertise

Lennart Heim: For compute governance, we definitely need more technical expertise. I think that’s just a big thing. I think that’s also the biggest part where I’ve been able to contribute as somebody who’s studied computer engineering a bit and just has some idea how the stack eventually works. Within compute governance, you have really technical questions, where it’s pretty similar to doing a PhD where you actually work on important stuff. Then we also have the whole strategy and policy aspect, which is maybe more across the stack.

On the technical questions, I think we’ve pointed out a bunch of them during this conversation. There’s a bunch of things: What about proof of learning, proof of non-learning? How can we have certain assurances? Which mechanisms can we apply? How can we make data centres safer? How can we defend against all these cyber things we’ve just discussed? There’s a whole lot you can do there.

And also there are some questions we need computer engineers on. There are some questions which are more of a software engineering type, and a bunch of them overlap with information security: how can you make these systems safe and secure if you implement these mechanisms? And I think a bunch of the work is also just cryptography — people thinking about these proofs of learning and all the aspects there. So software engineers, hardware engineers, everybody across the stack should feel encouraged to work on this kind of thing.

The general notion which I’m trying to get across is that up to a year ago, I think people were not really aware of AI governance. Like a lot of technical folks were like, “Surely, I’m just going to try to align these systems.” I’m like, sure, this seems great — I’m counting on you guys; I need you. But there’s also this whole AI governance angle, and we’re just lacking technical talent. This is the case in think tanks; this is the case in governments; this is the case within the labs, within their governance teams. There’s just a deep need for these kinds of people, and they can contribute a lot, in particular if you have expertise in these kinds of things.

You just always need to figure out: what can I contribute? Maybe you become a bit agnostic about your field or something. Like if you were previously a compiler engineer: sorry, you’re not going to engineer compilers — that’s not going to be the thing — but you’ve learned some things. You know how you go from software to hardware. You might be able to contribute. Compiler engineers and others across the stack could just help right now, for example, with chip export controls — figuring out better ideas and better strategies there.

So a variety of things. But I’m just all for technical people considering governance — and this is, to a large extent, also a personal fit consideration, right? If you’re more of a people person, sure, go into policy; you’re going to talk to a lot of folks. If you’re more of a researchy person who wants to be left alone, sure, you can now just do deep-down research there. You’re not going to solve the alignment problem itself, but you’re going to invent mechanisms which enable us to coordinate and buy us more time to do this whole AI thing in a safe and sane way.
