
# Alex HT's Comments

AI Governance Reading Group Guide

It's pretty blank - something like this

[Updated 6/21] 'Existential Risk and Growth' Summary

Yeah, that seems right to me.

On doubling consumption though: if you can suggest a policy that increases growth consistently, eventually you might cause consumption to double (at some later time, consumption under the faster growth will be twice what it would have been under the slower growth). Do you mean you don't think you could suggest a policy change that would increase the growth rate by much?
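(As a back-of-the-envelope on that parenthetical, in my own notation: if a policy raises the growth rate from $g$ to $g+\Delta$, consumption on the faster path doubles relative to the counterfactual once $e^{\Delta t}=2$:)

$$\frac{c_{\text{fast}}(t)}{c_{\text{slow}}(t)} = e^{\Delta t} = 2 \iff t = \frac{\ln 2}{\Delta} \approx \frac{70}{\Delta\ (\text{in percentage points})}\ \text{years},$$

so even a persistent 1-percentage-point boost to growth takes roughly 70 years to double consumption relative to the counterfactual.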

[Updated 6/21] 'Existential Risk and Growth' Summary

Great to hear this has been useful!

I think if ε is around 1 then yes, spreading longtermism probably looks better than accelerating growth. Though I don't know how expensive it is to double someone's consumption in the long run.

Doubling someone's consumption by just giving them extra money might cost something like $30,000/year for 50 years, i.e. roughly $0.5 million after discounting. It seems right to me that there are ways to reduce the discount rate that are much cheaper than half a million dollars for 13 basis points. E.g. some community building probably takes a person's discount rate from around 2% to around 0% for less than half a million dollars.
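(Rough arithmetic behind that figure - the 5% discount rate here is my assumption, and I'm reading the $30,000 as an annual transfer:)

```python
# Back-of-the-envelope: present value of a $30,000/year transfer for 50 years,
# discounted at an assumed 5%/year (both numbers illustrative).
transfer_per_year = 30_000
discount_rate = 0.05
present_value = sum(transfer_per_year / (1 + discount_rate) ** t for t in range(50))
print(f"~${present_value:,.0f}")  # ~$575,000, i.e. on the order of $0.5 million
```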

I don't know how much cheaper it might be to double someone's consumption by increasing growth, but I suspect that spreading longtermism still looks better for this value of ε.

How confident are you that ε is around 1? I haven't looked into it and don't know how much consensus there is.

If you value future people, why do you consider near term effects?

What do you think the absorbers might be in cases of complex cluelessness? I see that delaying someone on the street might just cause them to spend 30 seconds less procrastinating, but how might this work for distributing bednets, or increasing economic growth?

Maybe there's a line of argument that nothing is counterfactual in the long term - because every time you solve a problem, someone else was going to solve it eventually. E.g. if you hadn't increased growth in some region, someone else would have 50 years later, and now that you have, they won't. But this just sounds like a weirdly stable system, and I guess this isn't what you have in mind.

[Stats4EA] Expectations are not Outcomes

Thanks for writing this. I hadn't thought about this explicitly and think it's useful. The bite-sized format is great. A series of posts would be great too.

Existential Risk and Economic Growth

So you think the hazard rate might go from around 20% to around 1%? That's still far from zero, and with enough centuries at 1% risk we'd expect to go extinct.
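(To put rough numbers on that, assuming for illustration that the 1% is a constant per-century extinction risk:)

$$P(\text{survive } n \text{ centuries}) = 0.99^{\,n}, \qquad 0.99^{100} \approx 0.37, \qquad 0.99^{1000} \approx 4\times10^{-5}.$$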

I don't have any specific stories tbh, I haven't thought about it (and maybe it's just pretty implausible idk).

Existential Risk and Economic Growth

Not the author but I think I understand the model so can offer my thoughts:

1. Why do the time axes in many of the graphs span hundreds of years? In discussions about AI x-risk, I mostly see something like 20-100 years as the relevant timescale in which to act (i.e. by the end of that period, we will either go extinct or else build an aligned AGI and reach a technological singularity). Looking at Figure 7, if we only look ahead 100 years, it seems like the risk of extinction actually goes up in the accelerated growth scenario.

The model is looking at general dynamics of risk from the production of new goods, and isn’t trying to look at AI in any kind of granular way. The timescales on which we see the inverted U-shape depend on what values you pick for different parameters, so there are different values for which the time axes would span decades instead of centuries. I guess that picking a different growth rate would be one clear way to squash everything into a shorter time. (Maybe this is pretty consistent with short/medium AI timelines, as they probably correlate strongly with really fast growth).

I think your point about AI messing up the results is a good one -- the model says that a boom in growth reduces x-risk on net because, while risk is increased in the short term, the long-term reduction more than cancels it out. But if AI comes in the next 50-100 years, then the long-term benefits never materialise.

2. What do you think of Wei Dai's argument that safe AGI is harder to build than unsafe AGI and we are currently putting less effort into the former, so slower growth gives us more time to do something about AI x-risk (i.e. slower growth is better)?

Sure, maybe there’s a lock-in event coming in the next 20-200 years which we can either

• Delay (by decreasing growth) so that we have more time to develop safety features, or
• Make more safety-focussed (by increasing growth) so it is more likely to lock in a good state

I'd think that what matters is resources (say coordination-adjusted-IQ-person-hours or whatever) spent on safety, rather than time that could be spent on safety if we wanted. So if we're poor and reckless, then more time isn't necessarily good. And the extra time spent being less rich might also make other x-risks more likely. But that's a very high-level abstraction that doesn't really engage with the specific claim too closely, so I'm keen to hear what you think.

3. What do you think of Eliezer Yudkowsky's argument that work for building an unsafe AGI parallelizes better than work for building a safe AGI, and that unsafe AGI benefits more in expectation from having more computing power than safe AGI, both of which imply that slower growth is better from an AI x-risk viewpoint?

The model doesn’t say anything about this kind of granular consideration (and I don’t have strong thoughts of my own).

4. What do you think of Nick Bostrom's urn analogy for technological developments? It seems like in the analogy, faster growth just means pulling out the balls at a faster rate without affecting the probability of pulling out a black ball. In other words, we hit the same amount of risk but everything just happens sooner (i.e. growth is neutral).

In the model, risk depends on the production of consumption goods, rather than on the level of consumption technology. The intuition behind this is that technological ideas themselves aren't dangerous; it's all the stuff people do with the ideas that's dangerous. E.g. synthetic biology understanding isn't itself dangerous, but a bunch of synthetic biology labs producing loads of exotic organisms could be dangerous.

But I think it might make sense to instead model risk as partially depending on technology (as well as production). E.g. once we know how to make some level of AI, the damage might be done, and it doesn't matter whether there are 100 instances or just one.

And the reason growth isn't neutral in the model is that there are also safety technologies (which might be analogous to making the world more robust to black balls). Growth means people value life more, so they spend more on safety.
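(In symbols, a stylized version of that setup - illustrative notation, not the paper's actual equation - would be something like:)

$$\delta_t = \bar{\delta}\,\frac{C_t^{\,a}}{S_t^{\,b}}, \qquad a, b > 0,$$

where $C_t$ is production of consumption goods and $S_t$ is production of safety goods, so the hazard rate falls over time whenever safety production grows fast enough relative to consumption production.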

5. Looking at Figure 7, my "story" for why faster growth lowers the probability of extinction is this: The richer people are, the less they value marginal consumption, so the more they value safety (relative to consumption). Faster growth gets us sooner to the point where people are rich and value safety. So faster growth effectively gives society less time in which to mess things up (however, I'm confused about why this happens; see the next point). Does this sound right? If not, I'm wondering if you could give a similar intuitive story.

Sounds right to me.

6. I am confused why the height of the hazard rate in Figure 7 does not increase in the accelerated growth case. I think equation (7) for δ_t might be the cause of this, but I'm not sure. My own intuition says accelerated growth not only condenses along the time axis, but also stretches along the vertical axis (so that the area under the curve is mostly unaffected).

The hazard rate does increase during the period when there is more production of consumption goods, but this also means people get rich earlier than they otherwise would, so they start valuing safety sooner.
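Here's a toy simulation of that compression intuition (my own stylized setup, not equation (7) from the paper: I just assume the hazard rate is a hump-shaped function of income alone - low when poor because there's little production, low when rich because lots is spent on safety, high in between):

```python
import numpy as np

def hazard(income, peak_income=100.0, dbar=0.02):
    """Illustrative hump-shaped hazard rate (per year) as a function of income."""
    r = income / peak_income
    return dbar * r / (1 + r**2)  # peaks at dbar/2 when income == peak_income

def run(growth_rate, years=2000, y0=1.0, dt=0.1):
    """Integrate the hazard rate along an exponential growth path."""
    t = np.arange(0, years, dt)
    income = y0 * np.exp(growth_rate * t)
    rates = hazard(income)
    survival = np.exp(-rates.sum() * dt)  # P(no extinction over the whole path)
    return rates.max(), survival

for g in [0.01, 0.02]:  # 1% vs 2% annual growth
    peak, survival = run(g)
    print(f"growth {g:.0%}: peak hazard {peak:.3f}/yr, P(survive) {survival:.2f}")
```

In this toy version the peak hazard is the same in both cases (both paths pass through the same dangerous income levels, and faster growth does make the hazard higher *earlier*), but survival probability rises substantially with growth, because cumulative risk is inversely proportional to the growth rate here: society spends less time in the dangerous middle-income region.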

As an extreme case, suppose growth halted for 1000 years. It seems like in your model, the graph for hazard rate would be constant at some fixed level, accumulating extinction probability during that time. But my intuition says the hazard rate would first drop near zero and then stay constant, because there are no new dangerous technologies being invented. At the opposite extreme, suppose we suddenly get a huge boost in growth and effectively reach "the end of growth" (near period 1800 in Figure 7) in an instant. Your model seems to say that the graph would compress so much that we almost certainly never go extinct, but my intuition says we do experience a lot of risk for extinction. Is my interpretation of your model correct, and if so, could you explain why the height of the hazard rate graph does not increase?

Hmm yeah, this suggests that maybe the risk depends in part on the rate of change of consumption technologies - because if no new techs are being discovered, it seems like we might be safe from anthropogenic x-risk.

But even if you believe that the hazard rate would decay in this situation, maybe what's doing the work is that you're imagining we're still doing a lot of safety research and thinking about how to mitigate risks - so the consumption sector is not growing, but the safety sector continues to grow. In the existing model, the hazard rate can decay to zero in this case.

I guess I'm also not sure I share the intuition that the hazard rate would decay to zero. Sure, we don't have the technology right now to produce AGI that would constitute an existential risk, but what about e.g. climate change, nuclear war, biorisk, or narrow AI systems being used in really bad ways? It seems plausible to me that if we kept our current level of technology and production, we'd have a non-trivial chance each year of killing ourselves off.

What's doing the work for you? Do you think the probability of anthropogenic x-risk with our current tech is close to zero? Or do you think it's not, but that if growth stopped we'd keep working on safety (say, developing clean energy, improving relationships between the US and China, etc.) so that we'd eventually be safe?

What posts do you want someone to write?

Now done here. It's a ~10-page summary that someone with college-level math can understand (though I think you could read it, skip the math, and get the general idea).

If you value future people, why do you consider near term effects?

Ah yeah, that makes sense. I think they seemed distinct to me because one seems like 'buy some QALYs now before the singularity' and the other seems like 'make the singularity happen sooner' (obviously these are big caricatures). And the second one seems like it has a lot more value than the first, if you can do it (of course, I'm not saying you can). But yeah, they're the same in that both add value before a set time. I can imagine that post being really useful to send to people I talk to - looking forward to reading it.