
AISafetyIsNotLongtermist

313 karma · Joined Jul 2022

Comments (9)

Huh, I appreciate you actually putting numbers on this! I was surprised that the nuclear risk numbers are remotely competitive with natural causes (let alone significantly dominating over the next 20 years), and I take this as at least a mild downwards update on AI dominating all other risks (on a purely personal level). I probably had incorrect cached thoughts from people exclusively discussing extinction risk rather than catastrophic risk more broadly, but from a purely personal perspective this distinction matters much less.

EDIT: Added a caveat to the post accordingly

Hmm. I agree that these numbers are low confidence. But for the purpose of acting and forming conclusions from this, I'm not sure what you think is a better approach (beyond saying that more resources should be put into becoming more confident, which I broadly agree with). 

Do you think I can never make statements like "low confidence proposition X is more likely than high confidence proposition Y"? What would feel like a reasonable criterion for being able to say that kind of thing?

More generally, I'm not actually sure what you're trying to capture with error bounds - what does it actually mean to say that P(AI X-risk) is in [0.5%, 50%] rather than 5%? What is this a probability distribution over? I'm estimating a probability, not a quantity. I'd be open to the argument that the uncertainty comes from 'what might I think if I thought about this for much longer'.
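
One way to cash out that 'thought about it for longer' framing: if the error bounds describe a distribution $f(p)$ over what I'd estimate with much more investigation, then for a single yes/no event the decision-relevant number still collapses to a point estimate,

$$P(\text{AI X-risk}) = \int_0^1 p \, f(p) \, dp = \mathbb{E}[p],$$

so on this reading the interval mainly tells you how much the estimate might move with further work, rather than giving you a different number to act on today.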

I'll also note that the timeline numbers are a distribution over years, which already implicitly includes a bunch of uncertainty, plus some probability on AI never arriving. Though obviously it could include more. The figure for AI x-risk is a point estimate, which is much dodgier.

And I'll note again that the natural causes numbers are at best medium confidence, since they assume the status quo continues!

> would give you a value between 0.6% and 87%

Nitpick: I think you mean ~6%? (0.37/(0.37+5.3) ≈ 0.065.) Obviously this doesn't change your core point.
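
Spelling out the arithmetic, in case it's useful (taking 0.37 and 5.3 as the two quantities being compared, which is how I read the parent comment):

$$\frac{0.37}{0.37 + 5.3} = \frac{0.37}{5.67} \approx 0.065,$$

i.e. roughly 6.5%.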

> I'm worried that this will move people to be less playful and imaginative in their thinking, and make worse intellectual, project, or career decisions overall, compared to more abstract/less concrete considerations of "how can I, a comfortable and privileged person with most of my needs already met, do the most good."

Interesting, can you say more? I have the opposite intuition, though mine stems from the specific failure mode of AI Safety being seen as weird, speculative, abstract, and only affecting the long-term future - I think this puts it at a significant disadvantage compared to more visceral and immediate forms of doing good, and that this kind of post can help partially counter that bias.

I fairly strongly disagree with this take on two counts:

  1. The life expectancy numbers are not highly robust. They naively extrapolate the current rate of death in the UK out into the future (sketched after this list). This is a pretty dodgy methodology! I'm assuming that medical technology won't advance, that AI won't accelerate biotech research, that longevity research doesn't go anywhere, that we don't have disasters like a much worse pandemic or nuclear war, that there won't be new major public health hazards that disproportionately affect young people, that climate change won't substantially affect life expectancy in the rich world, that there won't be wars major enough to affect life expectancy in the UK, etc. The one thing we know won't happen in the future is the status quo.
    1. I agree that it's less dodgy than the AI numbers, but the difference is one of degree on a continuum, not an ontological divide between legit and non-legit numbers.
  2. Leaving that aside, I think it's extremely reasonable to compare high-confidence and low-confidence numbers so long as they're trying to measure the same thing. The key point is that low-confidence numbers aren't low confidence in any particular direction (if they were, we'd change to a different estimate). Maybe the AI x-risk numbers are way higher, maybe they're way lower. They're definitely noisier, but the numbers mean fundamentally the same thing and are directly comparable. And comparing numbers like this is part of the process of understanding the implications of your models of the future, even when those models are fairly messy and uncertain.
    1. Of course, it's totally reasonable to disagree with the models used for these questions and to think that, e.g., they have major systematic biases towards exaggerating AI probabilities. That should just give you different numbers to put into this model.
    2. As a concrete example, I'd like governments to be able to compare the risk a nuclear war poses to their citizens' lives against other, more mundane risks, and to figure out cost-effectiveness accordingly. Nuclear war has never happened in anything remotely comparable to today's geopolitical climate, so any model here will be inherently uncertain and speculative, but it seems pretty important to be able to answer questions like this regardless.
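
For concreteness, a sketch of the naive extrapolation in point 1: it's roughly the standard period life-table calculation, holding the current UK age-specific annual mortality rates $q_a$ fixed and compounding them forward. Under that status-quo assumption, for someone currently aged $A$:

$$P(\text{die within } n \text{ years}) = 1 - \prod_{a=A}^{A+n-1} (1 - q_a)$$

Every scenario in point 1 is a way the future $q_a$ could differ from today's rates, which is why I call the natural-causes numbers at best medium confidence.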

I share this concern, and this was my biggest hesitation to making this post. I'm open to the argument that this post was pretty net bad because of that.

If you're struggling with something like existential dread, I'll flag that the numbers in this post are actually fairly low in the grand scheme of the total risks to you over your life - 3.7% just isn't that high, and dying young just isn't that likely.

I'm sorry that the post made you uncomfortable, and appreciate you flagging this constructively. Responses in thread.

I think the probability of death would go significantly up with age, undercutting the effect of this.

One project I've been thinking about is making (or having someone else make) a medical infographic that takes existential risks seriously, and ranks them accurately as some of the highest probability causes of death (per year) for college-aged people. I'm worried about this seeming too preachy/weird to people who don't buy the estimates though.

I'd be excited to see this, though agree that it could come across as too weird, and wouldn't want to widely and publicly promote it.

If you do this, I recommend trying to use as reputable and objective a source as you can for the estimates.

Fair points!

> While promoting AI safety on the basis of wrong values may increase AI safety work, it may also increase the likelihood that AI will have wrong values (plausibly increasing the likelihood of quality risks), and shift the values in the EA community towards wrong values. It's very plausibly worth the risks, but these risks are worth considering.

I'm personally pretty unconvinced of this. I conceive of AI Safety work as "solve the problem of making AGI that doesn't kill everyone" much more than as "figure out humanity's coherent extrapolated volition and load it into a sovereign that creates a utopia". To the degree that we do explicitly load a value system into an AGI (which I'm skeptical of), I think that the process of creating this value system will be hard and messy and involve many stakeholders, and that EA may have outsized influence but is unlikely to be the deciding voice.