All of UwU's Comments + Replies

The fellowship will cover what we currently consider to be the most important sources of s-risk (TAI conflict, risks from malevolent actors).

Any reason CLR believes that to be the case specifically? For instance, it's argued on this page that botched alignment attempts/partially aligned AIs (near miss) & unforeseen instrumental drives of an unaligned AI are the 2 likeliest AGI-related s-risks, with malevolent actors (deliberately suffering-aligned AI) currently a lesser concern. I guess TAI conflict could fall under the second category, as an instru... (read more)

Anthony DiGiovanni · 6mo · 4
Thanks for asking — you can read more about these two sources of s-risk in Section 3.2 of our new intro to s-risks article. (We also discuss "near miss" there, but our current best guess is that such scenarios are significantly less likely than other s-risks of comparable scale.)

A bunch of scenarios are collected in the s-risk sub-wiki.

[comment deleted] · 1y · −3
UwU · 2y · 17

I agree nuclear winter risk is overblown and I'm glad to see more EAs discussing that. But I think you're also overrating the survivability of SSBNs, especially non-American ones. They are not a One Weird Trick, Just Add Water for unassailable second strike capability, with upkeep/maintenance only being one aspect of that. Geography plays a huge role in how useful they are, with the US deciding to base most of its warheads on SSBNs because they have the most favourable conditions for them (unrestricted access to and naval dominance of two oceans). In contr... (read more)

bean · 2y · 1
In retrospect, I should have been clearer in my claim about submarine invulnerability, which was mostly meant to apply to the sort of thing you could reliably do during an attempt to preemptively take out a nuclear arsenal. And yes, obviously more to the US than elsewhere. But note that the link you provide is to an SSN, not an SSBN, and MAD (magnetic anomaly detection) is not a new technology. The first deployment of it I'm aware of was to guard the Strait of Gibraltar in WWII, and if anything it's being phased out these days.
UwU · 2y · 32

Look into suffering-focused AI safety which I think is extremely important and neglected (and s-risks).

Mau · 2y · 8
More specifically, I think there's a good case to be made* that most of the expected disvalue of suffering risks comes from cooperation failures, so I'd especially encourage people who are interested in suffering risks and AI to look into cooperative AI and cooperation on AI. (These are areas mentioned in the paper you cite and in related writing.)

*Large-scale efforts to create disvalue seem like they would be much more harmful than smaller-scale or unintentional actions, especially as technology advances. And the most plausible reason I've heard for why such efforts might happen is that various actors might commit to creating disvalue under certain conditions, as a way to coerce other agents, and would then carry out these threats if the conditions come about. This would leave everyone worse off than they could have been, so it is a sort of cooperation failure. Sadism seems like less of a big deal in expectation, because many agents have incentives to engage in coercion, while relatively few agents are sadists.

(More closely related to my own interest in them, cooperation failures also seem like one of the main types of things that may prevent humanity from creating thriving futures, so this seems like an area that people with a wide range of perspectives on the value of the future can work together on :)
UwU · 2y · 28

I disagree with the claim that the overall accident risk is going down. While it's probably true early warning systems are getting more reliable (though the actual degree of this is really hard to gauge due to their complexity)[1], a third party (China) adopting launch on warning arguably raises the risk by at least 50%, if not more, due to initial kinks. Also, as many have pointed out, the emerging trilateral dynamic of three nuclear peers is unprecedented in history and less stable.

Also, what would count as an accidental nuclear war? I think e.g. the US laun... (read more)

Little-known detail about the Arkhipov incident. Unsure if true, but if so it sounds like he agreed to fire the torpedo and it all came down to the coincidence of the light getting wedged in the hatch making a few seconds' difference, something that may not have happened if the signals officer's motor neurons had fired just slightly differently.

Nathan_Barnard · 2y · 0
Wow, that's really interesting, I'll look more deeply into that. It's definitely not what I've read happened, but at this point I think it's probably worth me reading the primary sources rather than relying on books.
UwU · 2y · 13

I think the OP is advocating a prize for solving the whole problem, not specific subproblems, which is a novel and interesting idea. Kind of like the $1M Millennium Prize Problems (presumably we should offer far more).

If you offer a prize for the final thing instead of an intermediate one, people may also take more efficient paths to the goal than the one we're looking at. I see no downside to doing it; you don't lose any money unless someone actually presents a real solution.

UwU · 2y · 14

Hey Michael, sorry I didn't get around to commenting on this before you published haha. Long thought dump below:

I'm not sure if they count as "technological developments", but 2 of the largest things I see contributing to nuclear risk are development of ballistic missile defence (BMD) and proliferation of tactical nuclear weapons (TNWs).

The dangers from BMD are manifold. One is being the cause of a conventional conflict. E.g. as the US continues to develop its maritime ICBM intercept capability, it'll pose a major threat to foreign arsenals. If a significa... (read more)

UwU · 2y · 14

Just one thought: there are so many ways for a nuclear war to start accidentally or through miscalculations (without necessarily a conventional war) that it just seems so absurd to see estimates like 0.1%. A big part of it is even just the inscrutable failure rate of complex early warning systems composed of software, ground/space based sensors and communications infrastructure. False alarms are much likelier to be acted on during times of high tension as I pointed out. E.g., during that incident Yeltsin, despite observing a weather rocket with a similar f... (read more)

A big part of it is even just the inscrutable failure rate of complex early warning systems composed of software, ground/space based sensors and communications infrastructure

This list of nuclear close calls has 16 elements. Laplace's law of succession would give a close call a ~5.8% chance of resulting in a nuclear detonation. Again per Laplace's law, with 16 close calls in (2022−1953) years, this would imply a (16+1)/(2022−1953+2) = 24% chance of seeing a close call each year. Combining the two forecasts gives us 24% of 5.8%, which is 1.4%/year. But earlier warning syst... (read more)
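The back-of-the-envelope arithmetic above can be sketched with the rule of succession. This is a hypothetical check, not part of the original comment; the 16-close-call count and the 1953–2022 window are taken from the comment, and the comment's 5.8% presumably comes from a slightly different variant of the rule than the standard (s+1)/(n+2) form used here.

```python
# Laplace's rule of succession: with s "successes" observed in n trials,
# the probability the next trial is a success is (s + 1) / (n + 2).
def laplace(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

close_calls = 16        # entries on the list of nuclear close calls
years = 2022 - 1953     # 69 years of observation

# Chance of seeing a close call in a given year: (16 + 1) / (69 + 2), ~24%.
p_close_call = laplace(close_calls, years)

# Chance a given close call ends in a detonation: 0 detonations in 16
# close calls gives (0 + 1) / (16 + 2), ~5.6% (the comment quotes 5.8%).
p_detonation = laplace(0, close_calls)

# Combined annual risk, ~1.3%/year (the comment rounds to 1.4%).
p_annual = p_close_call * p_detonation
print(f"{p_close_call:.1%} * {p_detonation:.1%} = {p_annual:.2%}/year")
```

The point of the two-factor decomposition is that the per-year close-call rate and the per-close-call detonation rate are estimated from different evidence, so they can be updated independently (e.g. if early warning systems improve, only the second factor should fall).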

UwU · 2y · 12

One point about the Pentagon figure: they said "700 warheads by 2027 and at least 1000 by 2030"[1], meaning most likely over 1k. I actually made a bet that they will revise the estimate up again in this or next year's report, e.g. this year we may see "900-1000 by 2027 and 1200-1500 by 2030". The 2020 report to Congress said the stockpile was only expected to double over the decade, to the ~400s, so by increasing estimates more gradually they're probably mitigating a loss of face & hard questions over the massive intelligence failure. Signs point t... (read more)

Ryan Beck · 2y · 1
I'm not well-versed in this area, but reading through the Chinese nuclear notebook from November 2021, they seem kind of skeptical of claims like this and point out that China could also be intending the silos to be a "shell game". Quoting from the notebook: Would you disagree with that assessment?

I agree that the trade war issue is probably low impact, but I focused on it because it has few downsides and potential upsides for nuclear risk.

What ways to reduce China-US nuclear risk do you suggest? From what I've seen so far (which is admittedly very little), it seems like there are very few feasible options to reduce nuclear risk with China, and most available options involve a lot of unknowns with regard to implementation and effectiveness and potentially have significant downsides.
UwU · 2y · 34

https://reducing-suffering.org/near-miss/

Just gonna boost this excellent piece by Tomasik. I think partial alignment/near-misses causing s-risk is potentially an enormous concern. This is more true the shorter timelines are and thus the more likely people are to try using "hail mary" risky alignment techniques. Also more true for less principled/Agent Foundations-type alignment directions.

mic · 2y · 2
Can someone provide a more realistic example of partial alignment causing s-risk than SignFlip or MisconfiguredMinds? I don't see either of these as something that you'd be reasonably likely to get by say, only doing 95% of the alignment research necessary rather than 110%.
UwU · 2y · 12

See my comments here and here for a bit of analysis on targeting/risks of various locations.

Btw I want to add that it may be even more prudent to evacuate population centers preemptively than some think, as some have suggested countervalue targets are unlikely to be hit at the very start of a nuclear war/in a first strike. That's not entirely true since there are many ways cities would be hit with no warning. If Russia or China launches on warning in response to a false alarm, they would be interpreting that act as a (retaliatory) second strike and thus ma... (read more)

UwU · 2y · 30

Also, not sure where best to post this, but here's a nice project on nuclear targets in the US (+ article). I definitely wouldn't take it at face value, but it sheds some light on which places are potential nuclear targets at least, non-exhaustively.

NunoSempere · 2y · 2
Thanks!
UwU · 2y · 25

You mentioned the successful SM-3 intercept test in 2020. While it's true it managed to intercept an "ICBM-representative target", and can be based on ships anywhere they sail (thus posing a major potential threat to the Chinese/NK deterrent in the future), I don't know if I (or the US military) would call it a meaningful operational capability yet. For one, we don't even know its success rate. The more mature (and only other) system with ICBM intercept capability, the Ground-Based Interceptor, has a success rate of barely 50%.[1] I'm not sure what you meant by "sending i... (read more)

UwU · 2y · 30

Also, not sure where best to post this, but here's a nice project on nuclear targets in the US (+ article). I definitely wouldn't take it at face value, but it sheds some light on which places are potential nuclear targets at least, non-exhaustively.

Misha_Yagudin · 2y · 4
Thanks that is useful and interesting! (re: edit — I agree but maybe at 90% given some uncertainty about readiness.)
NunoSempere · 2y · 5
tl;dr: The 60% was because I didn't really know much about the ABM capabilities early on and gave it around a 50-50. I updated upwards as we researched this more, but not by that much. This doesn't end up affecting the aggregate much because other forecasters (correctly, as it now seems) disagreed with me on this.

Hey, thanks for the thoughtful comment; it looks like you've looked into ABM much more than I/(we?) have. The 60% estimate was mine. I think I considered London being targeted but not being hit as a possibility early on, but we didn't have a section on it. By the time we had looked into ICBMs more, it got incorporated into "London is hit given an escalation" and then "Conditional on Russia/NATO nuclear exchange killing at least one person, London is hit with a nuclear weapon".

But my probability for "Conditional on Russia/NATO nuclear exchange killing at least one person, London is hit with a nuclear weapon" is the lowest in the group, and I think this was in fact because I was thinking that they could be intercepted. I think I updated a bit when reading more about it and when other forecasters pushed against that, but not that much. Concretely, I was at ~5% for that question, and your comment maybe moves me to ~8-10%. I was the lowest in the aggregate for that subsection, so the aggregate of 24 micromorts doesn't include it, and so doesn't change. Or, maybe your comment does shift other forecasters by ~20%, and so the aggregate moves from 24 to 30 micromorts.

Overall I'm left wishing I had modeled and updated the "launched but intercepted" probability directly throughout. Thanks again for your comment.
UwU · 2y · 12

I don't buy the stuff about expecting a famine that kills billions at all. Especially since she didn't seem to have dug into the actual criticisms of the nuclear winter theory in her post sequence, e.g. the independent components of the theory. I think it's very likely (>90%) there won't be any change in temperature at all, which will be the case if any of those components fail. And as I understand it, she has since updated towards being less bullish on it since those posts, and the people who succeeded her at RP don't think nuclear winter is that likely either.

UwU · 2y · 14

Imo, evacuating to another country when a nuclear war looks literally imminent may not even be a good move because you'd have to enter a large city with an international airport with transcontinental flights, and the increased risk while you're reentering the city & waiting for your flight is probably greater than the survival benefits from arriving at your SH destination, not to mention flights would probably be booked out if things really looked that dire. A better strategy would be to evacuate whenever the risk looked heightened, but then you'd run ... (read more)

Fin · 2y · 2
Yes, if imminent literally means missiles are inbound it is too late, but if you've decided there is a high probability of nuclear attack in the next couple of weeks to months, evacuating could still be a good strategy. For Ben Landau-Taylor's signup list, he certainly means evacuating well before missiles are launched. Certainly small towns are not at much risk of being hit directly. If you were concerned about an all-out war between the US and Russia though, evacuating to somewhere in the southern hemisphere could make a lot of sense. Yields of nuclear weapons can vary a lot. Like you said, no one knows exactly where would be targeted, but if you're near a large city that might be hit, considering how to shelter and evacuate after an attack could still be quite useful. I agree attempting to evacuate a city as missiles were being launched would not result in good outcomes.
SeanEngelhart · 2y · 1
I'm curious about your thoughts on this: hypothetically, if I were to relocate now, do you see the duration of my stay in the lower risk area as being indefinitely long? It seems unclear to me what exact signals--other than pretty obvious ones like the war ending, which I'd guess are much less likely to happen soon--would be clear green lights to move back to my original location. I'm wondering because I'm trying to assess feasibility. For my situation, it feels like the longer I'm away, the higher the cost (not specifically monetary) of the relocation.
UwU · 2y · 19

Nuclear winter is a very unlikely, highly conjunctive theory which requires many independent things to ALL happen perfectly, each of which is already individually suspect. E.g. that cities will all firestorm after being hit by airburst detonations (which itself relies on assumptions like adequate fuel loading per square meter, collapsed structures from the air blast not starving the fires of oxygen, etc.), that this will burn in a way producing lots of black carbon, that this carbon will be nearly all lofted into the stratosphere, that this will block a high percentage... (read more)

mic · 2y · 4
What do you think about the conclusions of "How bad would nuclear winter caused by a US-Russia nuclear exchange be?" on the EA Forum?
Henry Askin · 2y · 1
>I'm not sure if the bolthole idea is referring to an escape for EAs in particular or relocating as many people as possible in general

Perhaps "bolthole" is not quite the term I'm looking for, at least in the sense of primarily being about relocating individuals. Rather, I'm using it as a catch-all term for all "post-apocalyptic" preparations. A seed bank and/or data bank located in New Zealand would be good examples.
UwU · 2y · 40

I'm not overly concerned by the news from this morning. In fact I expected them to raise nuclear force readiness prior to or simultaneously with commencing the invasion, not now; that's expected when going from normal peacetime readiness into a time of conflict/high tension. Going in, I put about a 5% chance on this escalating to a nuclear war, and it's not much different now, certainly not above 10%. (For context, my odds of escalation to full countervalue exchange in a US intervention in a Taiwan reunification campaign would be about 75%.) Virtually ... (read more)

Misha_Yagudin · 2y · 6
That seems way too high to me: are you willing to bet at 5%? (For epistemic purposes only. I hope no one reading will be offended.) If so confirm here and PM me on Forum to figure out the details.
axioman · 2y · 1
Thank you! 5% does sound very alarming to me, and is definitely a lot higher than I would have said at the beginning of the crisis (without having thought about it much, then).
UwU · 2y · 11

and they value Chinese lives more than non-Chinese.

Right, and the alternative here (US leaders) don't do that?

UwU · 2y · 13

For AI safety, maybe Redwood has the most room for funding? They seem to be the most interested in growth (correct me if I'm wrong). And even if the existing players don't have more room, we need to think of other ways to scale the field up through funding, as it's clearly still too small to compete in the race against the titanic field of AI capabilities.

Agree longevity needs to be funded more as well, though lots of aging billionaires like Bezos seem to be throwing tons of money at it these days, so EA money may be much less useful/uniquely needed there than in e.g. AI alignment.