epistemic status: I am fairly confident that the overall point is underrated right now, but am writing quickly and think it's reasonably likely the comments will identify a factual error somewhere in the post.
The risk of a serious nuclear incident seems unusually elevated right now, as a result of Russia badly losing the war in Ukraine. Various markets put the risk at about 5-10%, and various forecasters estimate something similar. The general consensus is that if Russia used a nuclear weapon, it would probably be a tactical nuclear weapon deployed on the battlefield in Ukraine, probably in a way with a small number of direct casualties but profoundly destabilizing effects.
A lot of effective altruists have made plans to leave major cities if Russia uses a nuclear weapon, at least until it becomes clear whether the situation is destabilizing. If that happens we'll be in a scary situation, but based on how we as a community collectively reacted to Covid, I predict an overreaction: that is, I predict that if there's a nuclear use in Ukraine, EAs will incur more costs avoiding the risk of dying in a nuclear war than the actual expected cost of dying in a nuclear war, more costs than necessary to reduce that risk, and more costs than we'll endorse in hindsight.
With respect to Covid, I am pretty sure the EA community and related communities incurred more costs avoiding the risk of dying of Covid than was warranted. In my own social circles, I don't know anyone who died of Covid, but I do know of a healthy person in their 20s or 30s who died because they were too scared of Covid to seek medical attention. A lot of people took quite large hits to their productivity and happiness.
This is especially true for people doing EA work they consider directly important: being 10% less impactful at an EA direct work job has a cost measured in many human or animal or future-digital-mind lives, and I think few people explicitly calculated how that cost measured up against the benefit of reduced risk of Covid.
If Russia uses a nuclear weapon in Ukraine, here is what I expect to happen: a lot of people will be terrified (correctly assessing this as a significant change in the equilibrium around nuclear weapon use, one which makes a further nuclear exchange much more likely). Many people will flee major cities in the US and Europe. They will spend a lot of money, take a large productivity hit from being somewhere with worse living conditions and worse internet, and spend a ton of their time obsessively monitoring the nuclear situation. A bunch of very talented ops people will work incredibly hard to get reliable fast internet in remote parts of Northern California or northern Britain. There won't be much that EAs not already in nuclear policy and national security can do, but there'll be a lot of discussion and a lot of people trying to get up to speed on the situation, feeling a constant need to know what's going on. The stuff we do is important, and much less of it will get done. It will take a long time for it to become obvious whether the situation is stable, but eventually people will mostly go back to cities (possibly leaving again if there are further destabilizing events).
The recent Samotsvety forecast estimates that a person staying in London will lose 3-100 hours to nuclear risk in expectation (edit: this goes up by a factor of 6 in the case of actual tactical nuke use in Ukraine). I think it is really easy for that person to waste more than 3-100 hours by being panicked, and possible to waste more than 20-600 hours on extreme response measures. And those are the life-hour costs of never fleeing; you also have the option of fleeing at a later point if there are further worrying developments, and it's probably a mistake to model only 'flee as soon as there's tactical nuke use' against 'stay no matter what', and not against 'flee slightly later'.
Incurring some costs is quite reasonable. I think that during the Cuban Missile Crisis we were quite close to nuclear war, and reasonable people at the time probably would have (in addition to trying to prevent such a war) tried to leave major cities. I think it might make sense for people whose work is already remote, and who have a non-major-city place to stay, to leave. I think that the fact EAs are weird, take our beliefs more seriously than most people, and take concrete actions based on expected-value arguments is a strength. Certainly at some threshold of risk I'll leave with my family. But my overall expectation is that we'll end up causing more disruptions than are justified, at substantial expense to other work which is also about securing a good human future.
Some specific bad tradeoffs that seem easy to avoid:
- going to a very remote area dramatically reduces the risk of being hit in a nuclear exchange, but makes it incredibly inconvenient to work normally, collaborate with others, etc. DC, New York, and San Francisco are among the highest-likelihood-of-being-hit-in-a-full-nuclear-exchange cities in the US, and London is obviously the highest in the UK, but if you go from those cities to nearby populous cities you probably get most of the benefit and incur lower costs in productivity and elsewhere. In my opinion, Americans who don't live in those cities and don't live next to a base from which we launch our own nukes shouldn't bother leaving (I don't know enough about continental Europe to have opinions there). For San Franciscans, going to Santa Rosa is probably nearly as good as going to the middle of the mountains, or going to Eugene, and it's much less costly.
- we can avoid socially pressuring people towards acting more strongly than they endorse: if you don't care about this, I think you should feel licensed to not care about it and go about your life normally.
- if people do want to leave, I think they should keep in mind the productivity costs of (1) bad internet and (2) isolation, and strongly prioritize going somewhere non-isolated with good internet access.
- keep in mind that while expected value sometimes implies reacting strongly to things with only a small chance of being really bad, you have to actually do the expected value calculations -- and your wellbeing, productivity, and impact on the world should be an input.
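The "actually do the expected value calculations" point can be sketched concretely. Every number below is a hypothetical placeholder (only the 3-100 hour range comes from the Samotsvety forecast cited above); the point is the shape of the comparison, not the particular inputs.

```python
# Expected life-hours lost to nuclear risk by staying put. Using 30
# hours as an illustrative point inside the cited 3-100 hour range.
expected_hours_lost_staying = 30

# Hours sacrificed by relocating (all hypothetical): days away times
# the daily productivity hit from bad internet and isolation, plus
# one-off moving logistics.
days_away = 60
daily_productivity_hit = 2   # hours/day lost (hypothetical)
relocation_overhead = 40     # hours of moving logistics (hypothetical)
hours_cost_of_fleeing = days_away * daily_productivity_hit + relocation_overhead

print(f"expected hours lost by staying: {expected_hours_lost_staying}")
print(f"hours spent on fleeing:         {hours_cost_of_fleeing}")
# With these particular inputs, fleeing costs 160 hours against 30
# expected hours of loss: the response outweighs the risk. Different
# inputs (higher strike probability, cheaper relocation) can flip it.
```

The conclusion is entirely input-dependent, which is exactly the argument: the calculation has to be done with your own numbers, with wellbeing, productivity, and impact on the world included as inputs, rather than assumed.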
[[I acknowledge this is an out-of-scale reply; I might make it a post one day soon. Thanks for reading]]
Thanks. I understand what you mean about EV. I just think problems sneak through in practice: people tend to weight things pretty badly and, especially in the middle of a mass-televised news cycle, to be desperate for some control and to hope things will work that won't. So I claim "reasonableness" or "certainty" for at least one of the variables is important; otherwise we are going to have a lot of Pascal's muggings.
To be concrete about my model too: even in March 2020, I think plenty of weird acts were the wrong call. It's hard to explain what I meant when I said "check your intuition", but I basically mean: reason it out and extrapolate from what you know, and also heed red flags and weird vibes (like the community's behavior starting to pattern-match to mental illness and groupthink).
Anyway, we should expect that many (even most) interventions suggested at the early phase of a problem are somehow out of step with reality. The solution is not to do all of the ones your peers are doing just in case, as you seem to suggest, but to actively question them and sort out the worst. You said we didn't know masking would be effective but we did it anyway based on EV... But that isn't true. We did know masks were effective. So comparing EA masking to extreme-looking, always-speculative interventions doesn't follow; they are at different ends of the spectrum.
At risk of sounding harsh, EA is about using evidence and reason. I hope EAs don't shrug mistakes off with "we needed more data". We didn't always need more data. We needed the community to reason for itself, as it did about masks. To ask, "Does that make sense to me?" Then, to do only the "reasonable" things. I guess I wasn't clear, but that's what I meant when I said "be very sure the act is effective" and "act reasonably". For acts without enough hard data, EAs could do better to check their intuition, their model of the world, and expected human behavior, and to be more skeptical, even of copying other EAs.
Reasonableness and paranoia are by definition in conflict, so I'm disturbed that you essentially say "paranoia" was the "right call" "(unless one was psychic)". We never before dismissed trying to make predictions as "being psychic". Could we not have done better in March? Do you look back at the early extreme reactions and find no reason to be skeptical of them? Other people were skeptical, and then correct. The truth of what was useful wasn't in a time-activated lockbox. Even in March 2020, it existed in the world and was, if not observable, extrapolatable. I'm reminded of this EA short story:
"Impressive," [the mirror] says with a voice cool and smooth.... “I didn’t think you could succeed with raw power alone. Some might say you didn’t.”...
“So? I killed the Broken King. I stopped the summoning of the Old Horrors,” [the hero] challenge[s]. “What more could I have done?”
“Now you are asking the right question,” the mirror laughs. “What more indeed? You must train yourself until the answer comes naturally as the spellforce in your veins."
[Later:]
"Saved them? What could I do?" the hero frowns, and then the memories still settling in their head cohere. The mirror’s solution replays in vivid clarity...
"You should have taken the time to work it out," the mirror chastises. "You were capable of it."
Wrong is wrong. Something went wrong. It's okay to be wrong. Attempting was good. Even failing is okay. And now let's clarify that we should not repeat a tactic that went awry. We weren't skeptical and selective enough in the beginning, and it took some EAs a year longer than you'd expect to get their heads screwed on straight again about the whole thing. That's painful for everyone. But maybe it's human nature. I'm updating that it is. If you start with extreme, paranoid behavior and your community is encouraging you to be paranoid rather than question the paranoia, I doubt most people find it easy to correct later.
We knew that surgeons and other medical personnel wear masks for a reason: their doing so for over a century has been expensive, and hospital interests/board members wouldn't have kept it going if it weren't doing something worth the effort. We knew that doctors and nurses were still wearing masks during Covid; in other words, the Covid pandemic did not just randomly coincide with a global realization that masks had always been pointless. We knew that other countries' citizens were wearing masks. We knew that Covid travelled through our breathing apparatus, which opens exactly where the mask sits. Looking at the whole system, it's hard to be much more sure than that. At some point you have to call your EV calculus what it is: knowing something.