All of Jeffrey Ladish's Comments + Replies

@Daniel_Eth asked me why I chose 1:1 offsets. The answer is that I did not have a principled reason for doing so, and do not think there's anything special about 1:1 offsets except that they're a decent Schelling point. I think any offsets are better than no offsets here. I don't feel like BOTECs of harm caused as a way to calculate offsets are likely to be particularly useful here, but I'd be interested in arguments to this effect if people had them.

Really appreciate you! It's felt stressful at times just being someone in the community, and it's hard to imagine how stressful it would feel to be in your shoes. Really appreciate your hard work, and I think the EA movement is significantly improved by your work maintaining, improving, and moderating the forum, and by all the mostly-unseen-but-important work mitigating conflicts & potential harm in the community.

I think it's worth noting that I'd expect you to gain a significant relative advantage if you got out of cities before other people, such that acting later would be a lot less effective at furthering your survival & rebuilding goals.

I expect the bulk of the risk of an all-out nuclear war to be concentrated in the couple of weeks after the first nuclear use. If I'm right, then the way to avoid the failure mode you're identifying is to return after a few weeks if no further nuclear weapons have been used, or something similar.

1
Kelsey Piper
2y
Hmm, what mechanism are you imagining for advantage from getting out of cities before other people? You could have already booked an airbnb/rented a house/etc before the rush, but that's an argument for booking the airbnb/renting the house, not for living in it. 

I think the problem is the vagueness of the kind of commitment the GWWC pledge represents. If it's an ironclad commitment, people should lose a lot of trust in you for breaking it. If it's a "best of intentions" type of commitment, people should only lose a modest amount of trust in you. I think the difference matters!

6
Henry Howard
2y
And the GWWC pledge seems to fit at a nice balance point between those two, where the cost is not so off-putting that no one takes the pledge and not so non-committal that it's meaningless.

I super agree it's important not to conflate "do you keep actually-thoughtful promises you think people expected you to interpret as real commitments" and "do you take all superficially promise-like things as serious promises"! And while I generally want people to think harder about what they're asking for wrt commitments, I don't think going overboard on strict-promise interpretations is good. Good promises rest on a shared understanding between both parties. I think a big part of building trust with people is figuring out a good shared language ... (read more)

I think it will require us to reshape / redesign most ecosystems, and probably pretty large parts of many or most animals. This seems difficult but well within the bounds of a superintelligence's capabilities. I think that within at most a few decades of greater-than-human AGI we'll have superintelligence, so in the good future I think we can solve this problem.

I don't think an ordinary small/medium tech company can succeed at this. I think it's possible with significant (extraordinary) effort, but that sort of remains to be seen.

As I said in another thread:

>> I think it's an open question right now. I expect it's possible with the right resources and environment, but I might be wrong. I think it's worth treating as an untested hypothesis (that we can secure X kind of system for Y application of resources), and worth trying to get more information to test that hypothesis. If AGI development is impossible to secu... (read more)

I agree that a lot of the research today by leading labs is being published. I think the norms are slowly changing, at least for some labs. Deciding not to (initially) release the model weights of GPT-2 was a big change in norms iirc, and I think the trend towards being cautious with large language models has continued. I expect that as these systems get more powerful, and the ways they can be misused get more obvious, norms will naturally shift towards less open publishing. That being said, I'm not super happy with where we're at now, and I think a lot o... (read more)

I think it's an open question right now. I expect it's possible with the right resources and environment, but I might be wrong. I think it's worth treating as an untested hypothesis (that we can secure X kind of system for Y application of resources), and worth trying to get more information to test that hypothesis. If AGI development is impossible to secure, that cuts off a lot of potential alignment strategies. So it seems really worth trying to find out if it's possible.

I expect most people to think that either AMF or MIRI is much more likely than the other to do good. So from most agents' perspectives, the unilateral defection is only better if their chosen org wins. If someone has more of a portfolio approach that weights longtermist and global poverty efforts similarly, then your point holds. I expect that's a minority position though.

I see you define it a few paragraphs down, but having it at the top would be helpful, I think.

2
Sanjay
3y
I have now expanded the acronym when it's used in the first sentence.

Yeah, I would agree with that! I think radiological weapons are some of the most relevant nuclear capabilities / risks to consider from a long-term perspective, due to the risk that they could be developed in the future.

The part I added was:

"By a full-scale war, I mean a nuclear exchange between major world powers, such as the US, Russia, and China, using the complete arsenals of each country. The total number of warheads today (14,000) is significantly smaller than during the height of the cold war (70,000). While extinction from nuclear war is unlikely today, it may become more likely if significantly more warheads are deployed or if designs of weapons change significantly."

I also think indirect extinction from nuclear war is unlikely, but I would like to address this m... (read more)

I mean that the amount required to cover every part of the Earth's surface would serve no military purpose. Or rather, it might enhance one's deterrent a little bit, but it would
1) kill all of one's own people, which is the opposite of a defense objective
2) not be a very cost effective way to improve one's deterrent. In nearly all cases it would make more sense to expand second strike capabilities by adding more submarines, mobile missile launchers, or other stealth second strike weapons.

Which isn't to say this couldn't happen! Military research team... (read more)

2
MichaelA
3y
Yeah, that all makes sense to me.  But that still seems like it'd be consistent with thinking that quite a large number of radiological weapons would be developed. E.g., enough to kill 90% of the population of the US, but not the entire world's population. This would of course not directly pose an extinction risk by itself, but seems like it could still be significant from a longtermist perspective when combined with other things (e.g., a large nuclear winter, or a view in which that level of death from conflict could be enough to cause negative trajectory changes).  Would you agree with that? Or do you think there are also separate reasons to think it's very unlikely that even that many radiological weapons would be developed, or that they wouldn't substantially increase how much longtermists should worry about nuclear war? (I'm asking for my own understanding, not really to make a point; I don't have a pre-existing stance on these questions.)


This may be in the Brookings estimate, which I haven't read yet, but I wonder how much cost disease + reduction in nuclear forces have affected the cost per warhead / missile. My understanding is that many military weapon systems get much more expensive over time, for reasons I don't understand well.

Warheads could be altered to increase the duration of radiation effects from fallout, but this would also reduce their yield, and would represent a pretty large change in strategy. We've gone 70 years without such weapons, which the recent Russian submersibl... (read more)

I think I gave the impression that I'm making a more expansive claim than I actually mean to make, and will edit the post to clarify this. The main reason I wanted to write this post is that a lot of people, including a number in the EA community, start with the conception that a nuclear war is relatively likely to kill everyone, either for nebulous reasons or because of nuclear winter specifically. I know most people who've examined it know this is wrong, but I wanted that information to be laid out pretty clearly, so someone could get a summar... (read more)

4
MichaelA
3y
Have you made these edits yet, or is this still on the to-do list? Having just read the post, I strongly agree with Max's assessment, and still think readers could very easily round this post's claims off to "Nuclear war is very unlikely to be a big deal for longtermists". The key changes that I'd see as valuable would be:

* changing the title (maybe to something like "Nuclear war is unlikely to directly cause human extinction")
* explicitly saying something in the introductory part about how the possibilities of nuclear war causing indirect extinction or other existential catastrophes/trajectory changes are beyond the scope of this post
* (There may of course also be other changes that would accomplish similar results)

I also do think that this post contains quite valuable info. And I'd agree that there are some people, including in the EA community, who seem much too confident that nuclear war would directly cause extinction (though, like Max, I'm not aware of anyone who meets that description and has looked into the topic much). So if this post had had roughly those tweaks / when you make roughly those tweaks, I'd think it'd be quite valuable. (Unfortunately, in its present form, I worry that the post might create more confusion than it resolves.)

I'd also be excited to see the sort of future work you describe on compounding risks and recovery from collapse! I think those topics are plausibly important and sorely under-explored.
8
Max_Daniel
3y
This agrees with my impression, and I do think it's valuable to correct this misconception. (Sorry, I think it would have been better and clearer if I had said this in my first comment.) This is why I favor work with somewhat changed messaging/emphasis over no work.

I'm not sure we disagree. My current best guess is that most plausible kinds of civilizational collapse wouldn't be an existential risk, including collapse caused by nuclear war. (For basically the reasons you mention.) However, I feel way less confident about this than about the claim that nuclear war wouldn't immediately kill everyone. In any case, my point was not that I in fact think this is likely, but just that it's sufficiently non-obvious that it would be costly if people walked away with the impression that it's definitely not a problem.

This sounds like a very valuable topic, and I'm excited to see more work on it. FWIW, my guess is that you're already planning to do this, but I think it could be valuable to carefully consider information hazards before publishing on this [both because of messaging issues similar to the one we discussed here and potentially on the substance, e.g. unclear if it'd be good to describe in detail "here is how this combination of different hazards could kill everyone"]. So I think e.g. asking a bunch of people what they think prior to publication could be good. (I'd be happy to review a post prior to publication, though I'm not sure if I'm particularly qualified.)

Some quick answers to your questions based on my current beliefs:

  • Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?

I think the answer in the short term is no, if "completely collapses" means something like "is unable to get back to at least 1950s-level technology in 500 years". I think there are a number of things that could reduce humanity's "technological carrying capacity". I'm currently working on ex... (read more)

7
Howie_Lempel
4y
You say no to "Is there a high chance that human population completely collapses as a result of less than 90% of the population being wiped out in a global catastrophe?" and say "2) Most of these collapse scenarios would be temporary, with complete recovery likely on the scale of decades to a couple hundred years." I feel like I'd much better understand what you mean if you were up for giving some probabilities here even if there's a range or they're imprecise or unstable. There's a really big range within "likely" and I'd like some sense of where you are on that range.

I want to give a brief update on this topic. I spent a couple months researching civilizational collapse scenarios and came to some tentative conclusions. At some point I may write a longer post on this, but I think some of my other upcoming posts will address some of my reasoning here.

My conclusion after investigating potential collapse scenarios:

1) There are a number of plausible (>1% probability) scenarios in the next hundred years that would result in a "civilizational collapse", where an unprecedented number of people die and key technolo... (read more)

3
Josh Jacobson
3y
Are you saying here that you believe the scenarios add up to a greater than 1% probability of collapse in the next hundred years, or that you believe there are multiple scenarios that each have greater than 1% probability?

I do know of a project here that is pretty promising, related to improving secure communication between nuclear weapons states. If you know people with significant expertise who might be interested, PM me.

This seems approximately right. I have some questions around how competitive pressures relate to common-good pressures. It's sometimes the case that they are aligned (e.g. in many markets).

Also, there may be a landscape of coalitions (which are formed via competitive pressures), and some of these may be more aligned with the common good and some may be less. And their alignment with the public good may be orthogonal to their competitiveness / fitness.

It would be weird if it were completely orthogonal, but I would expect it to naturally be somewhat orthogonal.

1
JustinShovelain
4y
I agree with your thoughts. Competitiveness isn't necessarily fully orthogonal to common good pressures but there generally is a large component that is, especially in tough cases. If they are not orthogonal then they may reach some sort of equilibrium that does maximize competitiveness without decreasing common good to zero. However, in a higher dimensional version of this it becomes more likely that they are mostly orthogonal (apriori, more things are orthogonal in higher dimensional spaces) and if what is competitive can sorta change with time walking through dimensions (for instance moving in dimension 4 then 7 then 2 then...) and iteratively shifting (this is hard to express and still a bit vague in my mind) then competitiveness and common good may become more orthogonal with time. The Moloch and Pareto optimal frontier idea is probably extendable to deal with frontier movement, dealing with non-orthogonal dimensions, deconfounding dimensions, expanding or restricting dimensionality, and allowing transformations to iteratively "leak" into additional dimensions and change the degree of "orthogonality."
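As a rough illustration of the "more things are orthogonal in higher-dimensional spaces" intuition, here's a minimal sketch (assuming independently sampled random directions, which is a simplification of the competitiveness/common-good picture above): the average |cosine similarity| between two random directions in d dimensions shrinks roughly like √(2/(πd)), so independently chosen pressures tend to look nearly orthogonal once d is large.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_abs_cosine(dim, n_pairs=5000):
    """Average |cosine similarity| between pairs of random directions in R^dim."""
    a = rng.standard_normal((n_pairs, dim))
    b = rng.standard_normal((n_pairs, dim))
    cos = (a * b).sum(axis=1) / (np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1))
    return np.abs(cos).mean()

for d in (2, 10, 100, 1000):
    print(f"d={d:5d}  mean |cos| ~ {mean_abs_cosine(d):.3f}")
# The mean shrinks roughly like sqrt(2 / (pi * d)): independently chosen
# directions are close to orthogonal when the dimension is large.
```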

An additional point is that "relevant roles in government" should probably include contracting work as well. So it's possible to go work for Raytheon, get a security clearance, and do cybersecurity work for the government (and that pays significantly better!).

I think working at a top security company could be a way to gain a lot of otherwise hard to get experience. Trail of bits, NCC Group, FireEye are a few that come to mind.

This all sounds right to me, though I think some people have different views, and I'm hardly an expert. Speaking for myself at least, the things you point to are roughly why I wanted the "maybe" in front of "relevant roles in government." Though one added benefit of doing security in government is that, at least if you get a strong security clearance, you might learn helpful classified things about e.g. repelling state-originating APTs.

Our current best guess is that people who are interested should consider seeking security training in a top team in industry, such as by working on security at Google or another major tech company, or maybe in relevant roles in government (such as in the NSA or GCHQ). Some large security companies and government entities offer graduate training for people with a technical background. However, note that people we’ve discussed this with have had differing views on this topic.

This is a big area of uncertainty for me. I agree that Google & other top compa... (read more)


One potential area of biorisk + infosec work would be in improving the biotech industry's ability to secure synthesis & lab automation technology against use in creating dangerous pathogens / organisms.

Such misuse could happen via circumventing existing controls (e.g. ordering a virus whose sequence is on a banned-sequence list), or by hijacking synthesis equipment itself. So protecting this type of infrastructure may be super important. I could see this being a more policy-oriented role, but one that would require infosec skills.
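To make "existing controls" a bit more concrete, here's a minimal, purely illustrative sketch of what a naive sequence-screening check might look like (the fragment list and the exact-substring rule are assumptions for illustration only; real screening systems compare orders against curated databases with similarity search and are considerably more sophisticated):

```python
# Purely illustrative sketch of a naive synthesis-order screening control.
# The fragments below are made-up placeholders, not real sequences of concern,
# and exact substring matching is a simplification of how real screening works.

BANNED_FRAGMENTS = {
    "ATGCGTACGTTAGC",
    "GGCCTTAAGGCCTA",
}

def order_is_flagged(order_sequence: str) -> bool:
    """Flag an order if it contains any fragment from the banned-sequence list."""
    seq = order_sequence.upper()
    return any(fragment in seq for fragment in BANNED_FRAGMENTS)

print(order_is_flagged("ttATGCGTACGTTAGCcc"))  # True: contains a listed fragment
print(order_is_flagged("AAAATTTT"))             # False: nothing on the list
```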

I expect this work to be valuable ... (read more)