All of Denkenberger's Comments + Replies

I think that saving lives in a catastrophe could have more flow-through effects, such as preventing collapse of civilization (from which we may not recover), reducing the likelihood of global totalitarianism, and reducing the trauma of the catastrophe, perhaps resulting in better values ending up in AGI.

2
Vasco Grilo
4d
Thanks for the comment, David! I agree all those effects could be relevant. Accordingly, I assume that saving a life in catastrophes (periods over which there is a large reduction in population) is more valuable than saving a life in normal times (periods over which there is a minor increase in population). However, it looks like the probability of large population losses is sufficiently low to offset this, such that saving lives in normal time is more valuable in expectation.

I think the main reason that EA focuses relatively little effort on climate change is that so much money is going to it from outside of EA. So in order to be cost effective, you have to find very leveraged interventions, such as targeting policy, or addressing extreme versions of climate change, particularly resilience, e.g. ALLFED (disclosure, I'm a co-founder).

I have been recently asking around whether someone has compiled how much money is going into different ways of mitigating GCBRs, so this is quite relevant! Do you have estimates of the current EA (or otherwise) spending in these or similar buckets?

  1. Prevention: AI misuse, DNA synthesis screening, etc
  2. Suppression: Pathogen-agnostic early warning, planning for rapid response lockdowns, etc
  3. Containment: UV systems, P4E stockpiling, plans for keeping vital workers onsite, backup plans for providing food, energy and water non-industrially with low human contact
... (read more)
1
Conrad K.
16d
Yeah, great question! Lots of these categories were things I thought about, but I ultimately had difficulties getting good estimates, so I don't have good answers here. But I can say a little bit more about what my impressions were for each.

1. AI misuse is tough because I think lots of the work here is bucketed (usually implicitly) into AI safety spending, which I wasn't looking at. Although I will say I struggled to find work at least explicitly focused on AI-bio that wasn't EA (usually OP) funded (e.g. RAND, CLTR). I think, in turn, I capture a lot of this in my "GCBR Priority Areas" bucket. So at least as far as work that identifies as attempting to tackle this problem, it's some % of this bucket (i.e. probably in the $1m-$10m order of magnitude, could be a fair bit less), but I don't think this reflects how much total is going towards biorisk from AI, which is much harder to get data on.

2. Yeah, synthesis screening I definitely implicitly bucket into my "GCBR Priority Areas" category. I didn't attempt to break these down further because it'd be so much more work, though here are some thoughts: synthesis screening is hard to get data on because I couldn't find out how the International Gene Synthesis Consortium is funded, and I think historically this represents most of the biosecurity and pandemic prevention work here. My best guess (but barely more than 50% confidence) is that the firms who form it directly bear the costs. If true, then the philanthropically funded work I could find outside this space is NTI | bio / IBBIS and SecureDNA/MIT Media Lab. NTI spent ~$4.8m on their bio programs and has received OP funding. MIT Media Lab have received a number of OP grants, and SecureDNA list OP as their only philanthropic collaborator. This means the spend per year is probably in the $1m-$10m order of magnitude, most of which comes from EA. Though yes, the IGSC remains a big uncertainty of mine.

3. I think breaking down disease surveillance into pathogen-agnostic

I think this is a very valuable project.

But this is still a combination of two questions, the latter of which longtermists have never, to my knowledge, considered probabilistically:[3]

  • What is the probability that the event kills all living humans?
  • What effect does the event otherwise have on the probability that we eventually reach an interstellar/existentially secure state, [4] given the possibility of multiple civilisational collapses and ‘reboots’? (where the first reboot is the second civilisation)

3^
The closest thing I know to such an attempt

... (read more)
2
Arepo
19d
Thanks for the kind words, David. And apologies - I'd forgotten you'd published those explicit estimates. I'll edit them in to the OP. My memory of WWOtF is that Will talks about the process, but other than giving a quick estimate of '90% chance we recover without coal, 95% chance with' he doesn't do as much quantifying as you and Luisa.  Also Lewis Dartnell talked about the process extensively in The Knowledge, but I don't think he gives any estimate at all about probabilities (the closest I could find was in an essay for Aeon where he opined that 'an industrial revolution without coal would be, at a minimum, very difficult').

Thanks for mentioning resilient foods! It is true that more food storage would give more time to scale up resilient foods. Stored food could be particularly valuable for some countries in loss of trade scenarios. Some have suggested that getting the World Trade Organization to change its rules would result in more food storage automatically. Still, I think the priority now is spending a few hundred million dollars total on resilient foods to research, pilot, and plan for them. If we extend your proposal for 20 years and for the world, then you are up to ~$... (read more)

I love the cumulative probability graph!

There is a little probability mass on things which are a reasonable fraction of the great or hellish futures — mostly corresponding to worlds in which the lightcone is divided in some way

  • Trade means that the probability of such outcomes isn’t so high, and I’ll set them aside for now; however, I think that this would be a natural place to extend this analysis

Let's say the positive side of your graph has a logarithmic horizontal axis. I think there would be some probability mass that we have technological stagnation an... (read more)

2
Owen Cotton-Barratt
1mo
Yeah I'm arguing that with good reflective governance we should achieve a large fraction of what's accessible. It's quite possible that that means "not quite all", e.g. maybe there are some trades so that we don't aestivate in this galaxy, but do in the rest of them; but on the aggregative view that's almost as good as aestivating everywhere.

I think the reviewer may be concluding from the above that, given no international food trade, calorie consumption would be much lower, and therefore increasing food production via new food sectors would become much more important relative to distribution. I agree with the former, but not the latter. Loss of international food trade is more of a problem of food distribution than production. If this increased thanks to new food sectors, but could not be distributed to low-income food-deficit countries (LIFDCs) due to loss of trade, there would still be many

... (read more)
2
Vasco Grilo
2mo
Thanks for the comments, David. I agree that is a factor, but I guess the distribution of the severity of catastrophes caused by nuclear war is not bimodal, because the following are not binary:

  • Awareness of mitigation measures:
    • More or fewer countries can be aware.
    • Any given country can be more or less aware.
  • Ability to put the mitigation measures into practice.
  • Export bans:
    • More or fewer countries can enforce them.
    • Any given country can enforce them more or less.

In addition, I have the sense that historical, more local catastrophes are not bimodal, following distributions which more closely resemble a power law, where more extreme outcomes are increasingly less likely.

Good point! I have updated the relevant bullet in the post. Now the probability of not fully recovering is 0.0513 %, i.e. 1.95 k (= 5.13*10^-4/(2.63*10^-7)) times as high as before. Yet the updated unconditional existential risk (extinction caused by the asteroid and no full recovery afterwards) is still astronomically low, 3.04*10^-15 (= 5.93*10^-12*5.13*10^-4). So my point remains qualitatively the same. I have also added the 2nd sentence in the following bullet:
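The update above can be checked with a short sketch (the three input probabilities are the ones stated in the comment; nothing else is assumed):

```python
# Checking the updated bullet's arithmetic with the quoted inputs.
p_no_recovery_old = 2.63e-7  # previous probability of not fully recovering
p_no_recovery_new = 5.13e-4  # updated probability (0.0513 %)
p_extinction = 5.93e-12      # probability of extinction caused by the asteroid

# How much higher the updated no-recovery probability is (~1.95 k times).
ratio = p_no_recovery_new / p_no_recovery_old

# Unconditional existential risk: extinction and no full recovery (~3.04e-15).
existential_risk = p_extinction * p_no_recovery_new
```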

This was very helpful! I found the diagrams particularly useful. Visible lighting design for rooms has a similar problem of uniform illumination, but it is mitigated by the fact that there is significant reflection of the light, which I presume does not apply for far UVC. 
Has there been any work on planning to relocate existing UV systems to the most critical tasks, such as making more super PPE/UV systems, if an extreme pandemic hit soon?

One unpublished study by a Russian academic and a CDC researcher allegedly estimated that the cost of 1 ACH by ventilat

... (read more)

Why is flesh weaker than diamond?

I don't think this is a fair comparison. If nature wanted skin to be harder, it could do that, for instance with scales (particularly hard in the case of turtle shells). Of course your logic explains why diamond is harder than bone. But if you want a small thing that could penetrate flesh, we already have it in the form of parasites.

One of the points in the book Strangers Drowning was that very dedicated altruists (some EAs included) live like it is war time all the time. Basically, the urgency of people dying from poverty, animals suffering, and humanity's future at risk demand the sacrifices that are typically reserved for war time. Another example is if existential risk were high, some argue that we should be on "extreme war footing" and dedicate a large portion of society's resources to reducing the risk. I'm interested in your perspective on these thoughts.

Thanks for the correction! I have fixed it and added a link (the link was in the main document, but it's good to have it in the executive summary as well).

1
OscarD
5mo
Thanks!

This is a decent summary, but there are a couple corrections:

ALLFED increased paid team members, but much less than doubled (we have capacity to expand more quickly with additional funding).

We do have 17 advisory board members, but they represent 4 countries, not 9 (the 9 countries were represented by the 17 team members at the retreat).


 

Nice post!

The model does not predict much difference between the different scenarios until 2020-2030. Therefore, we only know that the model has not been falsified so far, but it is still unclear which path we are currently on.

I think it would be helpful to see an overlay of our actual trajectory. Though the absolute values of the models are not that different for the period 2000 to 2020, the slopes are quite different. I think there was a paper analyzing the fits including the slopes. The increase of production of food since the year 2000 has been ... (read more)

1
FJehn
5mo
Thanks David. I think the paper you are referring to might be the one I cited. At least Herrington also looked at the rate of change (Table 2). There you can see that the current trajectory and rate of change is most similar to the CT and the BAU2 scenario. CT is a scenario like you described (we innovated ourselves out of limits to growth), while BAU2 is a scenario where we are still on a collapse trajectory, but the resources of Earth are 2x those of the default limits to growth scenario. Therefore, I would argue that we still can't tell if we just had more resources on Earth than originally estimated or if we solved our problem with innovation. But if you have another paper that discusses this as well, I'd be happy to read it.

Thanks for all you have done!

Finally, EAs have treated EtG as increasingly more weird, especially offline, defeating the original argument for engaging.

This is very disappointing, especially because, if you disregard "still deciding", EtG was the second most popular route to impact among EAs in the 2022 survey.

(leading a - dare I say - successful effective nonprofit)

Sure - go ahead and dare. :)

My day job is associate professor of mechanical engineering at University of Canterbury in New Zealand, and I volunteer for ALLFED. Nearly 100% of my donations are to ALLFED. I think that ALLFED is the most cost-effective way of improving the long run future at the margin (see here and here, though I'm not quite as bullish as the mean survey/poll results in those papers), but there are orders of magnitude of uncertainty, and I think more total money should be put into AGI ... (read more)

As one who donates 50%, it doesn't seem like it should be that uncommon. One way I think about it is earning like upper-middle-class, living like middle-class, and donating like upper-class. Tens of percent of people work for tens of percent less money in sectors like nonprofits and governments. And I've heard of quite a few non-EAs who have taken jobs for half the money. And yet most people think about donating that large of a percent very differently than taking a job that pays less. I'm still not sure why - other than that it is uncommon or "weird." 

4
MvK
6mo
Could you share where you donate? I've always found it fascinating when people like you (leading a - dare I say - successful effective nonprofit) donate. * If you don't donate to ALLFED, why is that? (Are you hedging, are you actually not convinced it's the best giving opportunity out there...) * If you donate to ALLFED, what's the case for not just taking a lower salary? (Or is that what you do?)
6
AnonymousTurtle
6mo
Yeah, Giving isn’t demanding and the median annual UK salary is £26,800  

I agree that most academic research has a bad ROI, but I find that a lot of this sort of 'nobody reads research' commentary equates reads with citations, which seems completely wrong. By that metric most forum posts would also not be read by anyone.

I agree. For one, the studies I've seen saying that the median publication is not cited include conference papers, so if one is talking about the peer-reviewed literature, citation rates are significantly greater. I've estimated the average number of citations per paper is around 30 for the peer-reviewed litera... (read more)

The government could internalize this positive externality by providing incentives, like this.

1
Stan Pinsent
6mo
It could try, but so far no government has managed to meaningfully reverse fertility rates. Not many have tried very hard, so it may be possible.

I was assuming 50 % reduction in international trade, and 50 % of that reduction being caused by climatic effects, so only 25 % (= 0.5^2) caused by climatic effects. I have changed "50 % of it" to "50 % of this loss" in my original reply to clarify.

That makes sense. Thanks for putting the figure in! 

I guess famine deaths due to the climatic effects are described by a logistic function, which is a strictly increasing function, so I agree with the above. However, I guess the increase will be pretty small for low levels of soot.

If it were linear starting... (read more)

2
Vasco Grilo
6mo
You are right about that integral, but I do think that is the relevant BOTEC. What we care about is the mean death rate (for a given input soot distribution), not its integral. For example, for a uniform soot distribution ranging from 0 to 37.4 Tg (= 2*18.7), whose mean matches mine of 18.7 Tg[1], the middle points of the linear parts would be:

  • If the linear part started at 10.5 Tg, 7.27 % (= ((10.5 + 37.4)/2 - 10.5)/(18.7 - 10.5)*0.0443).
  • If the linear part started at 0 Tg, 10.1 % (= ((0 + 37.4)/2 - 0)/(18.7 - 10.5)*0.0443).

So the mean death rates would be:

  • If the linear part started at 10.5 Tg, 5.23 % (= (10.5*0 + (37.4 - 10.5)*0.0727)/37.4).
  • If the linear part started at 0 Tg, 10.1 %.

This suggests famine deaths due to the climatic effects would be 1.93 (= 0.101/0.0523) times as large if the linear part started at 0 Tg.

Another way of running the BOTEC is considering an effective soot level, equal to the soot level minus the value at which the linear part starts. My effective soot level is 8.20 Tg (= 18.7 - 10.5), whereas it would be 18.7 Tg if the linear part started at 0 Tg, which suggests deaths would be 2.28 (= 18.7/8.20) times as large in the latter case.

Using a logistic function instead of a linear one, I think the factor would be quite close to 1. The challenge here is that the logistic function f(x) = a + b/(1 + e^(-k(x - x_0))) has 4 parameters, but I only have 3 conditions, f(0) = 0, f(18.7) = 0.0443, and f(+inf) = 1. I think this means I could define the 4th condition such that the logistic function stays near 0 until 10.5 Tg.

Ideally, I would define the logistic function for f(0) = 0 and f(+inf) = 1, and then find its parameters by fitting it to the 16, 27, 37, 47 and 150 Tg cases of Xia 2022 for international food trade, all livestock grain fed to humans, and no household food waste. Then I would use f(18.7) as the death rate. Even better, I would get a distribution for the soot, generate N samples (x_1, x_2, ..., and x_N), and then
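The uniform-distribution BOTEC above can be reproduced in a few lines (a sketch using only the values quoted in the comment: the 4.43 % anchor at 18.7 Tg, the 10.5 Tg start of the linear part, and the 0 to 37.4 Tg uniform distribution):

```python
# Mean death rate for a uniform soot distribution on [0, upper] Tg,
# with a death rate that is zero below `start` and linear above it.
# The slope is anchored by f(18.7 Tg) = 4.43 % when start = 10.5 Tg.
def mean_death_rate(start, upper=37.4):
    slope = 0.0443 / (18.7 - 10.5)
    # Death rate at the midpoint of the linear segment [start, upper].
    midpoint_rate = ((start + upper) / 2 - start) * slope
    # Average over the uniform distribution: 0 below `start`,
    # `midpoint_rate` on average above it.
    return (upper - start) * midpoint_rate / upper

r_105 = mean_death_rate(10.5)  # ~5.23 %
r_0 = mean_death_rate(0)       # ~10.1 %
factor = r_0 / r_105           # ~1.93 times as many famine deaths
```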

Very interesting!

  • Space colonies. Fertility is low in wealthy countries with large unsettled territories (Canada, Australia), even though they are far more hospitable than other planets. There is no reason to think that space colonies alone will reverse the fertility decline.

I think the incentive for fertility depends on the level of connection with the Earth. If it were fully independent from Earth, it would have a strong incentive to increase population because there are large economies of scale in terms of increasing the standard of living, including being able to create more living space per person, more advanced electronics, media, etc.

1
Stan Pinsent
6mo
Good point. I agree that the threat of "dying out" or of falling below some critical population threshold would probably be sufficient motivation for space colonies to avoid terminal decline. I'm less convinced that the more abstract theory that higher population => better QOL for all would significantly alter individuals' (selfish) cost/benefit calculations about raising a child.

Both sides targeted civilians in WWII. Hopefully that is not the case now, but I'm not sure.

> Half of the impact of the total loss of international food trade would cause 2.6% to die according to Xia 2022. So why is it not 4.43%+2.6% = 7.0% mortality?

In my BOTEC with "arguably more reasonable assumptions", I am assuming just a 50 % reduction in international food trade, not 100 %.

That's why I only attributed half of the impact of total loss of international food trade. If I attributed all the impact, it would have been 4.43%+5.2% = 9.6% mortality. I don't see how you are getting 5.67% mortality.

My famine deaths due to the climatic effects a

... (read more)
2
Vasco Grilo
6mo
I was assuming 50 % reduction in international trade, and 50 % of that reduction being caused by climatic effects, so only 25 % (= 0.5^2) caused by climatic effects. I have changed "50 % of it" to "50 % of this loss" in my original reply to clarify.

Yes, that is quite close to what I did. The lines you describe intersect at 10.5 Tg, but I used 11.3 Tg because I believe Xia 2022 overestimates the duration of the climatic effects. I was guessing this does not matter much because I think the famine deaths for 0 Tg for the following cases are similar:

  • No international food trade, and current food production. This matches the blue line of Fig. 5b I used to adjust the top line to include international food trade, and corresponds to 5.2 % famine deaths.
  • No international food trade, all livestock grain fed to humans, and no household food waste. This is the case I should ideally have used to adjust the top line, and corresponds to less than 5.2 % famine deaths.

Since the 2nd case has fewer famine deaths, I am overestimating the effect of having international food trade, thus underestimating famine deaths. My guess for the effect being small stems from, in Fig. 5b, the cases for which there are climatic effects (5 reddish lines, and 2 greyish lines) all seemingly converging as the soot injected into the stratosphere tends to 0 Tg.

The convergence of the reddish and greyish lines makes intuitive sense to me. If it were possible now to, without involving international food trade, decrease famine deaths by feeding livestock grain to humans or decreasing household food waste, I guess these measures would already have been taken. I assume countries would prefer fewer famine deaths over greater animal consumption or household food waste.

I guess famine deaths due to the climatic effects are described by a logistic function, which is a strictly increasing function, so I agree with the above. However, I guess the increase will be pretty small for low levels of soot. There are reasons

For arguably more reasonable assumptions of 50 % loss of international food trade, and 50 % of it being caused by the climatic effects, linearly interpolating, the increase in the death rate would be 25 % (= 0.5^2). So the new death rate would be 5.67 % (= 0.0443 + (0.0940 - 0.0443)*0.25), i.e. 1.28 (= 0.0567/0.0443) times my value.
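The interpolation above can be checked in two lines (a sketch using only the quoted death rates):

```python
# Linearly interpolating between the death rate with full international food
# trade (4.43 %) and with none (9.40 %), attributing 25 % (= 0.5^2) of the
# trade loss to climatic effects.
no_loss, full_loss = 0.0443, 0.0940
fraction = 0.5 * 0.5  # 50 % trade reduction, 50 % of that due to climate
rate = no_loss + (full_loss - no_loss) * fraction  # ~5.67 %
multiplier = rate / no_loss                        # ~1.28
```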

Half of the impact of the total loss of international food trade would cause 2.6% to die according to Xia 2022. So why is it not 4.43%+2.6% = 7.0% mortality?

It is still the case that I would get a negative death rate inputting 5

... (read more)
2
Vasco Grilo
6mo
In my BOTEC with "arguably more reasonable assumptions", I am assuming just a 50 % reduction in international food trade, not 100 %.

My famine deaths due to the climatic effects are a piecewise linear function which is null up to a soot injection into the stratosphere of 11.3 Tg. So, if one inputs 5 Tg into the function, the output is 0 famine deaths due to the climatic effects, not negative deaths. One gets negative deaths inputting 5 Tg into the pieces of the function respecting higher levels of soot because, after a certain point (namely when everyone is fed), more food does not decrease famine deaths. My assumptions of no household food waste and feeding all livestock grain to humans would not make sense for low levels of soot, as I guess roughly everyone would be fed even without going all in on these mitigation measures in those cases. In any case, I agree I am underestimating famine deaths due to the climatic effects for 5 Tg. My piecewise linear function is an approximation of a logistic function, which is always positive.

I am happy to describe what happens in a very worst case scenario, involving no adaptations and no international food trade. Eyeballing the bottom line of Figure 5b, the famine death rate due to the climatic effects for my 22.1 Tg would be around 25 %. In this case, the probability of 50 % famine deaths due to the climatic effects of nuclear war before 2050 would be 0.614 %, i.e. 1.87 k (= 0.00614/(3.29*10^(-6))) times as likely as my best guess.

I must note that, under the above assumptions, activities related to resilient food solutions would have cost-effectiveness 0, as one would be assuming no adaptations. In general, I do not think it is obvious whether the cost-effectiveness of decreasing famine deaths due to the climatic effects at the margin increases/decreases with mortality. The cost-effectiveness of saving lives is negligible for negligible mortality and sufficiently high mortality, and my model assumes cost-effectiveness incre
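A minimal sketch of the piecewise linear function described above, assuming the 11.3 Tg threshold and taking f(18.7 Tg) = 4.43 % as the anchor (the function name and anchor placement are illustrative, not the exact formula from the post):

```python
# Piecewise linear famine death rate: zero up to `threshold` Tg of soot,
# then linear through the anchor point, capped at 100 % of the population.
def famine_death_rate(soot_tg, threshold=11.3):
    if soot_tg <= threshold:
        return 0.0  # e.g. 5 Tg gives 0 famine deaths, never a negative rate
    slope = 0.0443 / (18.7 - threshold)  # anchored at f(18.7) = 4.43 %
    return min(1.0, slope * (soot_tg - threshold))
```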

In that case, I would only be overestimating the amount of soot by 10 %, which is a small factor in the context of the large uncertainty involved (my 95th percentile famine deaths due to the climatic effects is 62.3 times my best guess).

Do you mean underestimating? I agree that it's not that large of an effect.

For reference, maintaining my famine deaths due to climatic effects negligible up to an injection of soot into the stratosphere of 11.3 Tg, if I had assumed a total loss of international food trade fully caused by the climatic effects, I would have o

... (read more)
2
Vasco Grilo
6mo
Thanks! I have now changed "overestimating" to "underestimating".

The BOTEC related to this in my comment had an error[1]. I have now corrected it in my comment above: it is still the case that I would get a negative death rate inputting 5 Tg into my formula. However, I am linearly interpolating, and the formula is only supposed to work for a mean stratospheric soot until the end of year 2 between 14.6 and 24.6 Tg, which excludes 5 Tg. I am approximating the logistic function describing the famine deaths due to the climatic effects as being null up to an injection of soot into the stratosphere of 11.3 Tg.

From the legend of Figure 5, my interpretation is that the blue line corresponds to no livestock grain fed to humans and current household food waste (in 2010), but without international food trade. I have clarified this in the post. Ideally, instead of adjusting the top line of Figure 5b to include international food trade, I would rely on scenarios accounting for both climatic effects and no loss of international food trade, but Xia 2022 does not present results for that.

I am very open to different views about the famine death rate due to the climatic effects of a large nuclear war. My 95th percentile is 702 times my 5th percentile.

1. ^ In the expression "1 - (0.993 + (0.902 - 0.993)/(24.6 - 14.6)*(14.5 - 14.6))*0.948", 14.5 should have been 18.7. The calculation of the death rate in the post was correct, but it had the same typo in the formula, which I have now corrected.

I thought this was comprehensive, and it was clever how you avoided doing a Monte Carlo simulation for most of the variables. The expected amount of soot to the stratosphere was similar to my and Luisa's numbers for a large-scale nuclear war. So the main discrepancies are the expected number of fatalities and the impact on the long-term future.

From Figure 4 of Wagman 2020, the soot injected into the stratosphere for an available fuel per area of 5 g/cm^2 is negligible[14].

At 5 g/cm^2, still most of the soot makes it into the upper troposphere, so I think ... (read more)

3
bean
6mo
Because that kind of countervalue targeting isn't a thing.  I intend to write on this more, but there tends to be a lot of equivocation here between countervalue as "nuclear weapons fired at targets which are not strictly military" and countervalue as "nuclear weapons fired to kill as many civilians as possible".  The first kind absolutely exists, although I find the countervalue framing unhelpful.  The second doesn't in a large-scale exchange, because frankly there's no world in which you aren't better off aiming those same weapons at industrial targets.  You get a greater effect on the enemy's ability to make war, and because industrial targets tend to be in cities and have a lot of people around them, you will undoubtedly kill enough civilians to accomplish whatever can be accomplished by killing civilians, and the other side knows it.   The partial exception to this is if you're North Korea or equivalent, and don't have enough weapons to make a plausible dent in your opponent's industry.  In that case, deterrence through "we will kill a lot of your civilians" makes sense, but note that the US was pretty safely deterred by 6 weapons, which is way less than discussed here.  
2
Vasco Grilo
6mo
Thanks for commenting, David! I think this is true for your analysis (Denkenberger 2018), whose "median [soot injection into the stratosphere] is approximately 30 Tg" (and the mean is similar?). However, I do not think it holds for Luisa's post. My understanding is that Luisa expects an injection of soot into the stratosphere of 20 Tg conditional on one offensive nuclear detonation in the United States or Russia, not a large nuclear war. I expect roughly the same amount of soot (22.1 Tg) conditional on a large nuclear war (at least 1.07 k offensive nuclear detonations).

Eyeballing the 3rd subfigure of Figure 4 of Wagman 2020, 90 % of the emitted soot is injected below:

  • 3.5 km for 1 g/cm^2.
  • 12.5 km for 5 g/cm^2.

I got a fuel load of 3.07 g/cm^2 for counterforce. Linearly interpolating between the two data points above, I would conclude 90 % of the soot emitted due to counterforce detonations is injected below 8 km (= (3.5 + 12.5)/2; this is the value for 3 g/cm^2), and only 10 % above this height.

It is also worth noting that not all soot going into the upper troposphere would go on to the stratosphere. Robock 2019 assumed only half did in the context of city fires in World War II. So I think the factor of 1/3 in your BOTEC should be lower, maybe 1/6? In that case, I would only be underestimating the amount of soot by 10 %, which is a small factor in the context of the large uncertainty involved (my 95th percentile famine deaths due to the climatic effects is 62.3 times my best guess). In addition, I suspect I am underestimating the amount of soot injected into the stratosphere from countervalue detonations due to assuming no overlap between their burned areas.

Note that I am neglecting disruptions to international food trade caused by climatic effects not just because I expect infrastructure destruction to be the major driver of the loss of trade, but also to counteract other factors. For reference, maintaining my famine deaths due to climatic effects
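The interpolation in the middle of the reply can be written out explicitly (a sketch; the two data points are the ones eyeballed from Wagman 2020 above):

```python
# Height (km) below which 90 % of emitted soot is injected, linearly
# interpolated between the two eyeballed Wagman 2020 data points.
def injection_height(fuel_load_g_cm2):
    x1, y1 = 1.0, 3.5   # 1 g/cm^2 -> 3.5 km
    x2, y2 = 5.0, 12.5  # 5 g/cm^2 -> 12.5 km
    return y1 + (y2 - y1) * (fuel_load_g_cm2 - x1) / (x2 - x1)

injection_height(3.0)   # 8.0 km, the rounded value used in the reply
injection_height(3.07)  # ~8.2 km for the exact counterforce fuel load
```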

See the reply to the first comment on that post. Paul's "most humans die from AI takeover" is 11%. There are other bad scenarios he considers, like losing control of the future, or most humans die for other reasons, but my understanding is that the 11% most closely corresponds to doom from AI.

4
Greg_Colbourn
7mo
Fair. But the other scenarios making up the ~50% are still terrible enough for us to Pause.

Paul Christiano argues here that AI would only need to have "pico-pseudokindness" (caring about humans one part in a trillion) to take over the universe but not trash Earth's environment to the point of uninhabitability, and that at least this amount of kindness is likely.

4
Greg_Colbourn
7mo
Doesn't Paul Christiano also have a p(doom) of around 50%? (To me, this suggests "maybe", rather than "likely").

It is good that 80k is making simple videos to explain the risks associated with EA

Do you mean "risks associated with AI"?

1
Vaipan
7mo
Yes my bad!

Were these commenters expecting it to be much cheaper to save a life by preventing the loss of potential in an extinction, than to save a life using near-termist interventions?


I think that commenters are looking at the cost-effectiveness they could reach with current budget constraints. If we had way more money for longtermism, we could go to a higher cost per basis point. That is different than the value of reducing a basis point, which very well could be astronomical, given GiveWell costs for saving a life (though to be consistent, one should try to estimate the long-term impacts of a GiveWell intervention as well).

A nuclear war into a supervolcano is just really unlikely.

A nuclear war happening at the same time as a supervolcano is very unlikely. However, it could take a hundred thousand years to recover population, so if the frequency of supervolcanic eruptions is roughly every 30,000 years, it's quite likely there would be one before we recover.
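The "quite likely" can be made concrete with a simple sketch (an illustration, not part of the original comment: it assumes eruptions arrive as a memoryless Poisson process with the ~30,000-year mean recurrence mentioned above):

```python
import math

# Probability of at least one supervolcanic eruption during recovery,
# modelling eruptions as a Poisson process with the given mean recurrence.
def p_eruption_before_recovery(recovery_years=100_000, recurrence_years=30_000):
    return 1 - math.exp(-recovery_years / recurrence_years)

p_eruption_before_recovery()  # ~0.96, i.e. "quite likely"
```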

Plus if there were 1000 people then there would be so much human canned goods left over - just go to a major city and sit in a supermarket.

The scenario I'm talking about is one where the worsening climate and loss of techno... (read more)

Neglectedness in the classic sense.  Although not as crowded as climate change, there are other large organizations / institutions that address nuclear risk and have been working in this space since the early Cold War. 

I agree that the nuclear risk field as a whole is less neglected than AGI safety (and probably than engineered pandemic), but I think that resilience to nuclear winter is more neglected. That's why I think overall cost-effectiveness of resilience is competitive with AGI safety.

I'm not Matt, but I do work on nuclear risk. If we went down to 1,000 to 10,000 people, recovery would take a long time, so there is a significant chance of a supervolcanic eruption or asteroid/comet impact causing extinction. People note that agriculture/cities developed independently, indicating that their redevelopment is high probability. However, it only happened when we had a stable moderate climate, which might not recur. Furthermore, the Industrial Revolution only happened once, so there is less confidence that it would happen again. In addition, it would be more difficult w... (read more)

4
Nathan Young
8mo
This feels too confident. A nuclear war into a supervolcano is just really unlikely. Plus if there were 1000 people then there would be so much human canned goods left over - just go to a major city and sit in a supermarket.  If a major city can support a million people for 3 days on its reserves it can support a 1000 people for 30 years.  Again, I'm not saying that I think it doesn't matter, but I think my answers are good reasons why it's less than AI
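As a check on the scaling in the last sentence, simple person-day arithmetic (using the comment's own numbers) gives a somewhat shorter horizon:

```python
# Numbers taken from the comment: a city of 1 million fed for 3 days
person_days = 1_000_000 * 3
survivors = 1_000

days_supported = person_days / survivors
print(days_supported / 365)  # ~8.2 years rather than 30
```

Still decades-scale food for a tiny population, so the qualitative point survives even if the "30 years" figure is a few times too high.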

He did it because he felt good doing it, and also to be healthy. He started thinking more and more about the meaninglessness of maintaining long-term health.

I think it's also helpful to point out that we should be good Bayesians and not believe anything 100%. It seems to me plausible that in 20 years, AI may not change everything, but maybe we will be able to reverse aging (or maybe AI will change everything and we can upload our brains). With some chance of indefinite lifespan, I think some effort into health in the next 20 years even if one is relatively young could have a big expected value.

I would recommend patenting, but then committing to donate part of the profits. That has been my strategy.

Actually, I think they only simulate the fires, and therefore soot production, for 40 min. So you may well have a good point. I do not know whether it would be a difference by a factor of 10. Figure 6 of Reisner 2018 may be helpful to figure that out, as it contains soot concentration as a function of height after 20 and 40 min of simulation. Do the green and orange curves look like they are closely approaching stationary state?

Wow - only 40 minutes - my understanding is actual firestorms take hours. This graph is for the low loading case, which did not pr... (read more)

For the high fuel load of 72.62 g/cm^2, Reisner 2019 obtains a firestorm

Thanks for the correction. Unfortunately, there is no scale on their figure, but I'm pretty sure the smoke would be going into the upper troposphere (like Livermore finds). Los Alamos only simulates for a few hours, so that makes sense that hardly any would have gotten to the stratosphere. Typically it takes days to loft to the stratosphere. So I think that would resolve the order of magnitude disagreement on percent of soot making it into the stratosphere for a firestorm.

I think the s... (read more)

3
Vasco Grilo
8mo
Actually, I think they only simulate the fires, and therefore soot production, for 40 min: So you may well have a good point. I do not know whether it would be a difference by a factor of 10. Figure 6 of Reisner 2018 may be helpful to figure that out, as it contains soot concentration as a function of height after 20 and 40 min of simulation: Do the green and orange curves look like they are closely approaching stationary state?

In that slower burn case, would the fuel continue to be consumed in a firestorm regime (which is relevant for the climatic impact)? It looks like the answer is no for the simulation of Reisner 2018: For the oxygen to penetrate, I assume the inner and outer radii describing the region on fire would have to be closer, but that would decrease the chance of the firestorm continuing.

From Reisner 2019: Reisner 2019 also argues most soot is produced in a short time (emphasis mine): I do not think they model pyrolysis. Do you have a sense of how large the area in sufficiently high temperature and low oxygen for pyrolysis to occur would be, and whether it is an efficient way of producing soot?

Good point! It is not an absolute worst case. On the other hand, they have more worst case conditions (emphasis mine):

Los Alamos: Even for a fuel loading of 72.62 g/cm^2, it is 6.21 % (= 0.196/3.158).

So basically, no matter how much fuel Los Alamos puts in, they cannot reproduce the firestorms that were observed in World War II. I think this is a red flag for their model (but in fairness, it is really difficult to model combustion - I've only done computational fluid dynamics modeling - combustion is orders of magnitude more complex).

3
Vasco Grilo
8mo
Thanks for commenting, David! For the high fuel load of 72.62 g/cm^2, Reisner 2019 obtains a firestorm: So, at least according to Reisner, firestorms are not sufficient to result in a significant soot ejection into the stratosphere. Based on this, as I commented above: For reference, this is what Reisner 2018 says about modelling combustion (emphasis mine):

So my understanding is that they:

* Are being pessimistic with respect to ignition probabilities, and production of soot from fuel.
* Modelled the combustion of fuel, but not the chemical reaction describing the production of soot from fuel.

Minor correction to my last comment. I meant:

Thanks for clarifying. If instead one uses a mean (though I do think the tails should be weighted more heavily) closer to Luisa's and my analysis of 30 Tg, then Xia predicts about 1.6 billion starvation fatalities and about 110 million direct fatalities (though this latter number would probably be higher because Xia assumes that all areas hit would firestorm, which I don't, so I think more area would be hit to produce that amount of soot to the stratosphere). This is pessimistic in that it assumes no international food trade, no planting location adaptatio... (read more)

This study found 5 billion dead, but this is an obvious overestimate for a realistic response to massive global cooling. I suspect the death estimates they give are out by at least two orders of magnitude, given the various unrealistic assumptions they use

50 million dying from starvation (more than 50 million would die from the direct impacts of the nuclear war) is possible with a ~90% reduction in non-adapted agriculture (with current applications of fertilizers, pesticides, etc), but trade, resilient foods, and subsidies would have to go very well. I hav... (read more)

4
[anonymous]
8mo
I was thinking of all of the assumptions, i.e. about the severity of the winter and the adaptive response.  Sorry if I'm being thick, but what do you mean by 'eating the seed corn' here?

Deep bunkers like that are expensive and rare, and even if the bunker itself survived, ground bursts are messy and would likely leave it inaccessible. 

There are thousands of underground mines in the US (14,000 active mines, but many are surface mines), and I think it would only require one or a few to store thousands of nuclear weapons. Maybe the weapons would be spread out over many mines. It would not be feasible to make thousands of mines inaccessible.

Missile warheads are only of use as a source of raw materials, and while you might be able to g

... (read more)
8
bean
8mo
OK, remember that we're dealing with nuclear weapons, which inspire governments to levels of paranoia you maybe see when dealing with crypto.  Dropping a dozen nukes down a mine somewhere is not going to happen without a lot of paperwork and armed guards and a bunch of security systems.  And those costs don't really scale with number of warheads.  Sure, if you were trying to disperse the stockpile during a period of rising tension, you could take a few infantry companies and say "hang the paperwork".  But that requires thinking about the weapons in a very different way from how they actually do, and frankly they wouldn't be all that useful even if you did do that, because of the other problems with this plan. Yes, I am.  The first commandment of nuclear weapons design since 1960 or so has been "it must not go off by accident".  So a modern missile warhead has an accelerometer which will not arm it unless it is pretty sure it has been fired by the relevant type of missile.  And trying to bypass it is probably a no-go.  The design standard is that one of the national labs couldn't set a US warhead off without the codes, so I doubt you can easily bypass that. A modern bomber is a very complex machine, and the US hasn't set ours up to keep working out of what could survive a nuclear exchange.  (This is possible, but would require mobile servicing facilities and drills, which we do not have.)  Not to mention that they can't make a round-trip unrefueled from CONUS to any plausible enemy, and the odds of having forward tankers left are slim to none.

First, remind me why we're looking at 1500 countervalue weapons?  Do we really expect them to just ignore the ICBM silos?  

My understanding is that the warning systems are generally designed such that the ICBMs could launch before the attacking warheads reach the silos. I do have significant probability on counterforce scenarios, but I can't rule out countervalue scenarios, so I think it's an important question to estimate what would happen in these countervalue scenarios.

 

Possibly the single most important goal of the deployed warheads is

... (read more)
8
bean
8mo
Even leaving aside the ICBMs, "countervalue" was one of McNamara's weird theories, and definitely wouldn't be implemented as a pure thing.  If nothing else, a lot of those warheads are going after military targets, not cities. Maybe if they were targeted specifically with that goal in mind, but again, that seems unlikely, particularly with modern guidance systems.  You'll do better for yourself shooting at specific things rather than asking "how many civilians can we kill"?  A lot of those will be far away from cities, or will have overlap with something else nearby that is reasonably hard and also needs to die. I might be misreading it, but that paper seems to bury a lot of the same assumptions that I'm objecting to.  They assume a firestorm will form as part of the basis of how the fire is modeled, and then explicitly take the 5 Tg of stratospheric soot per 100 fires number and use that as the basis for further modeling.  For other fuel loadings, the amount of soot in the stratosphere is linear with fuel loading, which is really hard to take seriously in the face of the "wildfires are different" assertion.  Sure, they accurately note that there are a lot of assumptions in the usual Turco/Toon/Robock model and talk a good game about trying to deal with all four parts of the problem, then go and smuggle in the same assumptions.  Points for halving the smoke duration, I guess. Edit: Deep bunkers like that are expensive and rare, and even if the bunker itself survived, ground bursts are messy and would likely leave it inaccessible.  Also, there's the problem of delivering the warheads to the target in an environment where a lot of infrastructure is gone.  Missile warheads are only of use as a source of raw materials, and while you might be able to get gravity bombs to bombers, you wouldn't get many, and probably couldn't fly all that many sorties anyway.  It's a rounding error, and I'm probably being generous in using that to cancel out the loss of deployed warhea

US suburbs may have a lot of building mass in aggregate, but it's also really spread out and generally doesn't contain that much which is likely to draw nuclear attack.

There are only 55 metropolitan areas in the US with greater than 1 million population. Furthermore, the mostly steel/concrete city centers are generally not very large, so even with a nuclear weapon targeted at the city center, it would burn a significant amount of suburbs. So with 1500 nuclear weapons countervalue even spread across NATO, a lot of the area hit would be suburbs.

Yeah, sorry,

... (read more)
4
bean
8mo
First, remind me why we're looking at 1500 countervalue weapons?  Do we really expect them to just ignore the ICBM silos?  Second, note that there's a difference between "a lot of the area hit would be suburbs" and "a lot of the suburbs would be hit".  The US has a vast amount of suburbs, and the areas damaged by nuclear weapons would be surprisingly small. Let me repeat.  I am not interested in anything Turco, Toon et al have to say.  They butchered the stuff I can check badly.  As such, I do not think it is good reasoning to believe them on the stuff I can't.  The errors outlined in the OP are not the sort of thing you can make in good faith.  They are the sort of thing you'd do if you were trying to keep your soot number up in the face of falling arsenals. Re firestorms more broadly, I don't see any reason to assume those would routinely form.  It's been a while since I looked into this, but those are harder to generate than you might think when that's the goal, and I don't think it's likely to be a goal of any modern targeting plan.  The only sophisticated model I've seen is the one by the Los Alamos team, which got about 70% of the soot production that Robock et al did, and only 12% of that reached the stratosphere.  That's where my money is.

It argues Toon 2008 has overestimated the soot ejected into the stratosphere following a nuclear war by something like a factor of 191[1] (= 1.5*2*2*(1 + 2)/2*(2 + 3)/2*(4 + 13)/2).

I think a geometric mean would be more appropriate, so (48*468)^0.5 = 150. But I disagree with a number of the inputs.
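The two ways of aggregating can be compared directly (the sub-factor midpoints are from the quoted formula; 48 and 468 are taken as the low and high bounds of the overall factor, per the comment):

```python
import math

# Midpoints (arithmetic means) of each sub-factor range, as in the post
factors = [1.5, 2, 2, (1 + 2) / 2, (2 + 3) / 2, (4 + 13) / 2]
midpoint_product = math.prod(factors)

# Geometric mean of the overall low/high bounds, as proposed in the reply
geo_mean = math.sqrt(48 * 468)

print(round(midpoint_product), round(geo_mean))  # 191 150
```

Multiplying arithmetic midpoints systematically exceeds the geometric mean for wide multiplicative ranges, which is the reply's point.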

They also assume 4,400 warheads from the US and Russia alone, significantly higher than current arsenals.

Current US + Russia arsenals are around 11,000 warheads, but current deployed arsenals are only about 3000. With Putin pulling out of New START, many nuc... (read more)

Current US + Russia arsenals are around 11,000 warheads, but current deployed arsenals are only about 3000. With Putin pulling out of New START, many nuclear weapons that are not currently deployed could become so.

Possibly the single most important goal of the deployed warheads is to stop the other side from deploying their warheads, both deployed and non-deployed.  Holding to deployed only is probably a reasonable assumption given that some of the deployed will not make it, and most of the non-deployed definitely won't.  And this was written... (read more)

A counterfactual marginal multiplier of z means the effective giving organisation would have caused z $ of donations for each additional dollar it had spent...

The effective giving organisation is underfunded if z < 1, as long as the counterfactual marginal multiplier includes all relevant effects.

Do you mean z > 1?


 

2
Vasco Grilo
10mo
Thanks, David! Corrected.

I agree that Will's statement is correct for the near term. But Will also said that his vision is that, like science is the agreed way of getting to the truth, EA should be the agreed way of getting to the good. I think that would imply that EA has become a mass movement.

Thanks for the link. This shows that 3% of global wealth is in billionaires. Though richer people generally give a larger percent of their income, it's not clear they give a larger percent of their wealth. This is because many people with near zero wealth still have significant income, and still donate to charity. So I would guess ~3% of donations from individuals/foundations would be from billionaires. Corporations, you point out, are 6% of the US total. It's not clear to me how to classify this, but generously you could go with market capitalization. I wou... (read more)

4
Jason
10mo
I think Will's statement is mostly correct with the background of who the existing donors are. How much billionaires (and near-billionaires) donate as a percentage of their wealth in general is much less important to assessing his claim than what the specific billionaires and near-billionaires on board intend to donate.  Even for GiveWell, which has a significantly easier road to being a mass movement than most of EA/EA-adjacent work, over half of its revenue came from 18 donors out of 41,862 [p. 18 of https://files.givewell.org/files/metrics/GiveWell_Metrics_Report_2021.pdf] even before one considers that over half of its impact came from direct-to-charity grants from Open Phil not included in those numbers. Over half of the total donors were in the under-$1,000 bucket, so it's not that small donors weren't present. Of course, the centralization of funding would be less pronounced in a true mass movement. But mass movements take a lot of time and energy to cultivate all those small/mid-size donors . . .

Wealth is heavily fat-tailed, so it’s very likely that one or a small number of funders end up accounting for most funding.


Most philanthropy is not from billionaires, so the fact that most EA philanthropy is from billionaires means that EA has been unusually successful at recruiting billionaires. This could continue, or it could mean revert. So I do think there is hope for more funding diversification.

6
Jason
10mo
That's true, but it is pretty fat-tailed. These statistics don't break down by wealth, but you've got about one-quarter of US charitable giving coming from foundations and corporations.  The individuals slice isn't broken down. However, we can suspect that the ~ 30% of total contributions given to religious organizations came predominately from individuals, meaning that the concentration of non-religious charitable giving is probably higher than these numbers would suggest.

People can do good research on even less than 30k USD a year at CEEALAR (EA Hotel).

I'm curious how you would count endowments. For instance, Princeton has an endowment equal to about 10 years of expenditure, and about 17 years of expenditure net of non-philanthropic income. My understanding is that most of this would be restricted, e.g. to scholarships or athletics. So if 20% were unrestricted, would that mean you would calculate 2 or 3.4 years of unrestricted runway? 

4
Jason
10mo
Endowments are usually restricted by law as to how much you can withdraw in a year, so I assume a legal inability to spend the bulk of the money in the next few years would cause SoGive to exclude the endowment.

Wouldn't interstellar travel close to the speed of light require a huge amount of energy, and a level of technological transformation that again seems much higher than most people expect?

Not really - about six hours of the energy produced by the sun. If molecular manufacturing could double every day (many bacteria double much faster), we would get there very fast.
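A rough sense of scale, using assumed round numbers (solar luminosity ~3.8e26 W and an illustrative cruise speed of 0.99c; neither figure is from the comment):

```python
import math

SOLAR_LUMINOSITY_W = 3.8e26   # approximate total power output of the Sun
C = 3.0e8                     # speed of light, m/s

energy_j = SOLAR_LUMINOSITY_W * 6 * 3600   # six hours of solar output

# Relativistic kinetic energy per kg at an assumed 0.99c
beta = 0.99
gamma = 1 / math.sqrt(1 - beta**2)
ke_per_kg_j = (gamma - 1) * C**2

mass_kg = energy_j / ke_per_kg_j
print(f"{energy_j:.1e} J could accelerate ~{mass_kg:.1e} kg to 0.99c")
```

On these assumptions, six hours of solar output could push on the order of 10^13 kg to near light speed, far more than any plausible fleet of probes, which supports the "not really a huge amount of energy" point.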
