Dr. David Denkenberger co-founded and directs the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on his patented expanded microchannel heat exchanger. He is an assistant professor at the University of Alaska Fairbanks, with a joint appointment in mechanical engineering and the Alaska Center for Energy and Power. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 124 publications (>3000 citations, >50,000 downloads, h-index = 28, third most prolific author in the existential/global catastrophic risk field per https://www.x-risk.net/), including the book Feeding Everyone No Matter What: Managing Food Security after Global Catastrophe. His food work has been featured in over 200 articles in over 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has been interviewed twice on the 80,000 Hours podcast (https://80000hours.org/podcast/episodes/david-denkenberger-allfed-and-feeding-everyone-no-matter-what/ and https://80000hours.org/podcast/episodes/david-denkenberger-sahil-shah-using-paper-mills-and-seaweed-in-catastrophes/ ), as well as on Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including talks on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.
Referring potential volunteers, workers, board members and donors to ALLFED.
Being effective in academia, balancing direct work and earning to give, time management.
It is good that 80k is making simple videos to explain the risks associated with EA
Do you mean "risks associated with AI"?
Were these commenters expecting it to be much cheaper to save a life by preventing the loss of potential in an extinction, than to save a life using near-termist interventions?
I think that commenters are looking at the cost-effectiveness they could reach with current budget constraints. If we had far more money for longtermism, we could go to a higher cost per basis point. That is different from the value of reducing risk by a basis point, which could very well be astronomical given GiveWell's cost of saving a life (though to be consistent, one should try to estimate the long-term impacts of a GiveWell intervention as well).
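To make the comparison concrete, here is a minimal sketch of the arithmetic, with placeholder numbers of my own (not estimates from this discussion), counting only current lives and ignoring future generations:

```python
# Illustrative only: placeholder numbers, not actual estimates from the discussion.
world_population = 8e9          # current lives at stake (ignoring future generations)
cost_per_basis_point = 1e8      # hypothetical $ to reduce extinction risk by 0.01%
givewell_cost_per_life = 5000   # hypothetical $ per life saved (near-termist benchmark)

expected_lives_saved = world_population * 1e-4   # 1 basis point = 0.01% of current lives
cost_per_life_xrisk = cost_per_basis_point / expected_lives_saved

print(f"Implied cost per current life via x-risk reduction: ${cost_per_life_xrisk:,.0f}")
print(f"Near-termist benchmark:                             ${givewell_cost_per_life:,.0f}")
```

With these placeholders, the implied cost per current life from x-risk reduction is far below the near-termist benchmark, which is why the value of a basis point can look astronomical even while a budget-constrained funder uses a much lower cost threshold.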
A nuclear war into a supervolcano is just really unlikely.
A nuclear war happening at the same time as a supervolcanic eruption is very unlikely. However, it could take a hundred thousand years to recover population, so if supervolcanic eruptions occur roughly once every 30,000 years, it's quite likely there would be one before we recover.
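A quick back-of-the-envelope version of that point, using the rough numbers above and treating eruptions as a Poisson process:

```python
import math

# Rough numbers from the comment above.
mean_interval_years = 30_000   # average time between supervolcanic eruptions
recovery_years = 100_000       # time needed to recover population

# Probability of at least one eruption during the recovery window:
p_at_least_one = 1 - math.exp(-recovery_years / mean_interval_years)
print(f"P(at least one eruption before recovery) ~ {p_at_least_one:.0%}")  # about 96%
```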
Plus if there were 1000 people then there would be so much human canned goods left over - just go to a major city and sit in a supermarket.
The scenario I'm talking about is one where the worsening climate and loss of technology mean there would not be enough food, so the stored food would be consumed quickly. Furthermore, edible wild species, including fish, may be eaten to extinction.
Again, I'm not saying that I think it doesn't matter, but I think my answers are good reasons why it's less than AI
I agree that more total money should be spent on AGI safety than on nuclear issues. However, resilience to sunlight reduction is much more neglected than AGI safety. That's why the Monte Carlo analyses found that the cost-effectiveness of resilience to loss of electricity (e.g. from high-altitude detonations of nuclear weapons causing electromagnetic pulses) and of resilience to nuclear winter is competitive with AGI safety.
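For readers unfamiliar with this kind of analysis, here is a minimal Monte Carlo sketch in the spirit of those comparisons, not the actual published model; the distributions and spending figures below are placeholders I made up for illustration:

```python
import random

N = 100_000
ratios = []
for _ in range(N):
    # Placeholder log-uniform distributions; not the distributions used in the published analyses.
    spend_resilience = 10 ** random.uniform(7, 9)             # $ on resilience to sunlight reduction
    risk_averted_resilience = 10 ** random.uniform(-4, -2)    # fraction of far-future risk averted

    spend_agi = 10 ** random.uniform(8, 10)                   # $ on AGI safety
    risk_averted_agi = 10 ** random.uniform(-3, -1)

    ce_resilience = risk_averted_resilience / spend_resilience  # risk averted per dollar
    ce_agi = risk_averted_agi / spend_agi
    ratios.append(ce_resilience / ce_agi)

ratios.sort()
print("Median cost-effectiveness ratio (resilience / AGI safety):", round(ratios[N // 2], 2))
```

The point of the structure, rather than the placeholder numbers, is that a much smaller denominator (spending) can make a neglected intervention competitive even if the numerator (risk averted) is smaller.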
Neglectedness in the classic sense. Although not as crowded as climate change, there are other large organizations / institutions that address nuclear risk and have been working in this space since the early Cold War.
I agree that the nuclear risk field as a whole is less neglected than AGI safety (and probably than engineered pandemic), but I think that resilience to nuclear winter is more neglected. That's why I think overall cost-effectiveness of resilience is competitive with AGI safety.
I'm not Matt, but I do work on nuclear risk. If we went down to 1,000 to 10,000 people, recovery would take a long time, so there is a significant chance of a supervolcanic eruption or an asteroid/comet impact causing extinction. People note that agriculture and cities developed independently in several places, indicating that redeveloping them is high probability. However, that only happened when we had a stable, moderate climate, which might not recur. Furthermore, the Industrial Revolution only happened once, so there is less confidence that it would happen again. In addition, it would be more difficult with depleted fossil fuels, phosphorus, etc. Even if we did recover industry, I think our current values are better than randomly chosen values (e.g. slavery might continue longer or democracy be less prevalent).
He did it because he felt good doing it, and also to be healthy. He started thinking more and more about the meaninglessness of maintaining long-term health.
I think it's also helpful to point out that we should be good Bayesians and not believe anything 100%. It seems plausible to me that in 20 years AI may not have changed everything, but maybe we will be able to reverse aging (or maybe AI will change everything and we can upload our brains). With some chance of indefinite lifespan, I think putting some effort into health over the next 20 years, even if one is relatively young, could have a big expected value.
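As a toy expected-value calculation of that point (the numbers below are placeholders of my own, not claims about actual probabilities):

```python
# Toy numbers, purely illustrative; not claims about actual probabilities.
p_aging_solved_20yr = 0.05        # chance aging is reversed (or uploading works) within 20 years
p_health_effort_decisive = 0.10   # chance that staying healthy now is what lets you reach that point
years_gained_if_it_works = 1000   # value assigned to surviving to that point, in life-years (capped)

expected_years = p_aging_solved_20yr * p_health_effort_decisive * years_gained_if_it_works
print(f"Expected life-years from health effort: {expected_years:.1f}")  # 5.0 with these placeholders
```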
I would recommend patenting, but then committing to donate part of the profits. That has been my strategy.
Actually, I think they only simulate the fires, and therefore the soot production, for 40 min. So you may well have a good point. I do not know whether it would be a difference of a factor of 10. Figure 6 of Reisner 2018 may be helpful to figure that out, as it contains soot concentration as a function of height after 20 and 40 min of simulation. Do the green and orange curves look like they are closely approaching a stationary state?
Wow - only 40 minutes - my understanding is that actual firestorms last for hours. This graph is for the low-loading case, which did not produce a firestorm. The lines do look similar for 20 and 40 minutes, but I don't think it's the case we are interested in. They claim only the fine material that burns rapidly contributes, but I just don't think that is the case with actual firestorms. The 2018 simulation was with low loading, and most of the soot is in the lower troposphere (at least after 40 minutes), so the question is: when they actually did find a firestorm, what is the vertical soot distribution? For Livermore, it was mostly upper troposphere.

Los Alamos did recognize that they were not modeling latent heat release even in the 2019 simulation. I think this is quite important, because it's the reason that thunderstorms go to the upper troposphere (and sometimes even the stratosphere).

It's been a while since I took geophysical fluid dynamics, but the argument that the initial plume would stabilize the atmosphere seems off to me. If we look at the example of night in the atmospheric boundary layer (the lower ~1 km), the surface cools radiatively, so you get stratification (stable). But when the sun comes up, it warms the surface of the earth, you get thermals, and this upward convection actually destabilizes the boundary layer. Now, it is true that if you have a fire in a room, the hot gases can go to the ceiling and stabilize the air in the room. But if they are arguing that the plume only goes up a few kilometers (at least for the non-firestorm case), it seems like in those few kilometers the potential temperature would be more equalized, so overall less stability. Even if that's not the case, the plume has hardly even reached the upper troposphere, so there would be hardly any change in stability there. In addition, if the simulation is run over hours, then new atmosphere could come into place that has the same old stability. So I think the Livermore results are more reasonable.
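For readers without a geophysical fluid dynamics background, the stability argument can be restated with potential temperature (standard textbook definitions, not taken from the Los Alamos or Livermore papers):

$$
\theta = T\left(\frac{p_0}{p}\right)^{R/c_p},
\qquad
\frac{\partial \theta}{\partial z} > 0 \;\text{(statically stable)},
\qquad
\frac{\partial \theta}{\partial z} < 0 \;\text{(unstable, convection)}.
$$

Heating only the lowest few kilometers tends to equalize the potential temperature profile there (less stable), while leaving the gradient in the upper troposphere essentially unchanged, which is the point being made above.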
For the high fuel load of 72.62 g/cm^2, Reisner 2019 obtains a firestorm
Thanks for the correction. Unfortunately, there is no scale on their figure, but I'm pretty sure the smoke would be going into the upper troposphere (like Livermore finds). Los Alamos only simulates for a few hours, so it makes sense that hardly any would have gotten to the stratosphere; typically it takes days to loft to the stratosphere. So I think that would resolve the order-of-magnitude disagreement on the percent of soot making it into the stratosphere for a firestorm.
I think the short run time could also explain the strange behavior of only a small percentage (~10%) of the material burning at high loading. This was because the oxygen could only penetrate to the outer ring, but if they had run the model longer, most of the fuel would eventually have been consumed. Furthermore, I think a lot of smoke could be produced even without oxygen via pyrolysis because of the high temperatures, but I don't think they model that.
Los Alamos 2019 "We contend that these concrete buildings will not burn readily during a fire and are easily destroyed by the blast wave—significantly reducing the probability of a firestorm."
According to this, a 20 psi blast overpressure is required to destroy heavily built concrete buildings, and that does not even occur at the surface for an airburst of a 1 Mt weapon (if the burst height is optimized to destroy residential buildings). The 5 psi overpressure destroys residential buildings, which are typically wood framed. And it is true that the 5 psi radius is similar to the burn radius for a 15 kt weapon. But knocking down wooden buildings doesn't prevent them from burning. So I don't think Los Alamos' logic is correct even for 15 kt, let alone 400 kt, where the burn radius would be much larger than even the residential blast destruction radius (Mike's diagram above).
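As a rough, hedged illustration of why the burn radius outgrows the blast-destruction radius at higher yields (the 15 kt reference radii below are placeholders, and atmospheric attenuation of the thermal pulse is ignored): blast radii at a fixed overpressure follow cube-root scaling with yield, while the radius for a fixed thermal fluence scales roughly with the square root of yield.

```python
# Rough scaling sketch; reference radii at 15 kt are assumed placeholders, not measured values.
R_BURN_15KT_KM = 2.0   # assumed radius for ignition-level thermal fluence at 15 kt
R_5PSI_15KT_KM = 2.0   # assumed 5 psi blast radius at 15 kt (similar to the burn radius, as noted above)

def burn_radius_km(yield_kt):
    # Thermal fluence ~ yield / r^2 (no attenuation), so radius ~ yield**0.5
    return R_BURN_15KT_KM * (yield_kt / 15) ** 0.5

def blast_5psi_radius_km(yield_kt):
    # Overpressure radii follow cube-root (Hopkinson) scaling: radius ~ yield**(1/3)
    return R_5PSI_15KT_KM * (yield_kt / 15) ** (1 / 3)

for y in (15, 400):
    print(f"{y:>4} kt: burn ~ {burn_radius_km(y):.1f} km, 5 psi ~ {blast_5psi_radius_km(y):.1f} km")
```

With the same reference radius at 15 kt, the burn radius at 400 kt comes out substantially larger than the 5 psi radius under these scalings, consistent with the point above.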
Los Alamos 2018: "Fire propagation in the model occurs primarily via convective heat transfer and spotting ignition due to firebrands, and the spotting ignition model employs relatively high ignition probabilities as another worst case condition."
I think they ignore secondary ignition, e.g. from broken natural gas lines or existing heating/cooking fires spreading; the latter was all that was required for the San Francisco earthquake firestorm. So I don't think this could be described as "worst case."
Paul Christiano argues here that AI would only need to have "pico-pseudokindness" (caring about humans one part in a trillion) to take over the universe but not trash Earth's environment to the point of uninhabitability, and that at least this amount of kindness is likely.