Denkenberger

1835 karma · Fairbanks, AK, USA · Joined Apr 2015

Bio

Dr. David Denkenberger co-founded and directs the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on his patented expanded microchannel heat exchanger. He is an assistant professor at the University of Alaska Fairbanks, jointly appointed in mechanical engineering and the Alaska Center for Energy and Power. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer.

He has authored or co-authored 124 publications (>3,000 citations, >50,000 downloads, h-index = 28, third most prolific author in the existential/global catastrophic risk field per https://www.x-risk.net/), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 200 articles across more than 25 countries, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has been interviewed twice on the 80,000 Hours podcast (https://80000hours.org/podcast/episodes/david-denkenberger-allfed-and-feeding-everyone-no-matter-what/ and https://80000hours.org/podcast/episodes/david-denkenberger-sahil-shah-using-paper-mills-and-seaweed-in-catastrophes/) and on Estonian Public Radio, WGBH Radio (Boston), and WCAI Radio (Cape Cod, USA). He has given over 80 external presentations, including talks on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.

Comments (561)

Thanks for the clarification. I would say this is quite optimistic, but I look forward to your future cost-effectiveness work.

I've been wondering about cost-effectiveness in this space for a long time, so thanks for writing this and especially for releasing the quantitative model! At the top, it looks like you are saying that $100 million per year for 10 years could reduce x-risk by about one percentage point, meaning about 100 basis points per billion dollars (one basis point = 0.01%). Is that correct? In the model (column AF of the X-risk tab), you say that the effort would be over a century, so does that mean spending $10 billion total? Elsewhere you say you consider spending $100 million per year on just one institution, so are you really talking about spending $100 billion this century on the top 10 institutions? That would be about 1 basis point per billion dollars, which is in the range of cost-effectiveness values collected here.

So if I'm understanding you correctly, $10 billion spent over the century would reduce the existential risk from the US Government Executive Office of the President by 50%, and 40% of that reduction would have happened anyway, so you are reducing overall existential risk by 0.135%, which would be 1.35 basis points per billion dollars?
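To make sure I'm running the numbers the same way you are, here is a minimal sketch of the arithmetic as I understand it. The spending figures and the 0.45-percentage-point EOP risk share are my assumptions, back-solved from the numbers above, not confirmed figures from the model:

```python
# Rough sketch of the cost-effectiveness arithmetic above (1 bp = 0.01%).
# All inputs are my reading of the post and model, not confirmed figures.

def bps_per_billion(risk_reduction_pct_points: float, spend_billions: float) -> float:
    """Basis points of x-risk reduced per $1 billion spent."""
    return risk_reduction_pct_points * 100 / spend_billions

# Headline reading: $100M/year for 10 years ($1B) buys ~1 percentage point.
print(bps_per_billion(1.0, 1))      # 100.0 bps per $billion

# Century-long effort on the top 10 institutions: $100B for the same point.
print(bps_per_billion(1.0, 100))    # 1.0 bp per $billion

# EOP example: 50% reduction, of which 40% is non-counterfactual, assuming
# the EOP accounts for 0.45 percentage points of total x-risk (back-solved).
reduction = 0.45 * 0.5 * 0.6        # = 0.135 percentage points
print(bps_per_billion(reduction, 10))  # 1.35 bps per $billion
```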

I agree - I personally really appreciate humor on the EA Forum. Also, EA is generally not cold and calculating; it is "warm and calculating."

Interesting post. I recently asked the question whether anyone had quantified the percent of tasks that computers are superhuman at as a function of time - has anyone?

A lot of the decline in global GDP growth rate is due to lower population growth rate. But it is true that the GDP per capita growth in “frontier” (developed) countries has been falling.
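The decomposition I have in mind is just the standard growth-accounting identity (exact in log growth rates, approximately true otherwise):

$$ g_{\text{GDP}} \approx g_{\text{population}} + g_{\text{GDP per capita}} $$

so a falling population growth rate drags down headline GDP growth even if per-capita growth were unchanged.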

“Electric motors, pumps, battery charging, hydroelectric power, electricity transmission — among many other things — operate at near perfect efficiency (often around 90%).”

This is seriously cherry-picked. Anders Sandberg is writing a book called Grand Futures, in which he discusses where the fundamental limits lie. In almost all cases, we are far from those limits. For instance, if it is 10°C colder outside than inside, the fundamental limit on the efficiency of a heat pump is about 3000% (a coefficient of performance of about 30), yet heat pumps today reach a coefficient of performance of only around four. The conversion of solar energy to food is around 0.1% efficient for a typical field, and an order of magnitude lower than that for animal systems. Comminution (breaking up rock) is only about 1% efficient.
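For the heat pump case, the fundamental limit is the Carnot coefficient of performance. Taking, say, 20°C (293 K) inside and 10 K colder outside (the indoor temperature is my illustrative assumption):

$$ \mathrm{COP}_{\max} = \frac{T_{\text{inside}}}{T_{\text{inside}} - T_{\text{outside}}} = \frac{293\ \text{K}}{10\ \text{K}} \approx 29 $$

i.e. roughly 3000% efficiency, against the roughly 4 achieved in practice.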

"This decline is arguably what one would expect given the predicted end of Moore's law, projected to hit the ceiling around 2025. In particular, in approaching the end of Moore's law, we should arguably expect progress to steadily decline as we get closer to the ceiling, rather than expecting a sudden slowdown that only kicks in once we reach 2025 (or thereabout).

Progress on other measures has also greatly slowed down, such as processor clock speeds, the cost of hard drive storage (cf. Kryder's law), and computations per joule (cf. Koomey's law)."

Irreversible computing could be about five orders of magnitude more efficient than what we do now, and reversible computing could be many orders of magnitude more efficient than that.
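For reference, the standard benchmark here is Landauer's limit, the minimum energy to erase one bit in irreversible computation at room temperature:

$$ E_{\min} = k_B T \ln 2 \approx (1.38\times10^{-23}\ \text{J/K})(300\ \text{K})(0.693) \approx 2.9\times10^{-21}\ \text{J} $$

which is several orders of magnitude below the switching energies of current hardware; reversible computing is not bound by this limit at all.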

On moral hazard, I did some analysis in a journal article of ours:

"Moral hazard would be if awareness of a food backup plan makes nuclear war more likely or more intense. It is unlikely that, in the heat of the moment, the decision to go to nuclear war (whether accidental, inadvertent, or intentional) would give much consideration to the nontarget countries. However, awareness of a backup plan could result in increased arsenals relative to business as usual, as awareness of the threat of nuclear winter likely contributed to the reduction in arsenals [74]. Mikhail Gorbachev stated that a reason for reducing the nuclear arsenal of the USSR was the studies predicting nuclear winter and therefore destruction outside of the target countries [75]. One can look at how much nuclear arsenals changed while the Cold War was still in effect (after the Cold War, reduced tensions were probably the main reason for reduction in stockpiles). This was ~20% [76]. The perceived consequences of nuclear war changed from hundreds of millions of dead to billions of dead, so roughly an order of magnitude. The reduction in damage from reducing the number of warheads by 20% is significantly lower than 20% because of marginal nuclear weapons targeting lower population and fuel loading density areas. Therefore, the reduction in impact might have been around 10%. Therefore, with an increase in damage with the perception of nuclear winter of approximately 1000% and a reduction in the damage potential due to a smaller arsenal of 10%, the elasticity would be roughly 0.01. Therefore, the moral hazard term of loss in net effectiveness of the interventions would be 1%."

Also, as Aron pointed out, resilience protects against other catastrophes, such as supervolcanic eruptions and asteroid/comet impacts. Similarly, there is some evidence that people drive less safely if they are wearing a seatbelt, but overall we are better off with a seatbelt. So I don't think moral hazard is a significant argument against resilience.

I think direct cost-effectiveness analyses like this journal article are more robust, especially for interventions, than Importance, Neglectedness, and Tractability. But it is interesting to think about tractability separately. It is true that there is a lot of uncertainty about what the environment would be like post-catastrophe. However, we have calculated that resilient foods would greatly improve the situation both with and without global food trade, so I think they are a robust intervention. Also, if you look at the state of resilience to nuclear winter pre-2014, it was basically to store more food, which would cost tens of trillions of dollars, would not protect you right away, and, if done quickly, would raise prices and exacerbate current malnutrition. In 2014, we estimated that resilient foods could technically be scaled up to feed everyone. And in the last eight years, we have done research estimating that this could also be done affordably for most people. So I think there has been a lot of progress with just a few million dollars spent, indicating tractability.

"I mostly disagree on the point about skillsets: I think both intervention targets (focus on tail risks vs. preventing any nuclear deployment) are big enough to require input from people with very diverse skillsets, so I think it will be relatively rare for a person to be able to meaningfully contribute to only one of the two. In particular, I believe that both problems are in need of policy scholars, activists, and policymakers, and a focus on the preparation side might lead people in those fields to focus less on the goal of preventing any kind of nuclear deployment."

I think that Aron was talking about prevention versus resilience. Resilience requires more engineering.

You seem not to be considering global catastrophic risk. This would generally not cause extinction, but it could cause a collapse of civilization from which we may not recover. And even if we do recover, we may end up losing a significant fraction of long-term value. Even if there is no collapse of civilization, it could make global totalitarianism more likely, or worse values could end up in AI. At least some of these could be considered existential risks in the sense that much of the long-term value is lost. And yet preventing or mitigating them can generally be justified based on saving lives in the present generation.

Here are some ALLFED resources, here are mainly non-ALLFED resources, a couple of which I'd like to particularly highlight: Defence in Depth and The Knowledge. There is also Luisa Rodriguez's work, e.g. this.

By the way, it looks like the comment is now heavily upvoted. I've seen this happen quite a few times, so it seems like it might be good to withhold judgment about the net votes for a day or two. But of course it could be that it became highly upvoted because of reactions like this, so I'm not sure what the best course of action is.

Get another billionaire donor. Presumably, this is hard because otherwise EA would've done it already, but there might be factors that are hidden from me.

It's a process to recruit billionaires/turn EAs into billionaires, but one estimate was another 3.5 EA billionaires by 2027 (written pre-FTX implosion). The analyses I've seen of last-dollar cost-effectiveness have tended to ignore the possibility of EA adding funds over time. Of course we don't want to run out of money just when we need a big surge. But we could spend a lot of money in the next five years and then reevaluate if we have not recruited significant additional assets. This could make a lot of sense for people with short AI timelines (see here for an interesting model) or for people who are worried about the current nuclear risk. But more generally, by doing more things now, we can show concrete results, which I think would help recruit additional funds. I may be biased as I head ALLFED, but I think the optimal course of action for the long-term future is to maintain the funding rate that was occurring in 2022, and likely even increase it.

I will be starting an associate professor position in mechanical engineering at the University of Canterbury in January 2023.
