Dr. David Denkenberger co-founded and directs the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor at the University of Canterbury in mechanical engineering. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 143 publications (>4,800 citations, >50,000 downloads, h-index = 36, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.
Referring potential volunteers, workers, board members and donors to ALLFED.
Being effective in academia, balancing direct work and earning to give, time management.
| Estimate | Annual probability | Cumulative probability | Source |
|---|---|---|---|
| Existential catastrophe, annual | 0.30% | 20.04% | David Denkenberger, 2018 |
| Existential catastrophe, annual | 0.10% | 3.85% | Anders Sandberg, 2018 |
You mentioned that some of the risks in the table were for extinction rather than existential risk. However, the above two were for the reduction in long-term future potential, which could include trajectory changes that do not qualify as existential risk, such as slightly worse values ending up locked in through AI. Another source that used this definition was the 30% reduction in long-term potential from 80,000 Hours' earlier version of this profile. By the way, the estimate attributed to me was based on a poll of GCR researchers; my own estimate is lower.
The conventional wisdom is that a crisis like this leads to a panic-neglect cycle, where we oversupply caution for a while, but can't keep it up. This was the expectation of many people in biosecurity, with the main strategy being about making sure the response wasn't too narrowly focused on a re-run of Covid, instead covering a wide range of possible pandemics, and that the funding was ring-fenced so that it couldn't be funnelled away to other issues when the memory of this tragedy began to fade. But we didn't even see a panic stage: spending on biodefense for future pandemics was disappointingly weak in the UK and even worse in the US.
Have you seen data on spending for future pandemics before and after COVID?
We do not claim to be an x-risk cause area.
I think it's reasonable that biodiversity loss is unlikely to be an existential risk. However, existential risks could significantly impact biodiversity. Abrupt sunlight reduction scenarios such as nuclear winter could cause extinction of species in the wild, which could potentially be mitigated by keeping the species alive in zoos if there were sufficient food. These catastrophes, plus other catastrophes that disrupt infrastructure, such as an extreme pandemic causing people to be too fearful to show up to work in critical industries, could result in desperate people hunting species to extinction. But I think the biggest threat is AGI, which could wipe out all biodiversity. Then again, if AGI goes well, it may be able to resurrect extinct species. So it could be that the most cost-effective way of preserving biodiversity is working on AGI safety.
I think that saving lives in a catastrophe could have more flow-through effects, such as preventing collapse of civilization (from which we may not recover), reducing the likelihood of global totalitarianism, and reducing the trauma of the catastrophe, perhaps resulting in better values ending up in AGI.
That's very helpful. Do you have a rough idea of the proportions within creating a better future, e.g. climate, nuclear, bio, and AI?