Dr. David Denkenberger co-founded and directs the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor at the University of Canterbury in mechanical engineering. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 143 publications (>4800 citations, >50,000 downloads, h-index = 36, second most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security after Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here) and on Estonian Public Radio, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, and University College London.
Referring potential volunteers, workers, board members and donors to ALLFED.
Being effective in academia, balancing direct work and earning to give, time management.
I like how comprehensive this is.
Note that a one-in-a-millennium eruption together with a once-in-a-century plague, like the Plague of Justinian, still wasn't enough to cause existential risk (humans aren't extinct yet), though the ensuing Little Ice Age could arguably be categorized as a catastrophic risk.
Minor, but existential risk includes more than extinction. So it could be "humans haven't undergone an unrecoverable collapse yet (or some other way of losing future potential)."
I don't know of any good data source on how many people are currently earning to give, but our internal data at GWWC suggests it could be at least in the hundreds (also depending on which definition you use).
According to the 2022 EA survey, out of 3270 people who answered, 335 were earning to give. Since there are many more EAs than 3270, I think the true number is more like a thousand people earning to give. But it's true they might not be using the 80k definition:
80k's current definition: “We say someone is earning to give when they:
- Work a job that’s higher earning than they would have otherwise but that they believe is morally neutral or positive
- Donate a large fraction of the extra earnings, typically 20-50% of their total salary
- Donate to organisations they think are highly effective (i.e. funding-constrained organisations working on big, neglected global problems)”
I agree with you that it should not have to be a different job, but I disagree that 20% is too low. There are many (most?) EAs who do not have a direct high-impact career or do a lot of high-impact volunteering. So, roughly speaking, the other way of having impact is earning to give, and if people can give 10%, I think that should qualify.
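The extrapolation from the 2022 EA survey figures above can be sketched as a quick back-of-the-envelope calculation. Note the total EA population used here is an illustrative assumption, not a figure from the survey:

```python
# Back-of-the-envelope extrapolation of earning-to-give numbers.
survey_respondents = 3270   # people who answered the 2022 EA survey question
etg_respondents = 335       # respondents who reported earning to give

etg_share = etg_respondents / survey_respondents  # roughly 10% of respondents

# Assumed total number of EAs; this is a hypothetical round figure
# chosen only to illustrate the scaling, and is highly uncertain.
assumed_total_eas = 10_000

estimated_etg = round(etg_share * assumed_total_eas)
print(estimated_etg)  # on the order of a thousand under these assumptions
```

With a ~10% earning-to-give share, any assumed community size in the low tens of thousands lands the estimate around a thousand, consistent with the comment above.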
That's interesting to think about: the transition from early agriculture/pastoralism to pre-industrial society. The analyses I've seen focus on just recovering agriculture and/or industry. Do you think the stages in between could be a significant bottleneck, or would they just take time? Not a peer-reviewed study, but there were some estimates of future recovery times here.
| Existential catastrophe, annual | 0.30% | 20.04% | David Denkenberger, 2018 |
| Existential catastrophe, annual | 0.10% | 3.85% | Anders Sandberg, 2018 |
You mentioned how some of the risks in the table were for extinction, rather than existential risk. However, the above two were for the reduction in long-term future potential, which could include trajectory changes that do not qualify as existential risk, such as slightly worse values getting locked in by AI. Another source using this definition was the 30% reduction in long-term potential from 80,000 Hours' earlier version of this profile. By the way, the source attributed to me was based on a poll of GCR researchers; my own estimate is lower.
Thanks for all you two do! If you don't mind me asking, how do the returns on your investments factor in? E.g., is the negative savings rate offset by returns such that your net worth is not falling?