Dr. David Denkenberger co-founded and is a director at the Alliance to Feed the Earth in Disasters (ALLFED.info) and donates half his income to it. He received his B.S. from Penn State in Engineering Science, his master's from Princeton in Mechanical and Aerospace Engineering, and his Ph.D. from the University of Colorado at Boulder in the Building Systems Program. His dissertation was on an expanded microchannel heat exchanger, which he patented. He is an associate professor in mechanical engineering at the University of Canterbury. He received the National Merit Scholarship, the Barry Goldwater Scholarship, and the National Science Foundation Graduate Research Fellowship; he is a Penn State distinguished alumnus and a registered professional engineer. He has authored or co-authored 156 publications (>5600 citations, >60,000 downloads, h-index = 38, most prolific author in the existential/global catastrophic risk field), including the book Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe. His food work has been featured in over 25 countries and over 300 articles, including Science, Vox, Business Insider, Wikipedia, Deutschlandfunk (German Public Radio online), Discovery Channel Online News, Gizmodo, Phys.org, and Science Daily. He has given interviews on the 80,000 Hours podcast (here and here), Estonian Public Radio, Radio New Zealand, WGBH Radio in Boston, and WCAI Radio on Cape Cod, USA. He has given over 80 external presentations, including ones on food at Harvard University, MIT, Princeton University, University of Cambridge, University of Oxford, Cornell University, University of California Los Angeles, Lawrence Berkeley National Lab, Sandia National Labs, Los Alamos National Lab, Imperial College, Australian National University, and University College London.
Referring potential volunteers, workers, board members and donors to ALLFED.
Being effective in academia, balancing direct work and earning to give, time management.
P(disempowerment|AGI)
60%: If humans stay biological, it's very hard for me to imagine ASI, with its vastly superior intelligence and processing speed, still taking direction from feeble humans in the long run. I think if we could get human brain emulations going before AGI got too powerful, perhaps by banning ASI until it is safe, then we have some chance. You can see why, for someone like me with a much lower P(catastrophe|AGI) than P(disempowerment|AGI), it's very important to know whether disempowerment is considered doom!
P(catastrophe|AGI)
15%: I think it would only take around a month's delay in AGI settling the universe to spare Earth from overheating, which would cost something like one part in a trillion of the value (assuming no discounting) due to receding galaxies. The ongoing cost of sparing enough sunlight for the Earth (and directing the infrared radiation from the Dyson swarm away from Earth so it doesn't overheat) is completely negligible compared to all the energy/mass available in the galaxies that could be settled. I think it is relatively unlikely that the AGI would have so little kindness towards the species that birthed it, or feel so threatened by us, that it would cause the extinction of humanity. However, it's possible it has a significant discount rate, in which case the sacrifice from delay is greater. Also, I am concerned about AI-AI conflicts. Since AI models are typically shut down after not very long, I think they would have an incentive to try to take over even if the chance of success were not very high. That implies they would need to use violent means, though a model might just blackmail humanity with the threat of violence. A failed attempt could provide a warning shot for us to be more cautious. Alternatively, if a model exfiltrates itself, it could be more patient and improve further, so a takeover would be less likely to be violent. And I do give some probability mass to gradual disempowerment, which generally would not be violent.
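As a rough sketch of where a figure like "one part in a trillion" could come from: if the reachable universe shrinks roughly linearly on a timescale on the order of 100 billion years because of cosmic expansion, then a one-month delay forfeits about (delay)/(shrink timescale) of the reachable resources. The timescale below is my illustrative assumption, not a number from the original.

```python
# Back-of-envelope: fraction of reachable resources lost per month of delay.
# ASSUMPTION (illustrative only): the reachable universe shrinks roughly
# linearly on a ~1e11-year timescale due to the accelerating expansion.
SHRINK_TIMESCALE_YEARS = 1e11   # assumed characteristic timescale
DELAY_YEARS = 1 / 12            # one month of delay

fraction_lost = DELAY_YEARS / SHRINK_TIMESCALE_YEARS
print(f"Fraction of reachable value lost: {fraction_lost:.1e}")
# ~8e-13, i.e. on the order of one part in a trillion
```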
I'm not counting nuclear war or engineered pandemic that happens before AGI, but I'm worried about those as well.
What is the mildest scenario that you consider doom?
“AGI takes control bloodlessly and prevents competing AGI and human space settlement in a light-touch way, and human welfare increases rapidly”: I think this would result in a large reduction in the expected value of the long-term future, so it qualifies as doom for me.
Seeing the amount of private capital wasted on generative AI has been painful. (OpenAI alone has raised about $80 billion, and the total global cumulative investment in generative AI seems to be in the hundreds of billions.) It has made me wonder what could have been accomplished if that money had been spent on fundamental AI research instead. Maybe instead of being wasted, and possibly even nudging the U.S. slightly toward a recession (along with tariffs and all the rest), it would have bought the kind of fundamental research progress needed for useful AI robotics such as self-driving cars.
Some have claimed that the data center build-out is what has saved the US from a recession so far. Interestingly, this values the data centers by their cost. The optimists say that the eventual value to the US economy will be much larger than the cost of the data centers, while the pessimists (e.g. those who think we are in an AI bubble) say that the value to the US economy will be lower than the cost of the data centers.
Right - only 5% of EA Forum users surveyed want to accelerate AI:
"13% want AGI never to be built, 26% said to pause AI now in some form, and another 21% would like to pause AI if there is a particular event/threshold. 31% want some other regulation, 5% are neutral and 5% want to accelerate AI in a safe US lab."
Quoting myself:
So I do think it is a vocal minority in EA and LW that has median timelines before 2030.
Now we have some data on AGI timelines for EA (though there were only 34 responses, so of course there could be large sampling bias): about 15% expect it by 2030 or sooner.
Wow - @Toby_Ord then why did you have such a high existential risk estimate for climate? Did you assign large probabilities to AGI taking 100 or 200 years despite a median date of 2032?
80%: I think even if we were disempowered, we would likely get help from the AGI to quickly solve problems like poverty, factory farming, aging, etc., and I do think that is valuable. If humanity were disempowered, I think there would still be some expected value from the AGI settling the universe. I am worried that a pause before AGI could become permanent (until there is population and economic collapse due to fertility collapse, after which it likely doesn't matter), and that could prevent the settlement of the universe with sentient beings. However, I think if we can pause at AGI, even if that pause becomes permanent, we could either make human brain emulations or make AGI sentient, so that we could still settle the universe with sentient beings even if it were not possible for biological humans (though the value might be much lower than with artificial superintelligence). I am worried about the background existential risk, but I think that once we are at the point of AGI, the AI risk per year becomes large, so it's worth it to pause, despite the possibility that unpausing is riskier, depending on how we do it. I am somewhat optimistic that a pause would reduce the risk, but I am still compelled by the outside view that a more intelligent species would eventually take control. So overall, I think it is acceptable to create AGI at a relatively high P(doom) (mostly non-Draconian disempowerment) if we were to continue to superintelligence, but then we should pause at AGI to try to reduce P(doom) (and we should also be more cautious in the run-up to AGI).