"Gotta go fast." ―Sonic[1]
A few prominent transhumanists have argued that building friendly AI as quickly as possible may be our best chance to prevent a "grey goo" catastrophe, in which self-replicating nanobots kill everyone on Earth. In 1999, Eliezer Yudkowsky forecast that a nanotech catastrophe would occur sometime between 2003 and 2015. He estimated a 70%+ chance of human extinction from nanotechnology and advocated rushing to build friendly AI (in current terminology, aligned AGI or safe AGI) in order to prevent the end of all human life.[2] At the time, Yudkowsky had begun work on his own design for a friendly AI, which he called Elisson. He wrote:
If we don't get some kind of transhuman intelligence around *real soon*, we're dead meat. Remember, from an altruistic perspective, I don't care whether the Singularity is now or in ten thousand years - the reason I'm in a rush has nothing whatsoever to do with the meaning of life. I'm sure that humanity will create a Singularity of one kind or another, if it survives. But the longer it takes to get to the Singularity, the higher the chance of humanity wiping itself out.
My current estimate, as of right now, is that humanity has no more than a 30% chance of making it, probably less. The most realistic estimate for a seed AI transcendence is 2020; nanowar, before 2015. The most optimistic estimate for project Elisson would be 2006; the earliest nanowar, 2003.
So we have a chance, but do you see why I'm not being picky about what kind of Singularity I'll accept?
Fortunately, in 2000, Yudkowsky forecast[3] that he and his colleagues at the Singularity Institute for Artificial Intelligence, or SIAI (now the Machine Intelligence Research Institute, or MIRI), would create friendly AI (again, aligned or safe AGI) somewhere between five and twenty years later, and probably in around eight to ten years:
The Singularity Institute seriously intends to build a true general intelligence, possessed of all the key subsystems of human intelligence, plus design features unique to AI. We do not hold that all the complex features of the human mind are "emergent", or that intelligence is the result of some simple architectural principle, or that general intelligence will appear if we simply add enough data or computing power. We are willing to do the work required to duplicate the massive complexity of human intelligence; to explore the functionality and behavior of each system and subsystem until we have a complete blueprint for a mind. For more about our Artificial Intelligence plans, see the document Coding a Transhuman AI.
Our specific cognitive architecture and development plan forms our basis for answering questions such as "Will transhumans be friendly to humanity?" and "When will the Singularity occur?" At the Singularity Institute, we believe that the answer to the first question is "Yes" with respect to our proposed AI design - if we didn't believe that, the Singularity Institute would not exist. Our best guess for the timescale is that our final-stage AI will reach transhumanity sometime between 2005 and 2020, probably around 2008 or 2010. As always with basic research, this is only a guess, and heavily contingent on funding levels.
Nick Bostrom and Ray Kurzweil have made similar arguments about friendly AI as a defense against "grey goo" nanobots.[4][5] A fictionalized version of this scenario plays out in Kurzweil’s 2010 film The Singularity Is Near. Ben Goertzel, who helped popularize the term "artificial general intelligence", has also made an argument along these lines.[6] Goertzel proposes an "AGI Nanny" that could steward humanity through the development of dangerous technologies like nanotechnology.
In 2002, Bostrom wrote:
Some technologies seem to be especially worth promoting because they can help in reducing a broad range of threats. Superintelligence is one of these. Although it has its own dangers (expounded in preceding sections), these are dangers that we will have to face at some point no matter what. But getting superintelligence early is desirable because it would help diminish other risks. A superintelligence could advise us on policy. Superintelligence would make the progress curve for nanotechnology much steeper, thus shortening the period of vulnerability between the development of dangerous nanoreplicators and the deployment of adequate defenses. By contrast, getting nanotechnology before superintelligence would do little to diminish the risks of superintelligence.
The argument that we need to rush the development of friendly AI to save the world from dangerous nanotech may seem far-fetched, and certainly some of the details are wrong. Yet consider the precautionary principle, expected value, and the long-term future. If the chance of preventing human extinction and saving 10^52 future lives[7] is even one in ten duodecillion[8], or 1 in 10^40 (in other words, a probability of 10^-40), then the expected value is equivalent to saving 1 trillion lives in the present. GiveWell’s estimate of the cost to save a life is $3,000.[9] So we should be willing to allocate $3 quadrillion toward rushing to build friendly AI before nanobots wipe out humanity. In fact, since there is some small probability (it doesn’t matter how small) that the number of potential future lives is infinite,[10] we should be willing to spend an infinite amount of money on building friendly AI as quickly as possible.
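For concreteness, here is a minimal sketch of that expected-value arithmetic in Python, using only the figures assumed above; the script and variable names are mine, purely for illustration, not an endorsement of the inputs:

```python
# Back-of-the-envelope expected value calculation using the post's own figures.
# All three inputs are assumptions taken from the text, not estimates of my own.

future_lives = 10**52         # potential future lives (Bostrom, footnote 7)
p_prevention = 10**-40        # assumed 1-in-ten-duodecillion chance of preventing extinction
cost_per_life_usd = 3_000     # GiveWell's rough cost to save one life today (footnote 9)

expected_lives_saved = future_lives * p_prevention             # ~1e12, i.e. one trillion
implied_budget_usd = expected_lives_saved * cost_per_life_usd  # ~3e15, i.e. $3 quadrillion

print(f"Expected lives saved: {expected_lives_saved:.1e}")
print(f"Implied willingness to pay: ${implied_budget_usd:.1e}")
```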
I believe there’s at least a 1 in 10^48 chance and certainly at least a 1 in ∞ chance that I can create friendly AI in about ten years, or twenty years, tops, on a budget of $1 million a year. I encourage funders to reach out for my direct deposit info. If necessary, I can create a 501(c)(3), but I estimate the expected disvalue of the inconvenience to be the equivalent of between a hundred and infinity human deaths.[11]
- ^
The hedgehog.
- ^
Yudkowsky, Eliezer. “Re: Yudkowsky’s AI (Again).” Extropians mailing list, 1999, https://diyhpl.us/~bryan/irc/extropians/www.lucifer.com/exi-lists/extropians.1Q99/3561.html.
- ^
“Introduction to the Singularity.” The Singularity Institute for Artificial Intelligence, 17 Oct. 2000, https://web.archive.org/web/20001017124429/http://www.singinst.org/intro.html.
- ^
Bostrom, N. “Existential Risks: Analyzing Human Extinction Scenarios and Related Hazards.” Journal of Evolution and Technology, vol. 9, Institute for Ethics and Emerging Technologies, 2002, https://nickbostrom.com/existential/risks.
- ^
Kurzweil, Ray. “Nanotechnology Dangers and Defenses.” Nanotechnology Perceptions, vol. 2, no. 1a, Mar. 2006, https://nano-ntp.com/index.php/nano/article/view/270/179.
- ^
Goertzel, Ben. “Superintelligence: Fears, Promises and Potentials: Reflections on Bostrom’s Superintelligence, Yudkowsky’s From AI to Zombies, and Weaver and Veitas’s ‘Open-Ended Intelligence.’” Journal of Ethics and Emerging Technologies, vol. 25, no. 2, Dec. 2015, pp. 55–87, https://jeet.ieet.org/index.php/home/article/view/48/48.
- ^
Bostrom, Nick. “Existential Risk Prevention as Global Priority.” Global Policy, vol. 4, no. 1, Feb. 2013, pp. 15–31, https://existential-risk.com/concept.pdf.
- ^
1 in 10,000,000,000,000,000,000,000,000,000,000,000,000,000.
- ^
“How Much Does It Cost to Save a Life?” GiveWell, https://www.givewell.org/how-much-does-it-cost-to-save-a-life.
- ^
Bentham’s Bulldog. “Philanthropy With Infinite Stakes.” Bentham’s Newsletter (Substack), 19 Nov. 2025, https://benthams.substack.com/p/philanthropy-with-infinite-stakes.
- ^
I also prefer to receive payment in Monero or via deposit to a Swiss bank account.

Executive summary: The author reviews early transhumanist arguments that rushing to build friendly AI could prevent nanotech “grey goo” extinction, and concludes—largely by reductio—that expected value reasoning combined with speculative probabilities can be used to justify arbitrarily extreme funding demands without reliable grounding.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
I thought Altman and the Amodeis had already altruistically devoted their lives to saving us from grey goo. Since they're going to do this before 2027, you may already be too late.
Peter Thiel wants to know if your AI can be unfriendly enough to make a weapon out of it.
Oh man, I remember the days when Eliezer still called it Friendly and Unfriendly AI. I actually used one of those terms in a question at a Q&A after a tutorial by the then less famous Yoshua Bengio at the 27th Canadian Conference on AI in 2014. He jokingly replied by asking if I was a journalist, before giving a more serious answer: we were so far away from having to worry about that kind of thing (AI models back then were much more primitive, and it was hard to imagine an object recognizer being dangerous). Fun times.
I appreciate the irony and see the value in this, but I'm afraid that you're going to be downvoted into oblivion because of your last paragraph.
Just as long as I get my funding... The lightcone depends on it...