Note: as a result of the discussions I’ve had here in the comment section and elsewhere, my views have changed since I made this post. I no longer think permanently stalling technological progress is a realistic option, and am questioning whether a long-term AI development pause is even feasible. (H.F., Jan 15, 2024)
———
By this, I mean a world in which:
- Humans remain the dominant intelligent, technological species on Earth's landmasses for a long period of time (> ~10,000 years).
- AGI is never developed, or it gets banned / limited in the interests of human safety. AI never has much social or economic impact.
- Narrow AI never advances much beyond where it is today, or it is banned / limited in the interests of human safety.
- Mind uploading is impossible or never pursued.
- Life extension (beyond modest gains due to modern medicine) isn't possible, or is never pursued.
- Any form of transhumanist initiatives are impossible or never pursued.
- No contact is made with alien species or extraterrestrial AIs, no greater-than-human intelligences are discovered anywhere in the universe.
- Every human grows, peaks, ages, and passes away within ~100 years of their birth, and this continues for the remainder of the human species' lifetime.
Most other EAs I've talked to have indicated that this sort of future is suboptimal, undesirable, or best avoided, and this seems to be a widespread position among AI researchers as well (1). Even MIRI founder Eliezer Yudkowsky, perhaps the most well-known AI abolitionist outside of EA circles, wouldn't go so far as to say that AGI should never be developed or that transhumanist projects should never be pursued (2). And he isn't alone: many researchers both within and outside the EA community hold similar views on P(extinction) and P(societal collapse), and they still wouldn't accept the idea that the human condition should never be altered via technological means.
My question is: why can't we just accept the human condition as it existed before smarter-than-human AI (and fundamental alterations to our nature) were considered more than pure fantasy? After all, the best way to stop a hostile, unaligned AI is never to invent it in the first place. The best way to avoid the destruction of future value by smarter-than-human artificial intelligence is to avoid an obsession with present utility and convenience.
So why aren't more EA-aligned organizations and initiatives (other than MIRI) presenting global, strictly enforced bans on advanced AI training as a solution to AI-generated x-risk? Why isn't there more discussion of acceptance (of the traditional human condition) as an antidote to the risks of AGI, rather than relying solely on alignment research and safety practices to provide a safe path forward for AI (I'm not convinced such a path exists)?
Let's set aside the question of whether AI development can practically be stopped at this stage, and focus on the philosophical issues here.
References:
- Grace, K. [Katja_Grace] (2024, January 5). Survey of 2,778 AI authors: six parts in pictures. EA Forum.
- Yudkowsky, E. S. (2023, March 29). The only way to deal with the threat from AI? Shut it down. Time. https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/
Thank you for writing this up, Hayven! I think there are multiple reasons why it will be very difficult for humans to settle for less. Primarily, I suspect this is because a large part of our human nature is to strive to maximize resources and to consistently improve our conditions of life. There are clear evolutionary advantages to having this ingrained in a species. This tendency to want more took us from picking berries and hunting mammoths to living in houses with heating, connecting with our loved ones via video calls, and benefiting from better healthcare. In other words, I don't think the human condition was different in 2010; it was pretty much the same as it is now, just as it was 20,000 years ago: "Bigger, better, faster."
This tendency, combined with our short-sightedness, is a perfect recipe for human extinction. If we want to overcome the Great Filter, I think the only realistic way we will accomplish this is by figuring out how to combine this desire for more with greater wisdom and better coordination. It seems that we are far from that point, unfortunately.
A key takeaway for me is the increased likelihood of success of interventions that guide, rather than restrict, human consumption and development. These strategies seem more feasible because they align with, rather than oppose, human tendencies towards growth and improvement. That does not mean they should be favoured, though, only that they are more likely to succeed. I would be glad to get pushback here.
I can highly recommend the book The Molecule of More to read more about this perspective (especially Chapter 6).
I hope you are okay with the storm! Good luck there. And indeed, figuring out how to work with one's evolutionary tendencies is not always straightforward. For many personal decisions this is easier, such as recognising that sitting 10 hours a day at a desk is not what our bodies evolved for. "So let's go for a run!" When it comes to large-scale coordination, however, things get trickier...
"I think what has changed since 2010 has been general awareness of transcending human limits as a realistic possibility." -> I agree with this and your following points.