
 [memetic status: stating directly despite it being a clear consequence of core AI risk knowledge because many people have "but nature will survive us" antibodies to other classes of doom and misapply them here.]

Unfortunately, no.[1]

Technically, “Nature”, meaning the fundamental physical laws, will continue. However, people usually mean forests, oceans, fungi, bacteria, and generally biological life when they say “nature”, and those would not have much chance competing against a misaligned superintelligence for resources like sunlight and atoms, which are useful to both biological and artificial systems.

There’s a thought that comforts many people when they imagine humanity going extinct due to a nuclear catastrophe or runaway global warming: Once the mushroom clouds or CO2 levels have settled, nature will reclaim the cities. Maybe mankind in our hubris will have wounded Mother Earth and paid the price ourselves, but she’ll recover in time, and she has all the time in the world.

AI is different. It would not simply destroy human civilization with brute force, leaving the flows of energy and other life-sustaining resources open for nature to make a resurgence. Instead, AI would still exist after wiping humans out, and feed on the same resources nature needs, but much more capably.

You can draw strong parallels to the way humanity has captured huge parts of the biosphere for ourselves. Except, in the case of AI, we’re the slow-moving process which is unable to keep up.

A misaligned superintelligence would have many cognitive superpowers, which include developing advanced technology. For almost any objective it might have, it would require basic physical resources, like atoms to construct things which further its goals, and energy (such as that from sunlight) to power those things. These resources are also essential to current life forms, and, just as humans drove so many species extinct by hunting or outcompeting them, AI could do the same to all life, and to the planet itself.

Planets are not a particularly efficient use of atoms for most goals, and many goals which an AI may arrive at can demand an unbounded amount of resources. For each square meter of usable surface, there are millions of tons of magma and other materials locked up. Rearranging these into a more efficient configuration could look like strip mining the entire planet and firing the extracted materials into space using self-replicating factories, and then using those materials to build megastructures in space to harness a large fraction of the sun’s output. Looking further out, the sun and other stars are themselves huge piles of resources spilling unused energy out into space, and no law of physics renders them invulnerable to sufficiently advanced technology.
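As a rough sanity check on the "millions of tons" figure, here is a minimal back-of-envelope sketch using standard approximate values for Earth's total mass and surface area (these numbers are not stated in the article itself):

```python
# Back-of-envelope check: how much material sits under each square meter of Earth's surface?
earth_mass_kg = 5.97e24      # approximate total mass of Earth
surface_area_m2 = 5.1e14     # approximate total surface area of Earth

mass_per_m2_tonnes = (earth_mass_kg / surface_area_m2) / 1000
print(f"~{mass_per_m2_tonnes:.1e} tonnes per square meter")  # ~1.2e7, i.e. on the order of ten million tonnes
```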

Some time after a misaligned, optimizing AI wipes out humanity, it is likely that there will be no Earth and no biological life, but only a rapidly expanding sphere of darkness eating through the Milky Way as the AI reaches and extinguishes or envelops nearby stars.

This is generally considered a less comforting thought.

This is an experiment in sharing highlighted content from aisafety.info. Browse around to view some of the other 300 articles which are live, or explore related questions!

  1. ^

     There are some scenarios where this might happen, especially in extreme cases of misuse rather than agentic misaligned systems, or in edge cases where a system is misaligned with respect to humanity but terminally values keeping nature around, but this is not the mainline way things go.

Comments (10)



I think literal extinction is unlikely even conditional on misaligned AI takeover due to:

  • The potential for the AI to be at least a tiny bit "kind" (same as humans probably wouldn't kill all aliens).[1]
  • Decision theory/trade reasons

This is discussed in more detail here and here.

Insofar as humans and/or aliens care about nature, similar arguments apply there too, though this is mostly beside the point: if humans survive and have (even a tiny bit of) resources, they can preserve some nature easily.

I find it annoying how confident this article is without really bothering to engage with the relevant arguments here.

(Same goes for many other posts asserting that AIs will disassemble humans for their atoms.)

(This comment echoes Owen's to some extent.)

  1. ^

    This includes the potential for the AI to have preferences that are morally valuable from a typical human perspective.

(cross posting my reply to your cross-posted comment)

I'm not arguing about p(total human extinction|superintelligence), but about p(nature survives|total human extinction from superintelligence), as this is a conditional probability I see people getting very wrong sometimes.
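(To make the distinction concrete, here is a minimal sketch with purely hypothetical numbers, not estimates from me or anyone in this thread; the article's claim concerns only the second factor.)

```python
# Hypothetical numbers for illustration only -- not estimates from this thread.
p_extinction_given_asi = 0.5       # p(total human extinction | misaligned superintelligence)
p_nature_given_extinction = 0.001  # p(nature survives | total human extinction from superintelligence)

# Chance that, conditional on misaligned superintelligence, humans go extinct AND nature survives:
p_extinction_and_nature = p_extinction_given_asi * p_nature_given_extinction
print(p_extinction_and_nature)  # 0.0005 under these made-up inputs
```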

It's not implausible to me that we survive due to decision-theoretic reasons; this seems possible, though it's not my default expectation (I mostly expect Decision theory does not imply we get nice things, unless we manually win a decent chunk more timelines than I expect).

My confidence is in the claim "if AI wipes out humans, it will wipe out nature". I don't engage with counterarguments to a separate claim, as that is beyond the scope of this post and I don't have much to add over existing literature like the other posts you linked.

I just wanted to say that the new aisafety.info website looks great! I have not looked at everything in detail, just clicked around a bit, but the articles seem of good quality to me.

I will probably mainly recommend aisafety as an introductory resource.

Thanks! Feel free to leave comments or suggestions on the Google Docs which make up our backend.

I think this is a plausible consequence, but not a clear one.

Many people put significant value on conservation. It is plausible that some version of this would survive in an AI which was somewhat misaligned (especially since conservation might be a reasonably simple goal to point towards), such that it would spend some fraction of its resources towards preserving nature -- and one planet is a tiny fraction of the resources it could expect to end up with.

The most straightforward argument against this is that such an AI maybe wouldn't wipe out all humans. I tend to agree, and a good amount of my probability mass on "existential catastrophe from misaligned AI" does not involve human extinction. But I think there's some possible middle ground where an AI which was not capable of reliably seizing power without driving humans extinct, but was capable if it allowed itself to do so, could wipe them out without eliminating nature (which would presumably pose much less threat to its ascendancy).

Whether AI would wipe out humans entirely is a separate question (and one which has been debated extensively, to the point where I don't think I have much to add to that conversation, even if I have opinions).

What I'm arguing for here is narrowly: Would AI which wipes out humans leave nature intact? I think the answer to that is pretty clearly no by default.

Yeah, I understood this. This is why I've focused on a particular case for it valuing nature which I think could be compatible with wiping out humans (not going into the other cases that Ryan discusses, which I think would be more likely to involve keeping humans around). I needed to bring in the point about humans surviving to address the counterargument "oh but in that case probably humans would survive too" (which I think is probable but not certain). Anyway maybe I was slightly overstating the point? Like I agree that in this scenario the most likely outcome is that nature doesn't meaningfully survive. But it sounded like you were arguing that it was obvious that nature wouldn't survive, which doesn't sound right to me.

I don't claim it's impossible that nature survives an AI apocalypse which kills off humanity, but I do think it's an extremely thin sliver of the outcome space (<0.1%). What odds would you assign to this?

Ok, I guess around 1%? But this is partially driven by model uncertainty; I don't actually feel confident your number is too small.

I'm much higher (tens of percentage points) on "chance nature survives conditional on most humans being wiped out"; it's just that most of these scenarios involve some small number of humans being kept around so it's not literal extinction. (And I think these scenarios are a good part of things people intuitively imagine and worry about when you talk about human extinction from AI, even though the label isn't literally applicable.)

Thanks for asking explicitly about the odds, I might not have noticed this distinction otherwise.

I thought about where the logic in the post seemed to be going wrong, and it led me to write this quick take on why most possible goals of AI systems are partially concerned with process and not just outcomes.
