Matthew_Barnett

I tentatively agree with your statement that,

To me, it seems much more likely that Earth-originating intelligence will go extinct this century than, say, in the 8973rd century AD.

That said, I still suspect the absolute probability of total extinction of intelligent life during the 21st century is very low. To be more precise, I'd put this probability at around 1% (to be clear: I recognize other people may not agree that this credence should count as "extremely low" or "very low" in this context). To justify this statement, I would highlight several key factors:

  1. Throughout hundreds of millions of years, complex life has demonstrated remarkable resilience. Since the first vertebrates colonized land in the late Devonian period (approximately 375–360 million years ago), no extinction event has eradicated all species capable of complex cognition. Even after the most catastrophic mass extinctions, such as the end-Permian and K-Pg events, vertebrates not only recovered but surpassed their previous levels of ecological dominance and cognitive complexity, as seen in the increasing brain size and adaptability of many lineages over time.
  2. Unlike non-intelligent organisms, intelligent life—starting with humans—possesses advanced planning abilities and an exceptional capacity to adapt to changing environments. Humans have successfully settled in nearly every climate and terrestrial habitat on Earth, from tropical jungles to arid deserts and even Antarctica. This extreme adaptability suggests that intelligent life is less vulnerable to complete extinction compared to other complex life forms.
  3. As human civilization has advanced, our species has become increasingly robust against most types of extinction events rather than more fragile. Technological progress has expanded our ability to mitigate threats, whether they come from natural disasters or disease. Our massive global population further reduces the likelihood that any single event could exterminate every last human, while our growing capacity to detect and neutralize threats makes us better equipped to survive crises.
  4. History shows that even in cases of large-scale violence and genocide, the goal has almost always been the destruction of rival groups—not the annihilation of all life, including the perpetrators themselves. This suggests that intelligent beings have strong instrumental reasons to avoid total extinction events. Even in scenarios involving genocidal warfare, the likelihood of all intelligent beings willingly or accidentally destroying all life—including their own—seems very low.
  5. I have yet to see compelling evidence that near-term or medium-term technological advances will introduce a weapon or catastrophe capable of wiping out all forms of intelligent life. While there are certainly near-term technological risks that threaten human life, none currently appear to pose a credible risk of the total extinction of intelligent life.
  6. Some of the most destructive long-term technologies—such as asteroid manipulation for planetary bombardment—are likely to develop alongside technologies that enhance our ability to survive and expand into space. As our capacity for destruction grows, so too will our ability to establish off-world colonies and secure alternative survival strategies. This suggests that the overall trajectory of intelligent life seems to be toward increasing resilience, not increasing vulnerability.
  7. Artificial life could rapidly evolve to become highly resilient to environmental shocks. Future AIs could be designed to be at least as robust as insects—able to survive in a wide range of extreme and unpredictable conditions. Similar to plant seeds, artificial hardware could be engineered to efficiently store and execute complex self-replicating instructions in a highly compact form, enabling them to autonomously colonize diverse environments by utilizing various energy sources, such as solar and thermal energy. Having been engineered rather than evolved naturally, these artificial systems could take advantage of design principles that surpass biological organisms in adaptability. By leveraging a vast array of energy sources and survival strategies, they could likely colonize some of the most extreme and inhospitable environments in our solar system—places that even the most resilient biological life forms on Earth could never inhabit.

In my comment I later specified "in [the] next century" though it's quite understandable if you missed that. I agree that eventual extinction of Earth-originating intelligent life (including AIs) is likely; however, I don't currently see a plausible mechanism for this to occur over time horizons that are brief by cosmological standards.

(I just edited the original comment to make this slightly clearer.)

Matthew_Barnett
71% disagree

In my view, the extinction of all Earth-originating intelligent life (including AIs) seems extremely unlikely over the next several decades. While a longtermist utilitarian framework takes even a 0.01 percentage point reduction in extinction risk quite seriously, there appear to be very few plausible ways that all intelligent life originating from Earth could go extinct in the next century. Ensuring a positive transition to artificial life seems more useful on current margins.
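
To spell out the arithmetic that gives that framework its force, here is a rough, purely illustrative sketch; the figure for the size of the future is an assumption supplied for the example, not an estimate from the original comment. If the accessible future could support on the order of $N = 10^{35}$ life-years, then a 0.01 percentage point reduction in extinction risk, i.e. $\Delta p = 10^{-4}$, changes the expected value of the future by roughly

$$\Delta p \cdot N = 10^{-4} \times 10^{35} = 10^{31} \text{ expected life-years.}$$

Under a longtermist utilitarian framework, shifts of that magnitude can dominate the calculation, which is why my disagreement here is about whether any plausible mechanism for total extinction exists, not about the size of the stakes.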

That makes sense. For what it’s worth, I’m also not convinced that delaying AI is the right choice from a purely utilitarian perspective. I think there are reasonable arguments on both sides. My most recent post touches on this topic, so it might be worth reading for a better understanding of where I stand.

Right now, my stance is to withhold strong judgment on whether accelerating AI is harmful on net from a utilitarian point of view. It's not that I think a case can't be made; it's just that I don't think the existing arguments are decisive enough to justify a firm position. In contrast, the argument that accelerating AI benefits people who currently exist seems significantly more straightforward and compelling to me.

This combination of views leads me to see accelerating AI as a morally acceptable choice (as long as it's paired with adequate safety measures). Put simply:

  • When I consider the well-being of people who currently exist, the case for acceleration appears fairly strong and compelling.
  • When I take an impartial utilitarian perspective—one that prioritizes long-term outcomes for all sentient beings—the arguments for delaying AI seem weak and highly uncertain.

Since I give substantial weight to both perspectives, the stronger and clearer case for acceleration (based on the interests of people alive today) outweighs the much weaker and more uncertain case for delay (based on speculative long-term utilitarian concerns) in my view.

Of course, my analysis here doesn’t apply to someone who gives almost no moral weight to the well-being of people alive today—someone who, for instance, would be fine with everyone dying horribly if it meant even a tiny increase in the probability of a better outcome for the galaxy a billion years from now. But in my view, this type of moral calculus, if taken very seriously, seems highly unstable and untethered from practical considerations. 

Since I think we have very little reliable insight into what actions today will lead to a genuinely better world millions of years down the line, it seems wise to exercise caution and try to avoid overconfidence about whether delaying AI is good or bad on the basis of its very long-term effects.

I think it's extremely careless and condemnable to impose this risk on humanity just because you have personally deemed it acceptable.

I'm not sure I fully understand this criticism. From a moral subjectivist perspective, all moral decisions are ultimately based on what individuals personally deem acceptable. If you're suggesting that there is an objective moral standard—something external to individual preferences—that we are obligated to follow, then I would understand your point. 

That said, I’m personally skeptical that such an objective morality exists. And even if it did, I don’t see why I should necessarily follow it if I could instead act according to my own moral preferences—especially if I find my own preferences to be more humane and sensible than the objective morality.

This would be a deontological nightmare. Who gave AI labs the right to risk the lives of 8 billion people?

I see why a deontologist might find accelerating AI troublesome, especially given their emphasis on act-omission asymmetry—the idea that actively causing harm is worse than merely allowing harm to happen. However, I don’t personally find that distinction very compelling, especially in this context. 

I'm also not a deontologist: I approach these questions from a consequentialist perspective. My personal ethics can be described as a mix of personal attachments and broader utilitarian concerns. In other words, I care both about the people who currently exist and, more generally, about all morally relevant beings. So while I understand why this argument might resonate with others, it doesn't carry much weight for me.

I think the benefits of AGI arriving sooner are substantial. Many of my family members, for example, could be spared from death or serious illness if advanced AI accelerates medical progress. However, if AGI is delayed for many years, they will likely die before such breakthroughs occur, leaving me to live without them. 

I'm not making a strictly selfish argument here either, since this situation isn't unique to me—most people have loved ones in similar circumstances. Therefore, speeding up the benefits of AGI would have substantial ethical value from a perspective that values the lives of all humans who are alive today.

A moral point of view in which we give substantial weight to people who exist right now is indeed one of the most common ethical frameworks applied to policy. This may even be the most common mainstream ethical framework, as it's implicit in most economic and political analysis. So I don't think I'm proposing a crazy ethical theory here—just an unusual one within EA.

To clarify, I’m not arguing that AI should always be accelerated at any cost. Instead, I think we should carefully balance between pushing for faster progress and ensuring AI safety. If you either (1) believe that p(doom) is low, or (2) doubt that delaying AGI would meaningfully reduce p(doom), then it makes a lot of sense—under many common ethical perspectives—to view Anthropic as a force for good.

I'm admittedly unusual within the EA community on the issue of AI, but I'll just give my thoughts on why I don't think it's productive to shame people who work at AI companies advancing AI capabilities. 

In my view, there are two competing ethical priorities that we should try to balance:

  1. Making sure that AI is developed safely and responsibly, so that AIs don't harm humans in the future.
  2. Making sure that AI is developed quickly, so that we can take advantage of its enormous economic and technological benefits sooner. This would, among other things, enable us to save lives by hastening AI-assisted medical progress.

If you believe that AI safety (priority 1) is the only meaningful ethical concern and that accelerating AI progress (priority 2) has little or no value in comparison, then it makes sense why you might view AI companies like Anthropic as harmful. From that perspective, any effort to advance AI capabilities could be seen as inherently trading off against an inviolable goal.

However, if you think—as I do—that both priorities matter substantially, then what companies like Anthropic are doing seems quite positive. They are not simply pushing forward AI development; rather, they are working to advance AI while also trying to ensure that it is developed in a safe and responsible way.

This kind of balancing act isn’t unusual. In most industries, we typically don’t perceive safety and usefulness as inherently opposed to each other. Rather, we usually recognize that both technological progress and safe development are important objectives to push for.

Personally, I haven't spent that much time investigating this question, but I currently believe it's very unlikely that the One Child Policy was primarily responsible for China's demographic collapse.

This may not have been the original intention behind the claim, but in my view, the primary signal I get from the One Child Policy is that the Chinese government has the appetite to regulate what is generally seen as a deeply personal matter—one's choice to have children. Even if the policy only had minor adverse effects on China's population trajectory, I find it alarming that the government felt it had the moral and legal authority to restrict people's freedom in this particular respect. This mirrors my attitudes toward those who advocate for strict anti-abortion policies, and those who advocate for coercive eugenics.

In general, there seems to be a fairly consistent pattern in which the Chinese government has less respect for personal freedoms than the United States government. While there are certainly exceptions to this rule, the pattern was recently on clear display during the pandemic, when China imposed what were among the most severe peacetime restrictions on the movement of ordinary citizens in recent world history. It is broadly accurate to say that China effectively imprisoned tens of millions of its own people without due process. And of course, China is known for restricting free speech and digital privacy to an extent that would be almost inconceivable in the United States.

Personal freedom is just one measure of the quality of governance, but I think it's quite an important one. While I think the United States is worse than China along some other important axes—for example, I think China has proven to be more cooperative internationally and less of a warmonger in recent decades—I consider the relative lack of respect for personal freedoms in China to be one of the best arguments for preferring the United States to "win" any relevant technological arms race. This is partly because I find the possibility of a future worldwide permanent totalitarian regime to be an important source of x-risk, and in my view, China currently seems more likely than the United States to enact such a state.

That said, I still favor a broadly more cooperative approach toward China, seeking win-win compromises rather than aggressively “racing” them through unethical or dangerous means. The United States has its own share of major flaws, and the world is not a zero-sum game: China’s loss is not our gain.

Unfortunately there's momentum behind the term "AI lab" in a way that is not true for "AI bananas". Also, it is unambiguously true that a major part of what these companies do is scientific experimentation, as one would expect in a laboratory—this makes the analogy to "AI bananas" imperfect.

I agree that the term "AI company" is technically more accurate. However, I also think the term "AI lab" is still useful terminology, as it distinguishes companies that train large foundation models from companies that work in other parts of the AI space, such as companies that primarily build tools, infrastructure, or applications on top of AI models.
