In my comment I later specified "in [the] next century" though it's quite understandable if you missed that. I agree that eventual extinction of Earth-originating intelligent life (including AIs) is likely; however, I don't currently see a plausible mechanism for this to occur over time horizons that are brief by cosmological standards.
(I just edited the original comment to make this slightly clearer.)
In my view, the extinction of all Earth-originating intelligent life (including AIs) seems extremely unlikely over the next several decades. While a longtermist utilitarian framework takes even a 0.01 percentage point reduction in extinction risk quite seriously, there appear to be very few plausible ways that all intelligent life originating from Earth could go extinct in the next century. Ensuring a positive transition to artificial life seems more useful on current margins.
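To make the longtermist side of that comparison concrete, here is a minimal back-of-the-envelope sketch (with hypothetical numbers chosen purely for illustration, not figures taken from this discussion) of why even a 0.01 percentage point change in extinction risk looms so large in a longtermist utilitarian calculation:

```python
# Illustrative expected-value arithmetic (all inputs are hypothetical assumptions).
future_lives_at_stake = 1e15   # assumed number of future lives if Earth-originating life survives
risk_reduction = 0.0001        # a 0.01 percentage point reduction in extinction risk

expected_lives_saved = future_lives_at_stake * risk_reduction
print(f"Expected future lives saved: {expected_lives_saved:.1e}")  # 1.0e+11 under these assumptions
```

The force of a calculation like this depends entirely on the assumed inputs, which is part of why I treat the comparatively concrete near-term considerations as carrying more weight on current margins.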
That makes sense. For what it’s worth, I’m also not convinced that delaying AI is the right choice from a purely utilitarian perspective. I think there are reasonable arguments on both sides. My most recent post touches on this topic, so it might be worth reading for a better understanding of where I stand.
Right now, my stance is to withhold strong judgment on whether accelerating AI is harmful on net from a utilitarian point of view. It's not that I think no case can be made; it's just that I don't find the existing arguments decisive enough to justify a firm position. In contrast, the argument that accelerating AI benefits people who currently exist seems significantly more straightforward and compelling to me.
This combination of views leads me to see accelerating AI as a morally acceptable choice (as long as it's paired with adequate safety measures). Put simply:
Since I give substantial weight to both perspectives, the stronger and clearer case for acceleration (based on the interests of people alive today) outweighs the much weaker and more uncertain case for delay (based on speculative long-term utilitarian concerns) in my view.
Of course, my analysis here doesn’t apply to someone who gives almost no moral weight to the well-being of people alive today—someone who, for instance, would be fine with everyone dying horribly if it meant even a tiny increase in the probability of a better outcome for the galaxy a billion years from now. But in my view, this type of moral calculus, if taken very seriously, seems highly unstable and untethered from practical considerations.
Since I think we have very little reliable insight into what actions today will lead to a genuinely better world millions of years down the line, it seems wise to exercise caution and try to avoid overconfidence about whether delaying AI is good or bad on the basis of its very long-term effects.
I think it's extremely careless and condemnable to impose this risk on humanity just because you have personally deemed it acceptable.
I'm not sure I fully understand this criticism. From a moral subjectivist perspective, all moral decisions are ultimately based on what individuals personally deem acceptable. If you're suggesting that there is an objective moral standard—something external to individual preferences—that we are obligated to follow, then I would understand your point.
That said, I’m personally skeptical that such an objective morality exists. And even if it did, I don’t see why I should necessarily follow it if I could instead act according to my own moral preferences—especially if I find my own preferences to be more humane and sensible than the objective morality.
This would be a deontological nightmare. Who gave AI labs the right to risk the lives of 8 billion people?
I see why a deontologist might find accelerating AI troublesome, especially given their emphasis on act-omission asymmetry—the idea that actively causing harm is worse than merely allowing harm to happen. However, I don’t personally find that distinction very compelling, especially in this context.
I'm also not a deontologist: I approach these questions from a consequentialist perspective. My personal ethics can be described as a mix of personal attachments and broader utilitarian concerns. In other words, I care both about the people who currently exist and, more generally, about all morally relevant beings. So while I understand why this argument might resonate with others, it doesn't carry much weight for me.
I think the benefits of AGI arriving sooner are substantial. Many of my family members, for example, could be spared from death or serious illness if advanced AI accelerates medical progress. However, if AGI is delayed for many years, they will likely die before such breakthroughs occur, leaving me to live without them.
I'm not making a strictly selfish argument here either, since this situation isn't unique to me—most people have loved ones in similar circumstances. Therefore, speeding up the benefits of AGI would have substantial ethical value from a perspective that values the lives of all humans who are alive today.
A moral point of view in which we give substantial weight to people who exist right now is indeed one of the most common ethical frameworks applied to policy. This may even be the most common mainstream ethical framework, as it's implicit in most economic and political analysis. So I don't think I'm proposing a crazy ethical theory here—just an unusual one within EA.
To clarify, I’m not arguing that AI should always be accelerated at any cost. Instead, I think we should carefully balance between pushing for faster progress and ensuring AI safety. If you either (1) believe that p(doom) is low, or (2) doubt that delaying AGI would meaningfully reduce p(doom), then it makes a lot of sense—under many common ethical perspectives—to view Anthropic as a force for good.
I'm admittedly unusual within the EA community on the issue of AI, but I'll just give my thoughts on why I don't think it's productive to shame people who work at AI companies advancing AI capabilities.
In my view, there are two competing ethical priorities that I think we should try to balance:

1. Ensuring that AI is developed safely and responsibly (AI safety).
2. Accelerating AI progress so that its benefits reach people who are alive today sooner.
If you believe that AI safety (priority 1) is the only meaningful ethical concern and that accelerating AI progress (priority 2) has little or no value in comparison, then it makes sense why you might view AI companies like Anthropic as harmful. From that perspective, any effort to advance AI capabilities could be seen as inherently trading off against an inviolable goal.
However, if you think—as I do—that both priorities matter substantially, then what companies like Anthropic are doing seems quite positive. They are not simply pushing forward AI development; rather, they are working to advance AI while also trying to ensure that it is developed in a safe and responsible way.
This kind of balancing act isn’t unusual. In most industries, we typically don’t perceive safety and usefulness as inherently opposed to each other. Rather, we usually recognize that both technological progress and safe development are important objectives to push for.
Personally, I haven't spent that much time investigating this question, but I currently believe it's very unlikely that the One Child Policy was primarily responsible for China's demographic collapse.
This may not have been the original intention behind the claim, but in my view, the primary signal I get from the One Child Policy is that the Chinese government has the appetite to regulate what is generally seen as a deeply personal matter—one's choice to have children. Even if the policy only had minor adverse effects on China's population trajectory, I find it alarming that the government felt it had the moral and legal authority to restrict people's freedom in this particular respect. This mirrors my attitudes toward those who advocate for strict anti-abortion policies, and those who advocate for coercive eugenics.
In general, there seems to be a fairly consistent pattern where the Chinese government has less respect for personal freedoms than the United States government. While there are certainly exceptions to this rule, the pattern was recently observed quite clearly during the pandemic, when China imposed some of the most severe peacetime restrictions on the movement of ordinary citizens in recent world history. It is broadly accurate to say that China effectively imprisoned tens of millions of its own people without due process. And of course, China is known for restricting free speech and digital privacy to an extent that would be almost inconceivable in the United States.
Personal freedom is just one measure of the quality of governance, but I think it's quite an important one. While I think the United States is worse than China along some other important axes—for example, I think China has proven to be more cooperative internationally and less of a warmonger in recent decades—I consider the relative lack of respect for personal freedoms in China to be one of the best arguments for preferring the United States to "win" any relevant technological arms race. This is partly because I find the possibility of a future worldwide permanent totalitarian regime to be an important source of x-risk, and in my view, China currently seems more likely than the United States to enact such a state.
That said, I still favor a broadly more cooperative approach toward China, seeking win-win compromises rather than aggressively “racing” them through unethical or dangerous means. The United States has its own share of major flaws, and the world is not a zero-sum game: China’s loss is not our gain.
I agree that the term "AI company" is technically more accurate. However, I also think the term "AI lab" is still useful terminology, as it distinguishes companies that train large foundation models from companies that work in other parts of the AI space, such as companies that primarily build tools, infrastructure, or applications on top of AI models.
I tentatively agree with your statement that,
That said, I still suspect the absolute probability of total extinction of intelligent life during the 21st century is very low. To be more precise, I'd put this probability at around 1% (to be clear: I recognize other people may not agree that this credence should count as "extremely low" or "very low" in this context). To justify this statement, I would highlight several key factors: