[Epistemic status: a quickly written post about something I have basically always felt but never put into words. It is still a half-baked and really simple idea, but I don't think I have ever come across anything similar.]

 

EA and adjacent movements put a lot of attention into distinguishing whether a risk is actually an x-risk or not. Of course, the difference in outcome between a realised x-risk and a realised non-x-risk is one of kind, not of degree: humanity (and its potential) either continues to exist or it does not.

 

I think the fact that this difference is one of kind makes EAs spend far too much energy and time assessing whether a risk is actually existential or not. I don't think this is time well spent. In general, if somebody has to put a good deal of effort and research into making that assessment, the risk is already a hell of a risk, and the incentives to minimise it are most likely already maximal. [I am referring mainly to close calls here, but I can imagine extending the argument to global catastrophic risks, for example.]

In practical terms, these distinctions seem almost irrelevant to me. Does it make any difference to the actions anyone would want to take to mitigate an extreme risk whether that risk is actually existential or not? For example: does it make any difference whether a non-aligned superintelligent AGI would actively try to kill all of humanity or not? If we were certain that it wouldn't, we would still live in a world where we are the ants and it plays the role of humanity. Even if we think we could eventually climb out of our 'ant state' to a state with more potential for humanity, should we really put less effort into mitigating this risk than if we thought the AGI would eliminate us? It would feel very odd to me to answer yes to this question. [Edited to add the following:] The reality is that resources (all of them: money, energy, effort, time) are finite, and not enough are devoted to mitigating very large risks in general. Until this changes, whether a risk is actually existential or not seems to me much less important than EA as a movement thinks.

 

On another level, there is also the issue that such nitpicking generates a lot of debate and is often difficult for the general public to fully understand. More often than we would like, these debates contribute to the growing wave of criticism against EA, since from the outside they can look like some nerds having fun / wasting time and calling themselves "effective" for doing so.

 

[I wrote this post quickly and almost in one go. Please tell me if anything is unclear, improvable, or wrong, and I will try to update it accordingly.]

Comments

For example: does it make any difference whether a non-aligned superintelligent AGI would actively try to kill all of humanity or not? If we were certain that it wouldn't, we would still live in a world where we are the ants and it plays the role of humanity.

 

This misunderstands what an existential risk is, at least as used by the philosophers who've written about this. Nick Bostrom, for example, notes that the extinction of humanity is not the only thing that counts as an existential risk. (The term "existential risk" is unfortunately a misnomer in this regard.) Something that drastically curtails the future potential of humanity would also count.

Even if we think we could eventually climb out of our 'ant state' to a state with more potential for humanity...

 

;-)

I'm not sure I understand your point then...

Surely a future in which humanity flourishes over the long term is better than one where people are living as "ants." And if we are uncertain about which path we're on and there are plausible reasons to think we're on the ant path, it can be worthwhile to figure that out so we can shift in a better direction.

Exactly. Even if the ant path may not be permanent, i.e., if we could climb out of it.

My point is that, in terms of the effort I would like humanity to devote to minimising this risk, I don't think it makes any difference whether the ant state is strictly permanent or whether we could eventually get out of it. Maybe if getting out were guaranteed, or even "only" very likely, I could understand devoting less effort to mitigating this risk than if we thought the AGI would eliminate us (or that the ant state would be inescapable).

If we agree on this, then whether a risk is actually existential or not is, in practice, close to irrelevant.

Maybe a more realistic example would be helpful here. There have been recent reports claiming that, although it will negatively affect millions of people, climate change is unlikely to be an existential risk. Suppose that's true. Do you think EAs should devote as much time and effort to preventing climate change-level risks as they do to preventing existential risks?

Let's speak about humanity in general and not about EAs, because where EA focuses does not depend only on the degree of the risk.

Yes, I don't think humanity should currently devote less effort to preventing such risks than to preventing x-risks. Probably the point is that we are doing far too little to tackle dangerous non-immediate risks in general, so it makes no practical difference whether a risk is existential or only almost existential. And this point of view does not seem controversial at all; it is just not explicitly stated. It is not only non-EAs who are devoting a lot of effort to preventing climate change; an increasing fraction of EAs are as well.
 

I suppose I agree that humanity should generally focus more on catastrophic (non-existential) risks.

That said, I think this is often stated explicitly. For example, MacAskill in his recent book explicitly says that many of the actions we take to reduce x-risks will also look good even to people with shorter-term priorities.

Do you have any quote from someone who says we shouldn't care about catastrophic risks at all?

Do you have any quote from someone who says we shouldn't care about catastrophic risks at all?

I'm not saying that, and I really don't see how you came to think I am.

The only thing I am saying is that I don't see how anyone could argue that humanity should devote less effort to mitigating a given risk just because it turns out not to be actually existential, even though it may be more than catastrophic. Therefore, finding out whether a risk is actually existential or not is not really valuable.

I'm not saying anything new here; I made this point several times above. Maybe I haven't made it very clearly, but I don't really know how to state it differently.
