[Epistemic status: a quickly written post about something I have basically always felt but never really put into words. It is still a half-baked and really simple idea, but I don't think I have ever come across anything similar.]

 

EA and adjacent movements put a lot of attention into distinguishing whether a risk is actually an x-risk or not. Of course, the difference in outcomes between a realised x-risk and a realised non-x-risk is not one of degree but of kind: humanity (and its potential) either survives or it doesn't.

 

I think the fact that the difference between x-risks and other risks is one of kind makes EAs spend far too much energy and time assessing whether a risk is actually existential or not. I don't think this is time well spent. In general, if somebody has to put a good deal of effort and research into this assessment, it means that the risk is already a hell of a risk, and the incentive to minimise it is most likely already maximal. [I am referring, basically, to close calls here, but I can imagine extending the argument to global catastrophic risks, for example.]

In practical terms, these differences seem almost irrelevant to me. Does it make any difference to the actions anyone would possibly want to take to mitigate an extreme risk whether that risk is actually existential or not? For example: does it make any difference whether a non-aligned superintelligent AGI would actively try to kill all of humanity or not? If we were certain that it wouldn't, we would still live in a world where, relative to the AGI, humanity is the ants. Even if we think we could eventually climb out of our 'ant state' to a state with more potential for humanity, should we really put less effort into mitigating this risk than if we thought the AGI would eliminate us? It would feel very odd to me to answer yes to this question. [Edited to add the following:] The reality is that resources (all of them: money, energy, effort, time) are finite, and not enough are devoted to mitigating very large risks in general. Until this changes, whether a risk is actually existential or not seems to me much less important than EA as a movement thinks.

 

On another level, there is also the issue that such nitpicks generate a lot of debate and are often difficult for the general public to fully understand. More often than we would like, these debates contribute to the growing wave against EA, since from the outside they can look like some nerds having fun / wasting time and calling themselves "effective" for doing so.

 

[I wrote this post quickly and almost in one go. Please tell me if anything is unclear, improvable, or wrong, and I will try to update accordingly.]

Comments



For example: Does it make any difference whether a non-aligned superintelligent AGI will actively try to kill all humanity or not? If we are certain that it won't, we would still live in a world where we are the ants and it is humanity.

 

This misunderstands what an existential risk is, at least as used by the philosophers who've written about this. Nick Bostrom, for example, notes that the extinction of humanity is not the only thing that counts as an existential risk. (The term "existential risk" is unfortunately a misnomer in this regard.) Something that drastically curtails the future potential of humanity would also count.

Even if we think we eventually could climb out of our 'ant state' to a state with more potential for humanity...

 

;-)

I'm not sure I understand your point then...

Surely a future in which humanity flourishes into the long-term future is better than one where people live as "ants." And if we have uncertainty about which path we're on, and there are plausible reasons to think we're on the ant path, it can be worthwhile to figure that out so we can shift in a better direction.

Exactly. Even if the ant path may not be permanent, i.e., if we could climb out of it.

My point is that, in terms of the effort I would like humanity to devote to minimising this risk, I don't think it makes any difference whether the ant state is strictly permanent or whether we could eventually get out of it. Maybe if it were guaranteed, or even "only" very likely, that we could get out of the ant state, I could understand devoting less effort to mitigating this risk than if we thought the AGI would eliminate us (or the ant state were inescapable).

If we agree on this, the fact that a risk is actually existential or not is in practice close to irrelevant.

Maybe a more realistic example would be helpful here. There have been recent reports claiming that, although it will negatively affect millions of people, climate change is unlikely to be an existential risk. Suppose that's true. Do you think EAs should devote as much time and effort preventing climate change-level risks as they do preventing existential risks?

Let's speak about humanity in general and not about EAs, because where EA focuses does not depend only on the degree of the risk.

Yes, I don't think humanity should currently devote less effort to preventing such risks than to x-risks. Probably the point is that we are doing far too little to tackle dangerous non-immediate risks in general, so it makes no practical difference whether a risk is existential or only almost existential. And this point of view does not seem controversial at all; it is just not explicitly stated. It is not just non-EAs who devote a lot of effort to preventing climate change; an increasing fraction of EAs do as well.
 

I suppose I agree that humanity should generally focus more on catastrophic (non-existential) risks.

That said, I think this is often stated explicitly. For example, MacAskill in his recent book explicitly says that many of the actions we take to reduce x-risks will also look good even to people with shorter-term priorities.

Do you have any quote from someone who says we shouldn't care about catastrophic risks at all?

Do you have any quote from someone who says we shouldn't care about catastrophic risks at all?

I'm not saying this. And I really don't see how you came to think I do.

The only thing I say is that I don't see how anyone would argue that humanity should devote less effort to mitigate a given risk just because it turns out that it is not actually existential even though it may be more than catastrophic. Therefore, finding out if a risk is actually existential or not is not really valuable.

I'm not saying anything new here, I made this point several times above. Maybe it is not very clearly done, but I don't really know how to state it differently.
