elifland

Interested in cause prioritization, improving wisdom/decision-making including via forecasting, and AI safety. More at https://www.elilifland.com/. You can give me anonymous feedback here.

Comments

How likely is World War III?

Five forecasters from Samotsvety Forecasting discussed the forecasts in this post.

 

First, I estimate that the chance of direct Great Power conflict this century is around 45%. 

Our aggregated forecast was 23.5%. Considerations discussed included the changed incentives of the nuclear era, possible causes (climate change, AI, etc.), and the likelihood of specific wars (e.g. the US and China fighting over Taiwan).

 

Second, I think the chance of a huge war as bad or worse than WWII is on the order of 10%.


Our aggregated forecast was 25%, though we were unsure whether this was supposed to count only wars between great powers, in which case it would be bounded above by the first forecast.


There was some discussion of the offense-defense balance as tech capabilities increase; perhaps offense will have more of an advantage over time.


Some forecasters would have preferred to predict based on something like human suffering per capita rather than battle deaths, due to an expected shift in how a 21st century great power war would be waged.

 


Third, I think the chance of an extinction-level war is about 1%. This is despite the fact that I put more credence in the hypothesis that war has become less likely in the post-WWII period than I do in the hypothesis that the risk of war has not changed.
 

Our aggregated forecast was 0.1% for extinction. Forecasters were skeptical of using Braumoeller's model to estimate this, as it seems likely to break down at the tails; killing everyone via a war seems really hard. There was some uncertainty about whether borderline cases would count, such as a war plus another disaster finishing people off, or a war fought with future tech.


(I noticed just now that MaxRa commented giving a similar forecast with similar reasoning.)

Impactful Forecasting Prize for forecast writeups on curated Metaculus questions

Hey, thanks for sharing these other options. I agree that one of these choices makes more sense than forecasting in many cases, likely (90%) the majority. But I still think forecasting is a solid contender and plausibly (25%) the best option in a plurality of cases. Some reasons:

  1. Which activity is best likely depends a lot on which is easiest to actually start doing, because I think the primary barrier to doing most of these usefully is "just" actually getting started and completing something. Forecasting may (40%)[1] be the most fun and least intimidating of these for many (33%+) prospective researchers because of the framing of competing on a leaderboard and the intrigue of trying to predict the future.
  2. I think the EA community has relatively good epistemics, but there is still room for improvement, and more researchers getting a forecasting background is one way to help with this (due to both epistemic training and identifying prospective researchers with good epistemics).
  3. Depending on the question, forecasting can look a lot like a bite-sized chunk of research, so I don't think it's mutually exclusive with some of the activities you listed; it's especially similar to summarizations/collections. For example, Ryan summarized relevant parts of papers and then formed some semblance of an inside view in his winning entry.

Also, I was speaking from personal experience here; e.g., Misha and I have both forecasted for a few years and enjoyed it while building skills and a track record, and we are now doing ~generalist research or had the opportunity to do so and seriously considered it, respectively.

  1. ^

    I think this will become especially true as the UX of forecasting platforms improves; let's say 55% that this is true 3 years from now, as I expect the UX here to improve more than the "UX" of other options like summarizing papers.

I feel anxious that there is all this money around. Let's talk about it

It also strikes against recent work on patient philanthropy, which is supported by Will MacAskill's argument that we are not living in the most influential time in human history.

 

Note that patient philanthropy includes investing in resources besides money that will allow us to do more good later; e.g. the linked article lists "global priorities research" and "Building a long-lasting and steadily growing movement" as promising opportunities from a patient longtermist view.

Looking at the Future Fund's Areas of Interest, at least 5 of the 10 strike me as promising under patient philanthropy: "Epistemic Institutions", "Values and Reflective Processes", "Empowering Exceptional People", "Effective Altruism", and "Research That Can Help Us Improve".

Impactful Forecasting Prize Results and Reflections

At first I thought the scenarios were separate and would be combined with an OR to get an overall probability, so I was confused when you looked only at Scenario 1 to determine your probability of technological feasibility.

I was also confused about why you assigned 30% to polygenic scores reaching 80% predictive power in Scenario 2 while assigning 80% to reaching saturation at 40% predictive power in Scenario 1. I read "80% to reach saturation at 40% predictive power" as "capping out at around 40%", which would leave at most 20% for scenarios with predictive power much greater than 40%.

Finally, I was a little confused about where the likelihood of iterated embryo selection fits into your scenarios; this seems highly relevant/important and is maybe implicitly accounted for in e.g. "Must be able to generate 100 embryos to select from", but it could be good to make it more explicit.

Impactful Forecasting Prize Results and Reflections

Thanks for sharing, Ryan; that makes sense as another unintended consequence of our judging criteria. Good to know for future contests.

Samotsvety Nuclear Risk Forecasts — March 2022

Great point. Perhaps we should ideally have reported the mean of this type of distribution rather than our best-guess percentages. I'm curious whether you think I'm underconfident here.

Edit: Yeah, I think I was underconfident; I would now be at ~10% and ~0.5% for being 1 and 2 orders of magnitude too low, respectively, based primarily on considerations Misha describes in another comment that place soft bounds on how much one should update from the base rate. So my estimate should still increase, but not by as much (probably by about 2x, taking into account the possibility of being wrong in the other direction as well).
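As a rough back-of-the-envelope for where "about 2x" could come from (my own sketch, not spelled out above: it keeps the remaining probability mass on the point estimate and ignores the chance of being too high, which would pull the multiplier down slightly):

```python
# Rough sketch: expected multiplier on the estimate given the probabilities above.
p_10x_low = 0.10     # ~10% chance the estimate is 1 order of magnitude too low
p_100x_low = 0.005   # ~0.5% chance it is 2 orders of magnitude too low
p_about_right = 1 - p_10x_low - p_100x_low  # remaining mass treated as "roughly correct"

expected_multiplier = p_about_right * 1 + p_10x_low * 10 + p_100x_low * 100
print(round(expected_multiplier, 2))  # ~2.4, i.e. roughly a 2x upward adjustment
```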

Samotsvety Nuclear Risk Forecasts — March 2022

The estimate being too low by 1-2 orders of magnitude seems plausible to me independently (e.g. see the wide distribution in my Squiggle model [1]), but my confidence in the estimate is increased by its being the aggregate of several excellent forecasters who were reasoning independently to some extent. Given that, my all-things-considered view is that being 1 order of magnitude off[2] feels plausible but not likely (~25%?), and 2 orders of magnitude seems very unlikely (~5%?).

  1. ^

    EDIT: Actually, looking closer at my Squiggle model, I think it should be more uncertain on the first variable: something like russiaNatoNuclearexchangeInNextMonth = .0001 to .01 rather than .0001 to .003 (see the sketch below these footnotes).

  2. ^

    Compared to a reference of what's possible given the information we have, e.g. what a group of 100 excellent forecasters would arrive at if each spent 1,000 hours on it.
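For the range in the first footnote, here is a minimal Python approximation of what ".0001 to .01" roughly denotes; it assumes Squiggle's "a to b" syntax gives a lognormal with 5th/95th percentiles near a and b, and is only an illustrative sketch rather than the actual model:

```python
import numpy as np
from scipy import stats

# Approximate Squiggle's ".0001 to .01" as a lognormal whose 5th/95th
# percentiles sit at those values (an assumption about the intended semantics).
low, high = 1e-4, 1e-2
z95 = stats.norm.ppf(0.95)                       # ~1.645
mu = (np.log(low) + np.log(high)) / 2            # log-space midpoint
sigma = (np.log(high) - np.log(low)) / (2 * z95)

dist = stats.lognorm(s=sigma, scale=np.exp(mu))
print(dist.median(), dist.mean())  # median ~0.001; mean ~0.003 due to right skew
```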

Samotsvety Nuclear Risk Forecasts — March 2022

I agree the risk should be substantially higher than for an average month and I think most Samotsvety forecasters agree. I think a large part of the disagreement may be on how risky the average month is.

From the post:

(a) may be due to having a lower level of baseline risk before adjusting up based on the current situation. For example, while Luisa Rodríguez's analysis puts the chance of a US/Russia nuclear exchange at .38%/year, we think this seems too high for the post-Cold War era, after new de-escalation methods have been implemented and lessons have been learnt from close calls. Additionally, we trust the superforecaster aggregate the most out of the estimates aggregated in the post.

Speaking personally, I'd put the baseline risk at ~.1%/yr and adjust up by a factor of 10 to ~1%/yr given the current situation, which gives ~.08%/month, pretty close to the aggregate of ~.07%.
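A minimal sketch of that arithmetic (the constant-hazard annual-to-monthly conversion is my assumption; simply dividing by 12 gives nearly the same number):

```python
# Back-of-the-envelope from the comment above.
baseline_annual = 0.001                   # ~0.1%/yr baseline risk
adjusted_annual = baseline_annual * 10    # ~1%/yr given the current situation

# Convert to a monthly probability assuming a constant hazard rate.
monthly = 1 - (1 - adjusted_annual) ** (1 / 12)
print(f"{monthly:.2%}")  # ~0.08%/month, close to the ~0.07% aggregate
```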

We also looked at this from some alternative perspectives, e.g. decomposing Putin's decision-making process, which gave estimates in the same ballpark.

The Future Fund’s Project Ideas Competition

Adversarial collaborations on important topics

Epistemic Institutions

There are many important topics, such as the level of risk from advanced artificial intelligence and how to reduce it, on which reasonable people hold very different views. We are interested in experimenting with various types of adversarial collaborations, which we define as people with opposing views working to clarify their disagreement and either resolve it or identify an experiment/observation that would resolve it. We are especially excited about combining adversarial collaborations with forecasting on any double cruxes identified through them. Some ideas for experimentation: varying the number of participants, varying the level of moderation and the strictness of the enforced structure, and introducing AI-based aids.

Existing and past work relevant to this space includes the Adversarial Collaboration Project, SlateStarCodex's adversarial collaboration contests, and the Late 2021 MIRI Conversations.
