Gideon Futerman

Joined Dec 2021



Working on the interaction between Solar Radiation Modification (Solar geoengineering) and X-Risk, GCRs, Civilisational Collapse risks and Negative State risks. Fellow at CERI. Lead researcher on the RESILIENCER Project.

How I can help others

Reach out to me if you have questions about SRM/Solar geoengineering


Longtermists Should Work on AI - There is No "AI Neutral" Scenario

I would be interested in your uncertainties with all of this. If we are basing our ITN analysis on priors, then given the limitations and biases of those priors, I would again be highly uncertain, once more leaning away from the certainty you present in this post.

Longtermists Should Work on AI - There is No "AI Neutral" Scenario

I actually think our big crux here is the amount of uncertainty. Each of the points I raise, and each new assumption you are putting in, should raise your uncertainty. Given you claim 95% of longtermists should work on AI, high uncertainties do not seem to weigh in favour of your argument. Note I am not saying, and haven't said, either that AI isn't the most important X-Risk or that we shouldn't work on it. I am just arguing against the certainty of your post.

Longtermists Should Work on AI - There is No "AI Neutral" Scenario

I have a slight problem with the "tell me a story" framing. Scenarios are useful, but they generally lend themselves to crude rather than complex risks; in asking this question, you implicitly downplay complex risks. For a more thorough discussion, the "Democratising Risk" paper by Cremer and Kemp has some useful ideas in it (I disagree with parts of the paper, but still). The framing also continues to prioritise epistemically neat and "sexy" risks, which, whilst possibly the most worrying, are not the only ones. Also, probabilities on scenarios can be somewhat problematic in many contexts, and the methodologies used to come up with very high X-Risk values for AGI versus other X-Risks carry very high uncertainties. For these reasons, I think the certainty you have is somewhat problematic.

Longtermists Should Work on AI - There is No "AI Neutral" Scenario

The problem is that, for the strength of the claims made here, that longtermists should work on AI above all else (with something like 95% of longtermists working on it), you need a tremendous amount of certainty that each of these assumptions holds. As your uncertainty grows, the strength of the argument made here diminishes.

Longtermists Should Work on AI - There is No "AI Neutral" Scenario

"I think this makes sense when we're in the domain of non-existential areas. I think that in practice when you're confident on existential outcomes and don't know how to solve them yet, you probably should still focus on it though" -I think this somewhat misinterprets what I said. This is only the case if you are CERTAIN that biorisk, climate, nuclear etc. aren't X-Risks. Otherwise tractability matters. If (toy numbers here) AI risk is two orders of magnitude more likely to occur than biorisk, but four orders of magnitude less tractable, then it doesn't seem that AI risk is the thing to work on.

"Not sure what you mean by "this isn't true (definitionally)". Do you mean irrecoverable collapse, or do you mean for animals?" -Sorry, I worded this badly. What I meant is that the argument assumes that X-Risk and human extinction are identical. They are of course not, as irrecoverable collapse, s-risks and the permanent curtailing of human potential (which I think is a somewhat problematic concept) are all X-Risks as well. Apologies for the lack of clarity.

"The posts I linked to were meant to have that purpose." -I think my problem is that the articles don't necessarily do a great job of evidencing the claims they make. Take the 80K one. It seems to ignore the concept of vulnerabilities and exposures, instead going for a purely hazard-centric approach. Secondly, it ignores a lot of important material in the climate discussion, for example what is discussed in this and this. Basically, I think it fails to adequately address systemic risk, cascading risk and latent risk. It also seems to (mostly) equate X-Risk with human extinction, without seriously exploring whether, if civilisation collapses, we WILL recover, not just whether we could. The Luisa Rodriguez piece also doesn't do this (which isn't a critique of her piece; as far as I can tell it didn't intend to either).

"An intuition for why it's hard to kill everyone till only 1000 persons survive: for humanity to die, you need an agent. Humans are very adaptive in general, and you might expect that at least the richest people on this planet have plans and will try to survive at all costs. So for instance, even if viruses infect 100% of people (almost impossible if people are aware that there are viruses) and literally kill 99% of people (again, almost impossible), you still have 70 million people alive. And no agent on Earth has ever killed 70 million people. So even if you had a malevolent state that wanted to do that (very unlikely), they would have a hard time continuing until there are fewer than 1000 people left. The same goes for nuclear power: it's not too hard to kill 90% of people with a nuclear winter, but it's very hard to kill the remaining 10%, 1%, 0.1% etc." -Again, this comes back to the idea that for something to be an X-Risk it needs to wipe out humanity, or most of it, in one single event. But an X-Risk may be a collapse we don't recover from. Note this isn't the same as a collapse we can't recover from: because "progress" (itself a very problematic term) seems highly contingent, even if we COULD recover, that doesn't mean that we WILL. Moreover, if we retain this loss of complexity for a long time, ethical drift (making s-risks far more likely even given recovery) becomes more likely, as do other catastrophes wiping us out, even ones recoverable from alone, whether in concert, by cascades or by discontinuous local catastrophes. It seems to need a lot more justification to place a very high probability on a civilisation we think is valuable recovering from a collapse that leaves even hundreds of millions of people alive. The discussion of how likely a collapse or GCR is to be converted into an X-Risk is still very open, as is the discussion of contingency vs convergence.
But for your position to hold, you need very high certainty on this point, which I think is highly debatable and perhaps at this point premature and unjustified. Sorry I can't link the papers I need to right now, as I am on my phone, but will link later.

Longtermists Should Work on AI - There is No "AI Neutral" Scenario

I think this makes far too strong a claim for the evidence you provide.

Firstly, under the standard ITN (Importance, Tractability, Neglectedness) framework, you only focus on importance. If there are orders of magnitude of difference in, let's say, tractability (which seems most important here), then longtermists maybe shouldn't work on AI.

Secondly, your claim that there is only a low possibility that AGI isn't possible needs to be fleshed out more. The terms AGI and general intelligence are notoriously slippery, and many argue we simply don't understand intelligence well enough to actually clarify the concept of general intelligence. If we don't understand what general intelligence is, one might suggest the problem is intractable enough for present actors that, no matter how important or unimportant AGI is, under an ITN framework it's not the most important thing. On the other hand, I am not clear this claim about AGI is necessary; TAI (transformative AI) is clearly possible and potentially very disruptive without the AI being generally intelligent.

Thirdly, your section on other X-Risks takes an overly single-hazard approach to X-Risk, which probably leads to an overly narrow interpretation of what might pose an X-Risk. I also think the dismissal of climate change and nuclear war seems to imply that human extinction = X-Risk. This isn't true (definitionally); although you may make an argument that nuclear war and climate change aren't X-Risks, that argument is not made here.

I can clarify or provide evidence for these points if you think it would be useful, but I think the claims you make about AI vs other priorities are too strong for the evidence you provide. I am not here claiming you are wrong, but rather that you need stronger evidence to support your conclusions.

Who's hiring? (May-September 2022)

The RESILIENCER Project (Ramifications of Experimentation into SRM In Light of its Impacts on Existential, Negative-state and Civilisational Endangering Risks) is looking to hire some researchers. Currently the team is exceptionally small (me plus a few others advising and mentoring), but we are looking to expand.

The project is exploring the interaction of solar radiation modification and existential risk, GCRs, societal collapse and negative state risks, with the aim of being as rigorous and thorough in the exploration of the field as possible. Thus, we need to expand to people with appropriate expertise.

In particular, we are looking for people with expertise in at least one of the following areas:

  • SRM
  • Social sciences data analysis
  • STS, in particular the relationship between research and technology deployment and the role of imaginings in the eventual usage of technology in society
  • Collapsology, GCRs and X-Risk
  • Climate Modelling

As the project goes on, our list of needed areas may expand. Moreover, if you have expertise in other areas you think might be relevant, please do apply as well.

These roles will be remote, short-term and part-time. Rates will be negotiable, as the researcher positions will be tailor-made for the candidates (apologies that we can't advertise rates upfront, but what exactly the roles will entail will only be developed once the right candidate(s) have been found).

The application form is pretty short, and if we think you look promising, we will set up a call to discuss what you can offer!

The form to apply is on our website.

The Threat of Climate Change Is Exaggerated

There are various problems with this. 

Firstly, roughly 3C is generally considered the likely warming if policies continue as they are, not the 2C that you claim. If the world achieves decarbonisation leading to 550ppm (in line with current policies, although that implies 3C rather than the 2C you claim), there is still a fat-tail risk: there is about a 10% probability of 6C of warming, due to our remaining uncertainty about ECS. This doesn't meaningfully account for tipping points either, which at such warming we would be very likely to hit. If you want to read more on this, either read Wagner & Weitzman 2015 (it's a little old but still very relevant) or just read some of the literature on fat-tailed climate risks. A 10% chance of above 6C in a very plausible scenario seems an unacceptably high risk. This doesn't even account for the possibility (small, but nonetheless very far from negligible) that we end up following an RCP8.5 pathway, which would be considerably more devastating.

Even if we do end up reaching the agreed-upon target of roughly 450ppm (2C levels of CO2 concentration), there is still a 5% chance of 4 degrees of warming and a 1% chance of 5 degrees of warming. The fat tails really matter (data from Quiggin 2017).

Moreover, to suggest 2C is "very unlikely" to lead to a GCR state somewhat ignores the problems I raise in the above response: that the chief issues of climate change are its increase in societal vulnerabilities, its possibility of triggering cascading failures, and its role in converting civilisational collapse into irreversible civilisational collapse. Obviously a lot of this rests on what probabilities you mean; for instance, if you mean "very unlikely" in the IPCC sense, that would imply a 0-10% chance, which seems awfully high. I might put 2C being highly significant in leading to a GCR in roughly 1% territory, but certainly not territory that can be ignored, although I do think most of the GCR risk comes from the heavy-tailed scenarios detailed above.

The Threat of Climate Change Is Exaggerated

I think this is perhaps quite a simplistic reading of climate change, and whilst somewhat in line with the "community orthodoxy", I think both this post and that orthodoxy are somewhat misguided.

Firstly, this post broadly ignores the concept of vulnerabilities and exposures in favour of a pure singular-hazard model, which, whilst broadly in line with the focus of people like Bostrom and Ord, seems overly reductive. Moreover, it seems highly unlikely that even the most dangerous pandemic would actually cause direct human extinction, nor would an ordinary nuclear war, meaning that caring only about direct X-Risk really should lead to a prioritisation of omnicidal actors, AI risk, and other speculative risks like physics experiments and self-replicating nanotechnology. Even if you focus on the broader hazards category, climate's role as a risk factor is certainly not to be ignored, in particular, I think, in increasing the risk of conflict and increasing the number of omnicidal actors. It should be noted, however, that X-Risk doesn't just mean human extinction, but anything which irreparably reduces the potential of humanity.

Once you are dealing with GCRs and societal collapse, and how these might pose an X-Risk (by conversion to irrecoverable societal collapse, a mechanism which still needs more work), climate change rises in priority. Climate change increasing civilisational vulnerability becomes a much more serious issue, and an increase in natural disasters may be enough to cause cascading failures. If you seriously care about the collapse of our complex system, or collapses that result in mass death (not necessarily synonymous), I think these more reductionist arguments hold less sway. Whilst I won't go into the longtermist argument for this in detail here, this matters if you think it unlikely that societal recovery would be in line with what is good (you might be particularly susceptible to this view if you are a moral antirealist who thinks your values are mostly arbitrary), or if you think societal recovery is reasonably unlikely. It also seems that societies struggle to recover in unstable climates, so climate change may make societal recovery even harder. In the article you say that the ability of climate to cause societal collapse is instead a reason to focus on the relationship between food systems and societal collapse; however, climate doesn't just impact our food systems but a huge number of our critical systems, and just addressing food supply may leave us still vulnerable to societal collapse. (NB: I think these societal-collapse tendencies of climate change are generally low probability, probably <10%.) Climate-change-related vulnerabilities likely make the conversion of a GCR into a societal collapse more likely, as well as the conversion of societal collapse into irreversible societal collapse and the conversion of a shock into a GCR. Moreover, the literature on systemic risk would probably further elevate the importance of climate change.
If you only care about fully wiping humanity out, because you think that under almost all scenarios of GCRs/societal collapse we recover to the same technological levels and in line with values you agree with, then maybe you can ignore most of this, but I tend to think such an argument is mostly implausible (I won't give the argument here).

On the topic of neglectedness, it is true that climate change as a whole is not neglected. Nonetheless, potentially high-impact interventions on climate may (and "may" is important) still be available and neglected. Thus, don't let this general EA advice dissuade you if you think you have found something promising. In relation to the funding given to climate change, a lot of it relates to investment in energy-generation technologies, and much of it pays for itself, although general climate investment is outside my area of expertise. Moreover, it is unclear how much more money on AI Safety would massively help us, although this is once again outside my expertise and I know there is a lot of disagreement on it, so take this paragraph with a little pinch of salt.

Finally, this article generally presupposes that X-Risk is high at present and that we are at "the hinge of history," presenting X-Risk work as the only outcome of longtermism. Whilst this may be a common sentiment in the community, it certainly isn't the only perspective. If, for example, you think X-Risk in general is low, then from other longtermist perspectives it may be the case that the destabilising effect of climate change on the globe and the global economy is indeed highly important, and then you get into the neglectedness question: is it easier to stop the negative effect of climate change on GDP growth (and many interventions probably increase GDP growth as well) or just to focus on GDP growth directly? This is certainly not a settled question, although I think John Halstead did some work on it which I probably need to check.

Whilst I certainly think your argument is useful in parts, including the claim that climate change is probably overhyped, I nonetheless feel you unreasonably suggest climate change is less of an issue than it is. Less focus on Bostrom/Ord-esque existential hazards may be beneficial, as would a greater diversification of viewpoints, including better integration of some of the arguments made by the references you cite.

However, please don't let the overall critical tone of this comment dissuade you; it's awesome to see people new to EA writing such genuinely well-researched and well-written posts on the forum (I certainly haven't had the bravery to post something on here yet!). Keep up the good work despite my criticisms.