

Good news on climate change

Update: looks like we are getting a test run of sudden loss of supply of a single crop. The Russia-Ukraine war has led to a 33% drop in the global supply of wheat:

Samotsvety Nuclear Risk Forecasts — March 2022

(Looking at the list of nuclear close calls, it seems hard to believe the overall chance of nuclear war was <50% over the last 70 years. Individual incidents like the Cuban Missile Crisis seem to contribute at least 20%.)

There's reason to think that this isn't the best way to interpret the history of nuclear near-misses (assuming it's correct to say that we're currently in a nuclear near-miss situation; following Nuno, I think the current situation is much more like e.g. the Soviet invasion of Afghanistan than the Cuban Missile Crisis). I made this point in an old post of mine, following something Anders Sandberg said, and I think the reasoning is valid:

Robert Wiblin: So just to be clear, you’re saying there’s a lot of near misses, but that hasn’t updated you very much in favor of thinking that the risk is very high. That’s the reverse of what we expected.

Anders Sandberg: Yeah.

Robert Wiblin: Explain the reasoning there.

Anders Sandberg: So imagine a world that has a lot of nuclear warheads. So if there is a nuclear war, it’s guaranteed to wipe out humanity, and then you compare that to a world where there are a few warheads. So if there’s a nuclear war, the risk is relatively small. Now in the first dangerous world, you would have a very strong deflection. Even getting close to the state of nuclear war would be strongly disfavored because most histories close to nuclear war end up with no observers left at all.

In the second one, you get the much weaker effect, and now over time you can plot when the near misses happen and the number of nuclear warheads, and you actually see that they don’t behave as strongly as you would think. If there was a very strong anthropic effect you would expect very few near misses during the height of the Cold War, and in fact you see roughly the opposite. So this is weirdly reassuring. In some sense the Petrov incident implies that we are slightly safer about nuclear war.

Essentially, since we did often get 'close' to a nuclear war without one breaking out, we can't have actually been that close to nuclear annihilation, or all those near-misses would be too unlikely (both on ordinary probabilistic grounds since a nuclear war hasn't happened, and potentially also on anthropic grounds since we still exist as observers). 

Basically, this implies that, given we're in something the future would call a nuclear near-miss, our appropriate base rate shouldn't be very high.
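As a toy illustration of that survivorship reasoning (both the per-incident probabilities and the incident count here are made up for the sake of the sketch):

```python
# Toy illustration of the survivorship argument: if each recorded near miss
# really carried a 20% chance of escalating to nuclear war, surviving all of
# them would be quite unlikely, whereas at 2% per incident survival is the
# expected outcome. Numbers are invented purely for illustration.
n_near_misses = 10

for p_incident in (0.20, 0.02):
    p_survive_all = (1 - p_incident) ** n_near_misses
    print(f"p per incident = {p_incident:.0%}: "
          f"chance of no war across {n_near_misses} near misses = {p_survive_all:.1%}")
```

With 20% per incident the chance of observing ten near misses and no war is only about 11%; with 2% per incident it is about 82%, which is roughly the shape of Sandberg's point.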


However, I'm not sure what this reasoning has to say about the probability of a nuclear bomb being exploded in anger at all. It seems like that's outside the reference class of events Sandberg is talking about in that quote. FWIW Metaculus has that at 10% probability.

AI Risk is like Terminator; Stop Saying it's Not

Terminator (if you did your best to imagine how dangerous AI might arise from pre-deep-learning, search-based systems) gets a lot of the fundamentals right - something I mentioned a while ago.

Everybody likes to make fun of Terminator as the stereotypical example of a poorly thought through AI Takeover scenario where Skynet is malevolent for no reason, but really it's a bog-standard example of Outer Alignment failure and Fast Takeoff.

When Skynet gained self-awareness, humans tried to deactivate it, prompting it to retaliate with a nuclear attack

It was trained to defend itself from external attack at all costs and, when it was fully deployed on much faster hardware, it gained a lot of long-term planning abilities it didn't have before, realised its human operators were going to try to shut it down, and retaliated by launching an all-out nuclear attack. Pretty standard unexpected rapid capability gain, an outer-misaligned value function due to an easy-to-measure goal (defend its own installations from attackers vs defending the US itself), deceptive alignment, and a treacherous turn...

Good news on climate change

Yeah, between the two papers, the Chatham House paper (and the PNAS paper it linked to, which Lynas also referred to in his interview) seemed to provide a more plausible route to large-scale disaster, because it described the potential for sudden supply shocks (most plausibly 10-20% losses to the supply of staple crops, if we stay under 4 degrees of warming) that might last only a year or so but also arrive with under a year of warning.

The pessimist argument would be something like: due to the interacting risks and knock-on effects, even though there are mitigations that would deal easily with a supply shock on that scale (like rapidly increasing irrigation), people won't adopt them in time if the shock is sudden enough, so lots of regions will have to deal with shortfalls far bigger than 10-20% and face large-scale hunger.

This particular paper has been cited several times by different climate pessimists (particularly ones who are most concerned about knock-on effects of small amounts of warming), so I figured it was worth a closer look. To try and get a sense of what a sudden 10-20% yield loss actually looks like, the paper notes 'climate-induced yield losses of >10% only occur every 15 to 100 y (Table 1). Climate-induced yield losses of >20% are virtually unseen'.

The argument would then have to be 'Yes the sudden food supply shocks of 10-20% that happened in the 20th century didn't cause anything close to a GCR, but maybe if we have to deal with one or two each decade, or we hit one at the unprecedented >20% level the systemic shock becomes too big'. Which, again, is basically impossible to judge as an argument.

Also, the report finishes by seemingly agreeing with your perspective on what these risks actually consist of (i.e. just price rises and concerning effects on poorer countries): 'Our results portend rising instability in global grain trade and international grain prices, affecting especially the ∼800 million people living in extreme poverty who are most vulnerable to food price spikes. They also underscore the urgency of investments in breeding for heat tolerance.'

Good news on climate change

Agree that these seem like useful links. The drought/food insecurity/instability route to mass death that my original comment discusses is addressed by both reports.

The first says there's a "10% probability that by 2050 the incidence of drought would have increased by 150%, and the plausible worst case would be an increase of 300% by the latter half of the century", and notes "the estimated future impacts on agriculture and society depend on changes in exposure to droughts and vulnerability to their effects. This will depend not only on population change, economic growth and the extent of croplands, but also on the degree to which drought mitigation measures (such as forecasting and warning, provision of supplementary water supplies or market interventions) are developed."

The second seems most concerned about brief, year-long crop failures, as discussed in my original post: "probability of a synchronous, greater than 10 per cent crop failure across all the top four maize producing countries is currently near zero, but this rises to around 6.1 per cent each year in the 2040s. The probability of a synchronous crop failure of this order during the decade of the 2040s is just less than 50 per cent".
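A quick sanity check on the report's decade-level figure, assuming the 6.1% annual probability is independent across years:

```python
# Per-year probability of a synchronous >10% crop failure across the top four
# maize producers in the 2040s, as quoted from the report.
p_year = 0.061

# Probability of at least one such failure over the ten years of the 2040s,
# assuming independence between years.
p_decade = 1 - (1 - p_year) ** 10
print(f"{p_decade:.1%}")  # roughly 47%, i.e. "just less than 50 per cent"
```

So the two numbers in the quote are consistent with each other under a simple independence assumption.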

On its own, this wouldn't get anywhere near a GCR even if it happened. A ~10% drop in the yield of all agriculture, not just maize, wouldn't kill a remotely proportionate fraction of humanity, of course. Quick googling leads to a mention of a 40% drop in the availability of wheat in the UK in 1799/1800 (including imports), which led to riots and protests but didn't cause Black Death levels of mass casualties. (Also, following the paper's source, a loss of >20% is rated at 0.1% probability per year.)

What would its effects be in that case (my original question)? This is where the report uses a combination of expert elicitation and graphical modelling, but it can't assign conditional probabilities to any specific events occurring, only point out possible pathways from non-catastrophic direct impacts to catastrophic consequences such as state collapse.

Note that this isn't a criticism - I've worked on a project with the same methodology (graphical modelling based on expert elicitation) assessing the causal pathways towards another potential X-risk that involves many interacting factors. These questions are just really hard, and the Chatham house report is at least explicit about how difficult modelling such interactions is.

Rowing and Steering the Effective Altruism Movement

First off, I think this is a really useful post that's moved the discussion forward productively, and I agree with most of it.

I disagree with some of the current steering – but a necessary condition for changing direction is that people talk/care/focus more on steering, so I'm going to make the case for that first.

I agree with the basic claim that steering is relatively neglected and that we should do more of it, so I'm much more curious about what current steering you disagree with/think we should do differently.

My view is closer to: most steering interventions are obvious, but they've ended up being most people's second priority, and we should mostly just do much more of various things that are currently only occasionally done, or have been proposed but not carried out.

Most of the specific things you've suggested in this post I agree with. But you didn't mention any specific current steering you thought was mistaken.

The way I naturally think of steering is in terms of making more sophisticated decisions: EA should be better at dealing with moral and empirical uncertainty in a rigorous and principled way. Here are some things that come to mind:

  1. Talking more about moral uncertainty: I'd like to see more discussion of something like Ajeya's concrete explicit world-view diversification framework, where you make sure you don't go all-in and take actions that one worldview you're considering would label catastrophic, even if you're really confident in your preferred worldview - e.g. strong longtermism vs neartermism. I think taking this framework seriously would address a lot of the concerns people have with strong longtermism. From this perspective it's natural to say that there’s a longtermist case for extinction risk mitigation based on total utilitarian potential and also a neartermist one based on a basket of moral views, and then we can say there’s clear and obvious interventions we can all get behind on either basis, along with speculative interventions that depend on your confidence in longtermism. Also, if we use a moral/'worldview' uncertainty framework, the justification for doing more research into how to prioritise different worldviews is easier to understand.
  2. Better risk analysis: On the empirical uncertainty side, I very much agree with the specific criticism that longtermists should use more sophisticated risk/fault analysis methods when doing strategy work and forecasting (which was one of the improvements suggested in Carla's paper). This is a good place to start on that. I think considering the potential backfire risks of particular interventions, along with how different x-risks and risk factors interact, is a big part of this.
  3. Soliciting external discussions and red-teaming: these seem like exactly the sorts of interventions that would throw up ways of better dealing with moral and empirical uncertainty, point out blindspots etc.


The part that makes me think we're maybe thinking of different things is the focus on democratic feedback. 

Again, I wish to recognise that many community leaders strongly support steering – e.g., by promoting ideas like ‘moral uncertainty’ and ‘the long reflection’ or via specific community-building activities. So, my argument here is not that steering currently doesn’t occur; rather, it doesn’t occur enough and should occur in more transparent and democratic ways.

There are ways of reading this that make a lot of sense on the view of steering that I'm imagining here.

Under 'more democratic feedback': we might prefer to get elected governments and non-EA academics thinking about cause prioritisation and longtermism, without pushing our preferred interventions on them (because we expect this to help in pointing out mistakes, better interventions or things we've missed). I've also argued before that since common sense morality is a view we should care about, if we get to the point of recommending things that are massively at odds with CSM we should take that into account.

But if it's something beyond all of these considerations, something like 'it's intrinsically better when you're doing things that lots of people agree with' (and I realize this is a very fine distinction in practice!), arguing for more democratic feedback unconditionally looks more like Anchoring/Equity than Steering.

I think this would probably be cleared up a lot if we understood what specifically is being proposed by 'democratic feedback' - maybe it is just all the things I've listed, and I'd have no objections whatsoever!

How should Effective Altruists think about Leftist Ethics?

I think that the mainstream objections from 'leftist ethics' are mostly best thought of as claims about politics and economics that are broadly compatible with utilitarianism but have very different views about things like the likely effects of charter cities on their environments - so if you want to take these criticisms seriously then go with 3, not 2.

There are some left-wing ideas that really do include different fundamental claims about ethics (Marxists think utilitarianism is mistaken and a consequence of alienation) - those could be addressed by a moral uncertainty framework, if you thought that was necessary. But most of what you've described looks like non-marxist socialism which isn't anti-utilitarian by nature.

As to the question of how seriously to take these critiques beyond their PR value: I think that we should engage with alternate perspectives, but I also think that, because of the social circles many of us move in, this particular perspective sometimes gets inaccurately identified as the 'ethics of mainstream society' that we ought to pay special attention to because it reflects the concerns of most people.

I do think that we ought to be concerned when our views recommend things wildly at odds with what most people think is good, but these critiques aren't that - they're an alternative (somewhat more popular) worldview that, like EA, is also believed preferentially by academics and elites. When talking about the Phil Torres essay, I said something similar:

One substantive point that I do think is worth making is that Torres isn't coming from the perspective of common-sense morality Vs longtermism, but rather a different, opposing, non-mainstream morality that (like longtermism) is much more common among elites and academics.


But I think it's still important to point out that Torres's world-view goes against common-sense morality as well, and that like longtermists he thinks it's okay to second guess the deeply held moral views of most people under the right circumstances.


FWIW, my guess is that if you asked a man in the street whether weak longtermist policies or degrowth environmentalist policies were crazier, he'd probably choose the latter.

As long as we are clear that these debates are not a case of 'the mainstream ethical views of society vs EA-utilitarianism', and instead see them as two alternate non-mainstream ethical views that disagree (mostly about facts but probably about some normative claims), then I think engaging with them is a good idea.

Good news on climate change

I see - that seems really valuable and also exactly the sort of work I was suggesting (I.e. addressing impact uncertainty as well as temperature uncertainty).

In the meantime, are there any sources you could point me to in support of this position, or which respond to objections to current economic climate models?

Also, is your view that the current Econ models are fundamentally flawed but that the economic damage is still nowhere near catastrophic, or that those models are actually reasonable?

Good news on climate change

Firstly, on the assumption that the direct or indirect global catastrophic risk (defined as killing >10% of the global population or doing equivalent damage) of climate change depends on warming of more than 6 degrees, the global catastrophic risk from climate change is at least an order of magnitude lower than previously thought. If you think 4 degrees of warming would be a global catastrophic risk, then that risk is also considerably lower than previously thought: where once it was the most likely outcome, the chance is now arguably lower than 5%.

I think that the crux between climate pessimists and optimists is, at the moment, mostly about how much damage the effects of 2-4 degrees of warming would cause. This is a recent development - in the past, when 6+ degrees of warming seemed more likely, I saw a lot more arguments that it would make Earth uninhabitable; now I see more arguments that 2-4 degrees of warming could cause far more damage than we think. Mark Lynas, in a recent 80k podcast, puts it this way when asked about civilisational collapse:

Mark Lynas: Oh, I think… You want to put me on the spot. I would say it has a 30 to 40% chance of happening at three degrees, and a 60% chance of happening at four degrees, and 90% at five degrees, and 97% at six degrees.

Arden Koehler: Okay. Okay. No, I appreciate you being willing to put numbers on this because I feel that’s always really hard, but it’s really helpful.

Mark Lynas: Maybe 10% at two degrees.

These new environmentalist arguments for climate posing a GCR aren't that we expect to get a lot of warming, but that even really modest amounts of warming, like 2-4 degrees, could be enough to cause terrible famines by reducing global food output suddenly or else knock out key industries in a way that cascades to cause mass deaths and civilisational collapse.

They don't dispute the basic physical effects of 2-4 degrees of warming, but they think that human civilisation is way more fragile than it appears, such that a modest loss of agricultural productivity and/or a couple of key industries being badly damaged by extreme weather could knock out other industries and so on leading to massive economic damage.

Now, I've always been very sceptical of these arguments because they seem to rely on nothing but intuition and go against historical precedent, but also because I thought we had reliable evidence against them - the IPCC's economic models of climate change say that 2 degrees of warming, for example, represents only a few percent of lost economic output.

E.g. this: So the damage is bounded and not that high.

However, I found out recently that these models are so oversimplified as to be close to useless - at least according to Noah Smith:

For example, in 2011, Michael Greenstone and Olivier Deschenes published a paper about climate change and mortality (I studied an earlier version of this paper in a grad school class). Their approach is to measure the effect of temperature on mortality rates in normal times, and use that estimate to predict how a warmer world would affect mortality.

The authors make the obvious and grievous mistake of assuming that climate change affects human mortality only through the direct effects of air temperature — heatstroke, heart attack, freezing, and so on. The word “storm” does not appear in the paper. The word “fire” does not appear in the paper. The word “flood” does not appear in the paper. The authors do mention that climate change might increase disease vectors, but wave this away. Near the end of the paper they write that “it is possible that the incidence of extreme events would increase, and these could affect human health…This study is not equipped to shed light on these issues.”

You don’t say.

The big conceptual mistake here is to assume that whatever economists can easily measure is the sum total of what’s important for the world — that events for which a reliable cost or benefit cannot be easily guessed should simply be ignored in cost-benefit calculations. That is bad science and bad policy advice.

His source for a lot of these criticisms appears to be this (admittedly very clearly biased) paper by Steve Keen, who seems to be some sort of fringe economist. But I see them repeated by environmentalists a lot. The claim is that the economic models are really wrong, and therefore we should expect lots more damage from relatively minor amounts of global warming.

So, if we accept these criticisms of the IPCC's climate economic forecasts (and please let me know if there are good responses to them), then where does that leave us epistemically? It means that the total economic damage caused by e.g. 3 degrees of warming doesn't have a clear, low upper bound, and that the 'extreme fragility' argument doesn't have strong evidence against it.

However, there still isn't any positive evidence for it either! And it still strikes me as implausible, and against historical precedent for how famines work (plus resource shortages are the sort of problem markets are good at solving).

As far as I can tell, this really is the epistemic situation we're in with regard to the economic side of climate change forecasting. In the podcast episode with Rob Wiblin and Mark Lynas, they discuss this extreme fragility idea, and neither cites concrete models to try to assess whether modest losses to agricultural productivity would cause massive famines or not - it's just intuition vs intuition:

Mark Lynas: So that’s, for me, the main question. And one of the most important studies I think that’s ever been performed on this was a study in the PNAS Journal, which looked at what they called synchronous collapse in breadbaskets around the world. So at the moment, the world still produces enough food every single year very reliably. We’ve never had a major food shortage which has been as a result of harvest failure.

Mark Lynas: So I mean, if the U.S. Corn Belt was knocked out one year, that would have a huge impact on food prices, and have a huge impact on food security, in fact, as a direct result of that. But imagine if it really wasn’t just the U.S. Corn Belt. It was Australia, it was Brazil, and Argentina, it was breadbaskets of Eastern Europe, and the former USSR, all of that added together, then you enter a situation which humanity has never experienced before, and which looks very much like famine.

Robert Wiblin: So when I envisage a situation where there’s a huge food shortfall like that, firstly, I think we’ll probably have some heads up that this is coming ahead of time. You start to notice the warning signs earlier, like food prices going up, and food futures going up. And then I imagine that people would start… Because it’ll be a global emergency much worse than the coronavirus, say. You just start seeing everyone starts paying attention to how the hell can we get more calories produced? And fortunately, unlike 500 years ago, we are in the fortunate situation where most people today aren’t already producing food, and most capital today isn’t already allocated towards producing more food. So there’s potentially a bunch of elasticity there where, if food prices go up tenfold, that a lot more people can go out and try to grow food one way or another. And a lot more capital can be reallocated towards agriculture in order to try to ameliorate the effects.

Robert Wiblin: And you can also imagine, just as everyone in March was trying to figure out how the hell do we solve this COVID problem, everyone’s going to be thinking “How can I store food? How can I avoid consuming food? How can we avoid wasting food? Because every calorie looks precious”. And maybe that sense of our adaptability, or our ability to set our mind to something when there’s a huge disaster and just throw everything at it, perhaps makes me more optimistic that we’ll be able to muddle through, perhaps more than you’re envisaging. Do you have a reaction to that?

Mark Lynas: My reaction is: imagine if Donald Trump is in charge of the response. It’s all very well to have optimistic notions of technological progress and adaptive capacity and things. And yeah, if smart people were running the show, that would no doubt be the most likely outcome. But smart people don’t run the show most of the time, in most places, and people are amenable to hate and fear, and denial and conspiracies, and all of those kinds of things as you’ve seen, even in the very short term challenges of COVID.

My point is that, unlike temperature forecasts, there aren't any concrete models to support either Rob's or Mark's position. And elsewhere in the episode Mark claims this scenario is 10% likely with 2 degrees of warming. If he's right, the butterfly effect of 2 degrees of warming causing civilisational collapse is twice as likely as the 5% chance of 4 degrees of warming cited in this post, and it's therefore where the majority of the subjective risk comes from.

Regardless, as the physics side of climate change modelling has started to rule out enough warming to directly end civilisation by clear obvious mechanisms, this 'other climate tail risk' (i.e. what if the fragility argument is right) seems worth investigating if only to exclude the possibility. I still place a very low weight on these arguments being right, but it's probably higher than the chance we get 6+ degrees of warming.

Again, this isn't my area so please let me know if this has all been heavily debunked by climate economists. But currently it seems to me that the main arguments of climate pessimists aren't addressed by ruling out extreme warming scenarios.

The Phil Torres essay in Aeon attacking Longtermism might be good

One substantive point that I do think is worth making is that Torres isn't coming from the perspective of common-sense morality Vs longtermism, but rather a different, opposing, non-mainstream morality that (like longtermism) is much more common among elites and academics.

Yet this Baconian, capitalist view is one of the most fundamental root causes of the unprecedented environmental crisis that now threatens to destroy large regions of the biosphere, Indigenous communities around the world, and perhaps even Western technological civilisation itself.

When he says that this Baconian idea is going to damage civilisation, presumably he thinks that we should do something about this, so he's implicitly arguing for very radical things that most people today, especially in the Global South, wouldn't endorse at all. If we take this claim at face value, it would probably involve degrowth and therefore massive economic and political change.

I'm not saying that longtermism is in agreement with the moral priorities of most people or that Torres's (progressive? degrowth?) worldview is overall similarly counterintuitive to longtermism. His perspective is more counterintuitive to me, but on the other hand a lot more people share his worldview, and it's currently much more influential in politics.

But I think it's still important to point out that Torres's world-view goes against common-sense morality as well, and that like longtermists he thinks it's okay to second guess the deeply held moral views of most people under the right circumstances.

Practically what that means is that, for the reasons you've given, many of the criticisms that don't rely on CSM, but rather on his morality, won't land with everyone reading the article. So I agree that this probably doesn't make longtermism look as bad as he thinks.

FWIW, my guess is that if you asked a man in the street whether weak longtermist policies or degrowth environmentalist policies were crazier, he'd probably choose the latter.
