John G. Halstead

8520 · Joined Jan 2017

Bio

John Halstead - Research Fellow at the Forethought Foundation. Formerly Head of Applied Research at Founders Pledge and researcher at the Centre for Effective Altruism. DPhil in political philosophy from Oxford.

Comments
628

Hi David, this was from before he was banned from the forum, but after his beef with me started, while he was doing all the white supremacy articles about me, Beckstead, and others. He had a long-standing dispute with the people mentioned, and independently at the time he was especially annoyed at me for criticising him. I think that is what led him to namecheck me in his allegation.

I hadn't heard of one of the people he was accusing at the time he wrote the Facebook post. I have no idea whether or not the allegations are true; I just don't understand why he involved me in them.

Given that people are sharing evidence on Torres, I thought I would chime in. I agree it would have been better for the OP to share with Zoe before posting, but I also think working with Torres is a mistake. 

My relationship with Torres started after I criticised something he wrote about Steven Pinker on Facebook - my critique was about 3 sentences. My critique was supported by others in the community, including Will MacAskill. I think this was the start of Torres becoming disenchanted with EA. 

From this point on, he published several now-infamous pieces suggesting that I and others in EA support white supremacy. He also sent me numerous messages on Facebook after I had stopped responding. In this Facebook post, Torres inexplicably namechecks me while accusing some people of being rapists/paedophiles (their names are redacted).

My whole experience with Torres has been surreal - for one small piece of criticism, he went after me for years. I know he has done the same to others: some people he has gone after have needed counselling, and I think people should take that into account when they interact with Torres. 

For people who are confused that Torres, who wrote a book defending the FHI-house view of x-risk in 2017 and endorsed that view until his review of Pinker in 2019, now thinks EA is so bad, it seems to be because he thinks he faced some rejection by the community. 

I would say that for all of the 'non-EA' reviewers, the review was very extensive, and this was also true of some of the EA reviewers (the others were more pushed for time). The non-EA expert reviewers were also compensated for their review, to incentivise them to review in depth.

It is true that I ultimately decided whether or not to publish, so this makes it different to peer review. Someone mentioned to me that some people mean by 'peer review' that the reviewers have to agree for publication to be ok, but this wasn't the case for this report. Though it was reviewed by experts, ultimately I decided whether or not to publish it in its final state.

Thanks for this. Someone else raised some issues with the moist greenhouse bit, and I need to revise. I still think the Ord estimate is too high, but I think the discussion in the report could be crisper. I'll report back once I've made changes.

I'd say the depth of review was similar to peer review yes, though it is true to say that publication was not conditional on the peer reviewers okaying what I had written. As mentioned, the methodology was reviewed, yes. So, this is my view, having taken on significant expert input. 

A natural question is whether my report should be given less weight than, e.g., a peer-reviewed paper in a prominent journal. I think as a rule, a good approach is to start by getting a sense of what the weight of the literature says, and then to explore the substantive arguments made. For the usual reasons, we should expect any randomly selected paper to be false. Papers that make claims far outside the consensus position and get published in prominent journals are especially likely to be false. There is also scope for certain groups of scientists to review one another's papers, such that bad literatures can snowball.

This isn't to say that any random person writing about climate change will be better than a random peer-reviewed paper. But I think there are reasons to put more weight on the views of someone who has good epistemics (not saying this is true of me, but one might think it is true of some EA researchers) and who is actually talking about the thing we are interested in, i.e. the longtermist import of climate change. Most papers just aren't focusing on that, but will use similar terminology. E.g. there is a paper by Xu and Ramanathan which says that climate change is an existential risk, but uses that term in a completely different way to EAs.

I will give some examples of the flaws of the traditional peer review process as applied to some papers on the catastrophic side of things. 

1. A paper that is often brought up in climate catastrophe discussions is Steffen et al (2018), the 'Hothouse Earth' paper. That paper has now been cited more than 2,000 times. For reasons I discuss in the report, I think it is surprising that the paper was published. The IPCC also disagrees with it.

2. The Kemp et al (2022) PNAS paper (also written by many planetary boundaries people) was peer reviewed, but also contains several errors.

For instance, it says "Yet, there remain reasons for caution. For instance, there is significant uncertainty over key variables such as energy demand and economic growth. Plausibly higher economic growth rates could make RCP8.5 35% more likely (27)." 

The cite here in note (27) is to Christensen et al (2018), which actually says "Our results indicate that there is a greater than 35% probability that emissions concentrations will exceed those assumed in RCP8.5." i.e. their finding is about the percentage point chance of RCP8.5, not about an increase in the relative risk of RCP8.5. 
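The gap between the two readings is easy to see with a toy calculation. The 20% baseline below is purely hypothetical, chosen only to illustrate the difference; Christensen et al's actual finding is the absolute figure.

```python
# Two readings of "35%" applied to RCP8.5. The 20% baseline is hypothetical,
# used only to illustrate the difference between the two readings.
baseline = 0.20  # hypothetical baseline probability of exceeding RCP8.5 concentrations

relative_reading = baseline * 1.35  # "35% more likely": a 35% relative increase
absolute_reading = 0.35             # ">35% probability": an absolute probability

print(f"Relative reading: {relative_reading:.0%}")  # 27% under the hypothetical baseline
print(f"Absolute reading: {absolute_reading:.0%}")  # 35%
```

Under any plausible baseline the two readings come apart, which is why the citation as written in the Kemp et al paper misstates the Christensen et al result.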

Another example: "While an ECS below 1.5 °C was essentially ruled out, there remains an 18% probability that ECS could be greater than 4.5 °C (14)." 

The cite here is to the entire WG1 IPCC report (not that useful for checking, but that aside...). The latest IPCC report says "a best estimate of equilibrium climate sensitivity of 3°C, with a very likely range of 2°C to 5°C. The likely range [is] 2.5°C to 4°C". The IPCC says "Throughout the WGI report and unless stated otherwise, uncertainty is quantified using 90% uncertainty intervals. The 90% uncertainty interval, reported in square brackets [x to y], is estimated to have a 90% likelihood of covering the value that is being estimated. The range encompasses the median value, and there is an estimated 10% combined likelihood of the value being below the lower end of the range (x) and above its upper end (y). Often, the distribution will be considered symmetric about the corresponding best estimate, but this is not always the case. In this Report, an assessed 90% uncertainty interval is referred to as a 'very likely range'. Similarly, an assessed 66% uncertainty interval is referred to as a 'likely range'."

So, the 66% CI is 2.5ºC to 4ºC and the 90% CI is 2ºC to 5ºC. If these intervals are symmetric in probability mass, then there is a 17% chance of >4ºC and a 5% chance of >5ºC. It's unclear whether the distribution is symmetric or not (the IPCC does not say), but if it is, then the '18% chance of >4.5ºC' claim in Climate Endgame is wrong. So, a key claim in that paper, about the main variable of interest in climate science, cannot be inferred from the given reference.
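As a rough sanity check, one can interpolate the implied exceedance probability at 4.5ºC from the two IPCC tail figures (17% above 4ºC, 5% above 5ºC, on the symmetry assumption). The log-linear interpolation scheme here, i.e. treating the tail as roughly exponential, is my own illustrative assumption, not something the IPCC states:

```python
import math

# IPCC tail figures, assuming the stated intervals are symmetric in probability mass:
p_above_4 = 0.17  # P(ECS > 4.0 degC), from the 66% likely range 2.5-4 degC
p_above_5 = 0.05  # P(ECS > 5.0 degC), from the 90% very likely range 2-5 degC

# Interpolate the exceedance probability at 4.5 degC, assuming (illustratively)
# a roughly exponential tail, i.e. linear in log-probability.
p_above_4_5 = math.exp((math.log(p_above_4) + math.log(p_above_5)) / 2)

print(f"Interpolated P(ECS > 4.5 degC) ~ {p_above_4_5:.0%}")  # about 9%, well below 18%
```

Whatever the exact tail shape, any value between the two IPCC bounds must lie between 5% and 17%, so the 18% figure in Climate Endgame sits outside what the cited source supports.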

3. Jehn et al have published two papers cited in Kemp et al (2022), one of which says that "More likely higher end warming scenarios of 3 °C and above, despite potential catastrophic impacts, are severely neglected." This is just not true, but it nevertheless made it through peer review. Almost every single climate impact study reports the impact of 4.4ºC of warming. There is barely a single chart in the entire IPCC impacts report that does not report it. We can perhaps quibble over what 'severely neglected' means, but it doesn't mean 'shown in every single chart in the IPCC climate impacts book'. It is surprising that this got through peer review.

**

As I have said, these are just single studies. I am consistently impressed by how good the IPCC is at reporting the median view in the literature, given how politicised the whole process must be. 

**

I also do not think there is any tendency to downplay risks in the climate science literature. If you look at studies on publication bias in climate science, they find that effect sizes in abstracts in climate change papers have a tendency to be significantly inflated relative to the main text. This is especially pronounced in high impact journals. I have also found this from personal experience. Overall, I think in some cases the risks are overstated, in some they are understated, but there is no systematic pattern. 

Probably the best way to examine whether my substantive conclusions are wrong would be to raise some substantive criticisms or carry out a red team; I would welcome this. I emphasise that if my arguments are correct, then the scale of biorisk is numerous orders of magnitude larger than that of climate change.

One issue I have with these arguments for pluralism, and for sometimes obeying something like common-sense morality for its own sake and independent of utilitarian justification, is that common-sense morality is crazy/impossible to follow in almost all normal decision situations if you think its implications through properly.

One argument for this is MacAskill's argument that deontology requires paralysis. Every time you leave the house, you foreseeably cause someone to die by changing the flow of traffic. Cars also contribute to air pollution, which foreseeably kills people, suggesting that emitting any amount of pollution is impermissible. This violates nonconsequentialist side-constraints. I don't understand how you can give some weight to this type of view.

To be clear, this is not the point that we should follow utilitarian morality when the stakes are high from a utilitarian point of view.

This is a very thoughtful critique. What do you make of the argument that The Precipice and WWOTF work well together as a pair that targets different markets, and could be introduced at different stages as people get into EA?

"Refugees: ~216 million climate refugees by 2050 (World Bank Groundswell Report) caused by droughts and desertification, sea-level rise, coastal flooding, heat stress, land loss, and disruptions to natural rainfall patterns"

The groundswell report is about voluntary internal migration, so it is not about refugees, who are typically defined as involuntarily displaced people crossing national borders. 

I thought that was what was meant by AGI? I agree that the operationalisation doesn't state that explicitly, but I thought it was implied. Do you think I should change it in the report?

Which impacts do you think I have missed? Can you explain why the perspective you take would render any of my substantive conclusions false?

I'm not sure what you're talking about with self-citation. When do I cite myself?

Another way to look at it is to think about the impacts included in climate-economy models. Takakura et al (2019), which is one of the more comprehensive, includes:

  • Fluvial flooding
  • Coastal inundation
  • Agriculture
  • Undernourishment
  • Heat-related excess mortality
  • Cooling/heating demand
  • Occupational-health costs
  • Hydroelectric generation capacity
  • Thermal power generation capacity

I discuss all of those except cooling/heating demand and hydro/thermal generation capacity, as they seem like small factors relative to overall climate risk. In addition, I discuss tipping points, runaway greenhouse effects, crime, civil and interstate conflict, and ecosystem collapse.
 
