Matt Boyd

Health, technology and catastrophic risk - New Zealand https://adaptresearchwriting.com/blog/


Comments

A Sequence Against Strong Longtermism

Thanks for collating all of this here in one place. I should have read the later posts before I replied to the first one. Thank you too for your bold challenge. I feel like Kant waking from his 'dogmatic slumber'. A few thoughts:

  1. Humanity is an 'interactive kind' (to use Hacking's term). Thinking about humanity can change humanity, and the human future.
  2. Therefore, Ord's 'Long Reflection' could lead to there being no future humans at all (if that were the course the Long Reflection concluded was right).
  3. This simple example shows that we cannot quantify over future humans, quadrillions or otherwise, or make long term assumptions about their value. 
  4. You're right about trends, and in this context outcomes are tied up with 'human kinds', as humans can respond to predictions and thereby invalidate them. It makes me think of Godfrey-Smith's observation that natural selection has no inertia: change the selective environment and the observable 'trend' towards some adaptation vanishes.
  5. Cluelessness seems to be some version of the Socratic Paradox (I know only that I know nothing).
  6. RCTs don't just falsify hypotheses; they also provide evidence for causal inference (in spite of hypotheses!).
A case against strong longtermism

Hi Vaden, 

I'm a bit late to the party here, I know. But I really enjoyed this post. I thought I'd add my two cents' worth. Although I have a long-term perspective on risk and mitigation, and have long-term sympathies, I don't consider myself a strong longtermist. That said, I wouldn't like to see anyone (e.g. from policy circles) walk away from this debate with the view that it is not worth investing resources in existential risk mitigation. I'm not saying that's necessarily what comes through, but I think there is important middle ground (and this middle ground may actually, instrumentally, lead to the outcomes that strong longtermists favour, without the need to accept the strong longtermist position).

I think it is just obvious that we should care about the welfare of people here and now. However, the worst thing that can happen to people existing now is for all of them to be killed. So it seems clear that funnelling some resources into x-risk mitigation, here and now, is important. And the primary focus should always be those x-risks that are most threatening in the near term (the target risks will no doubt change with time: I would say biotechnology in the next 5-10 years, then perhaps climate or nuclear, and then AI, followed by rarer natural risks and emerging technological risks, etc., all the while building cross-cutting defences such as institutions and resilience). As you note, every generation becomes the present generation, and every x-risk will have its time. We can't ignore future x-risks, for this very reason. Each future risk 'era' will become present, and we had better be ready. So resources should be invested in future x-risks, or at least in understanding their timing.

The issue I have with strong longtermism lies in the utility calculations. The Greaves/MacAskill paper presents a table of future human lives based on the carrying capacity of the Earth, solar system, etc. However, even here today we do not advocate some imperative that humans must reproduce right up to the carrying capacity of the Earth. In fact, many of us think this would be wrong for a number of reasons. To factor 'quadrillions', or any definite number at all, into the calculations is to miss the point that we (the moral agents) get to determine, morally speaking, the right number of future people, and we might not yet know how many that is. Uncertainty about moral progress means that we cannot know what the morally correct number is, because theory and argument might evolve over time (and yes, it's probably obvious, but I don't accept that non-actual and never-actual people can be harmed, and I don't accept that non-existence is a harm).

However, there seems to be value in SOME humans persisting so that these projects might be continued and hopefully resolved. Therefore, I don't think we should be putting speculative utilities into our 'in expectation' calculations. There are arguments for preventing x-risk that are independent of strong longtermism, and the emotional response strong longtermism generates from many, potentially including averse policymakers, makes it a risky strategy to push. Even if EA is motivated by strong longtermism, it may be useful to advocate an 'instrumental' theory of value in order to achieve the strong longtermist agenda. There is a possibility that some of EA's views can themselves be an information hazard. Being right is not always being effective, and therefore not always altruistic.
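To make the worry about speculative utilities concrete, here is a minimal toy sketch (all numbers are my own illustrative assumptions, not figures from the Greaves/MacAskill paper): once an astronomical payoff is admitted into an expected-value comparison, it can dominate regardless of how small and arbitrary the probability attached to it is.

```python
# Toy illustration only; every number below is an assumption made up for the example.

near_term_lives = 1e4        # hypothetical lives saved by a well-evidenced intervention
speculative_future_lives = 1e15  # 'quadrillions' of future lives, taken as given
tiny_probability = 1e-9      # an essentially arbitrary chance of affecting that future

ev_near_term = 1.0 * near_term_lives                        # 10,000
ev_speculative = tiny_probability * speculative_future_lives  # 1,000,000

print(ev_near_term, ev_speculative)
# The speculative option 'wins' by two orders of magnitude, even though every
# input on its side of the ledger is conjecture. That is the sense in which
# speculative utilities distort 'in expectation' reasoning.
```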


Is SARS-CoV-2 a modern Greek Tragedy?

Thanks for this response. I guess the motivation for writing this yesterday was a comment from a member of NZ's public sector, who said, basically, that 'the Atomic Scientists article falls afoul of the principle of parsimony'. So I wanted to give the other side, i.e. there actually are some reasons to think lab leak rather than the parsimonious natural explanation. I completely take your point about balance, but the piece is part of a dialogue rather than a comprehensive analysis, which I could have made clearer. Cheers.

Is SARS-CoV-2 a modern Greek Tragedy?

Thanks for these. Super interesting credences here, ranging from 19% (that health organisations will conclude lab origin) to 83% (that gain-of-function research was in fact contributory). I guess the strikingly wide range suggests genuine uncertainty. I'll watch this space with interest.

Are Humans 'Human Compatible'?

Great additional detail, thanks!

Eight high-level uncertainties about global catastrophic and existential risk

Another one to consider, assuming you see it at the same level of analysis as the eight above, is the spatial trajectory through which the catastrophe unfolds. E.g. a pandemic will spread from its origin(s) and, I'm guessing, is statistically likely to impact certain well-connected regions of the world first. Or a lethal command to a robot army will radiate outward from the storage facility for the army. Or nuclear winter will impact certain regions sooner than others. Or ecological collapse due to an unstoppable biological novelty will devour certain kinds of environment more quickly (the same possibly for grey goo), etc. There may be systematic regularities to which spaces on Earth are affected, and when. These are currently completely unknown. But knowledge of these patterns could help target certain kinds of resilience and mitigation measures to where they are likely to have time to succeed before themselves being impacted.