Matt Boyd

185 karma · Joined Nov 2018

Bio

Health, technology and catastrophic risk - New Zealand https://adaptresearchwriting.com/blog/

Comments (31)

Hi Ross, here's the paper I mentioned in my comment above. (This pre-print uses some data from the preprint version of Xia et al 2022; their paper has just been published in Nature Food with slightly updated numbers, so we'll update ours once the peer review comes back, but the conclusions won't change.) https://www.researchsquare.com/article/rs-1927222/v1

We're now starting a 'NZ Catastrophe Resilience Project' to more fully work up the skeleton details listed in Supplementary Table S1 of our paper, engaging with the public sector, industry, academia, etc. Australia could do exactly the same.

Note that in the Xia paper, NZ's food availability is vastly underestimated due to quirks of the UNFAO dataset. For an estimate of NZ's export calories see our paper here: https://www.medrxiv.org/content/10.1101/2022.05.13.22275065v1 

We've also posted about all this on the Forum here: https://forum.effectivealtruism.org/posts/7arEfmLBX2donjJyn/islands-nuclear-winter-and-trade-disruption-as-a-human

I generally think that all these kinds of cost-effectiveness analyses around x-risk are wildly speculative and sensitive to small changes in assumptions. There is literally no evidence that the $250b would change bio-x-risk by 1% rather than, say, 0.1%, 10%, or even 50%, depending on how it was targeted and what developments it led to. On the other hand, if you do successfully reduce the x-risk by, say, 1%, then you most likely also reduce the risk and consequences of all kinds of other non-existential bio-risks (again depending on the actual investments, discoveries, and developments), so the benefit in all the 'ordinary' cases must be factored in.

I think the most compelling argument for investing in x-risk prevention, without any consideration of future generations, is simply to calculate the deaths in expectation (eg using Ord's probabilities if you are comfortable with them) and to rank risks accordingly. It turns out that at 10% this century, AI risks 8 million lives per annum in expectation (ie 8 billion x 0.1 x 0.01; obviously less than that early in the century, perhaps greater late in the century), and bio-risk is 2.7 million lives per annum in expectation (ie 8 billion x 0.0333 x 0.01). This can be compared to ALL natural disasters, which Our World in Data reports kill ~60,000 people per annum. So there is an argument that we should focus on x-risk to at least some degree purely on expected consequences.

That said, I think it's basically impossible to get robust cost-effectiveness estimates for this kind of work, and most of the estimates I've seen appear implausibly cost-effective. Things never go as well as you thought they would in risk mitigation activities.
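For what it's worth, here's the expected-deaths arithmetic as a small Python sketch (assuming, as above, that Ord's century-level probabilities can be spread uniformly over 100 years, which is of course a simplification):

```python
POPULATION = 8e9  # people alive today
YEARS_IN_CENTURY = 100

# Ord's illustrative century-level existential risk probabilities.
risk_this_century = {"AI": 0.10, "bio": 1 / 30}

for name, p in risk_this_century.items():
    # Uniform per-year share of the century-level probability.
    deaths_per_year = POPULATION * p / YEARS_IN_CENTURY
    print(f"{name}: ~{deaths_per_year:,.0f} expected deaths per year")

# AI: ~8,000,000 expected deaths per year
# bio: ~2,666,667 expected deaths per year
# Compare: ALL natural disasters kill ~60,000 people per year (Our World in Data).
```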

Hi Christian, thanks for your thoughts. You're right to note that islands like Iceland, Indonesia, NZ, etc are also where there's a lot of volcanic activity. Mike Cassidy and Lara Mani briefly summarize potential ash damage in their post on supervolcanoes here (see the table on effects); basically there could be severe impacts on agriculture and infrastructure. I think the main lesson is that at least two prepared islands, in different hemispheres, would be good. That first line of redundancy is probably the most important (also in case one is a target in nuclear war; eg NZ is probably susceptible to an EMP directed at Australia).

That's true in theory. But in practice there is only a small, finite number of items on the list (those that have been formally investigated with a cost-effectiveness analysis). So once those are all funded, it would make sense to fund more cost-effectiveness analyses to grow the table. We don't know how 'worthwhile' most things are to fund, so they are not on the table.

Yes, absolutely, and in almost all cases in health the list of desirable things outstrips the available funding: the 'league table' of interventions is longer than the fraction of them that can be funded. So in health there is basically never an overhang. The same will be true for EA/GCR/x-risk projects too, so I agree there is likely no 'overhang' there either. But it might be that not all the potentially worthwhile projects are yet listed on the 'league table' (whether explicitly or implicitly).

Commonly in health economics and prioritisation (eg New Zealand's Pharmaceutical Management Agency) you calculate the cost-effectiveness (eg cost per QALY) for each medication, and then rank the desired medications from most to least cost-effective. You then take the budget and distribute the funds from the top down until they run out. This is where you rule the line (the bar): nothing below it gets funded unless more budget is allocated. If there are items below the bar worth doing, then there is a funding constraint; if everything has been funded and there are leftover funds, then there is a funding overhang. So whether there is a shortfall, the right amount, or an overhang depends on how long the list of cost-effective desirable projects is, and that depends on people thinking up projects and adding them to the list. An 'overhang' probably stimulates more creativity and thought on potential projects.
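For concreteness, here's a minimal sketch of that league-table procedure in Python (the intervention names and figures are invented for illustration, not PHARMAC data):

```python
interventions = [
    # (name, total cost, QALYs gained) -- illustrative numbers only
    ("Drug A", 2_000_000, 4_000),
    ("Drug B", 5_000_000, 5_000),
    ("Drug C", 1_000_000, 500),
    ("Drug D", 3_000_000, 600),
]
budget = 8_000_000

# Rank from most to least cost-effective (lowest cost per QALY first).
ranked = sorted(interventions, key=lambda x: x[1] / x[2])

funded, remaining = [], budget
for name, cost, qalys in ranked:
    if cost > remaining:
        # The bar is ruled here: nothing below this line gets funded.
        break
    funded.append(name)
    remaining -= cost

print("Funded:", funded)
print("Leftover budget (an 'overhang' if the list is exhausted):", remaining)
```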

Yes, that's true for an individual. Sorry, I meant that the 'today' infographic would be for a person born in, say, 2002, and the 2050 one for someone born in, eg, 2030. Some confusion arose because I was replying about a 'medical infographic for x-risks' generally rather than your specific point about personal risk.

The infographic could perhaps have a 'today' and an 'in 2050' version, with the bubble representing AI risk being very small 'today' compared to eg suicide, cancer, or heart disease, but becoming much bigger in the 2050 version, illustrating the trajectory. Perhaps the standard medical cause-of-death bubbles shrink by 2050, illustrating medical progress.

We can quibble over the numbers, but I think the point here is basically right, and if not right for AI then probably right for biorisk or some other risks: even if you only look at probabilities in the next few years and only care about people alive today, these issues appear to be the most salient policy areas. I've noted in a recent draft that the velocity of the increase in risk (eg from some 0.0001% risk this year to eg 10% per year in 50 years) makes such probability trajectories invisible to, eg, 2-year national risk assessments at present, even though the area under the curve is greater in aggregate than for every other risk; yet in a sense the risk is potentially 'inevitable' (for the demonstration risk profiles I dreamed up) over a human lifetime. This raises the question of how to monitor the trajectory (surely this is one role of national risk assessment, to invest in 'fire alarms', but that requires these risks to be included in the assessment so the monitoring can be prioritized). Persuading policymakers is definitely going to be easier by leveraging decade-long actuarial tables than by having esoteric discussions about total utilitarianism.
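To make that concrete, here's a small Python sketch of the kind of demonstration risk profile I mean (all numbers dreamed up for illustration): an annual risk growing from 0.0001% now to ~10% per year at year 50 is negligible within a 2-year assessment window, yet close to 'inevitable' over an 80-year lifetime.

```python
def annual_risk(year, p0=1e-6, p_max=0.10, ramp_years=50):
    """Exponentially growing annual risk, capped at p_max."""
    growth = (p_max / p0) ** (1 / ramp_years)  # per-year growth factor
    return min(p0 * growth**year, p_max)

def cumulative_risk(horizon):
    """Probability of at least one occurrence within `horizon` years."""
    p_survive = 1.0
    for year in range(horizon):
        p_survive *= 1 - annual_risk(year)
    return 1 - p_survive

print(f"2-year window:    {cumulative_risk(2):.6%}")   # tiny: invisible to a 2-year assessment
print(f"80-year lifetime: {cumulative_risk(80):.1%}")  # dominates other risks in aggregate
```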

Additionally, in the recent FLI 'World Building Contest', the winning entry from Mako Yass made quite a point of the fact that, in the world he built, the impetus for AI safety and global cooperation on this issue came from very clear and very specific scenario development of how exactly AI could come to kill everyone. This is analogous to Carl Sagan and Richard Turco's work on nuclear winter in the early 1980s: a specific picture changed minds. We need this for AI.
