All of Matt Boyd's Comments + Replies

Hopefully everyone who thinks that AI is the most pressing issue takes the time to write (or collaborate on) their best solution in 2,000 words and submit it to the UN's recent consultation call: https://dig.watch/updates/invitation-for-paper-submissions-on-worldwide-ai-governance It's a chance to put AI in the same global governance basket as biological and nuclear weapons, and potentially high leverage from a relatively small task (deadline 30 Sept). 

Difficult to interpret a lot of this, as it seems to be a debate between potentially biased pacifists and a potentially biased military blogger. As with many disagreements, the truth is likely somewhere in the middle (as Rodriguez noted). We need new, independent studies on this that are divorced from the existing pedigrees. That said, much of the catastrophic risk from nuclear war may lie in the likely catastrophic trade disruptions, which alone could lead to famines, given that nearly 2/3 of countries are net food importers and almost no one produces their own liquid fuel to run their agricultural equipment. 

2
Vasco Grilo
8mo
Agreed, Matt! Makes sense. I suppose getting a handle on the climatic effects is mostly relevant for assessing existential risk. Assuming the climatic effects are negligible, my guess is that the probability of extinction by 2100 given a global thermonuclear war without other weapons of mass destruction (namely bio or AI weapons) is less than 10^-5.

Thanks for this post. Reducing risks of great power war is important, but also consider reducing risks from great power war. In particular working on how non-combatant nations can ensure their societies survive the potentially catastrophic ensuing effects on trade/food/fuel etc. Disadvantages of this approach are that it does not prevent the massive global harms in the first place, advantages are that building resilience of eg relatively self-sufficient island refuges may also reduce existential risk from other causes (bio-threats, nuclear war/winter, cata... (read more)

100% agree regarding catastrophe risk. This is where I think advocacy resources should be focused. Governments and people care about catastrophe, as you say; even 1% would be an immense tragedy. And if we spell out exactly how (with one or three or ten examples) AI development leads to a 1% catastrophe, then this can be the impetus for serious institution-building, global cooperation, regulations, research funding, and public discussion of AI risk. And packaged within all that activity can be resources for x-risk work. Focusing on x-risk alienates too many peo... (read more)

Hi Steven, thanks for what I consider a very good post. I was extremely frustrated with this debate for many of the reasons you articulate. I felt that the affirmative side really failed to concretely articulate the x-risk concerns in a way that was clear and intuitive to the audience (people, we need good, clear scenarios of exactly how, step by step, this happens!). Despite years (decades!) of good research and debate on this (including in the present Forum), the words coming out of x-risk proponents' mouths still seem to be 'exponential curve, panic panic, [w... (read more)

4
Steven Byrnes
10mo
Thanks! Hmm, depending on what you mean by “this”, I think there are some tricky communication issues that come up here; see for example this Rob Miles video. On top of that, obviously this kind of debate format is generally terrible for communicating anything of substance and nuance. Melanie is definitely aware of things like the orthogonality thesis; you can read her Quanta Magazine article, for example. Here's a Twitter thread where I was talking with her about it.

More recent works than those cited above: 

Famine after a range of nuclear winter scenarios (Xia et al 2022, Nature Food): https://www.nature.com/articles/s43016-022-00573-0

Resilient foods to mitigate likely famines (Rivers et al 2022, preprint): https://www.researchsquare.com/article/rs-1446444/v1 

Likelihood of New Zealand collapse (Boyd & Wilson 2022, Risk Analysis): https://onlinelibrary.wiley.com/doi/10.1111/risa.14072

New Zealand agricultural production post-nuclear winter (Wilson et al 2022, in press): https://www.medrxiv.org/content/10.1... (read more)

Thanks for this great post mapping out the problem space! I'd add that trade disruption appears to be one of the most significant impacts of nuclear war, and plausibly amplifies both the 'famine' aspect of nuclear winter and a range of potential civilisation-collapse risk factors; see my earlier post here: https://forum.effectivealtruism.org/posts/7arEfmLBX2donjJyn/islands-nuclear-winter-and-trade-disruption-as-a-human Trade disruption disappears into the 'various risk factor mechanisms' category above, but I think it's worth more consideration. H... (read more)

Thanks. I guess this relates to your point about democratically acceptable decisions of governments. If a government is choosing to neglect something (eg because its probability is low, or because they have political motivations for doing so, vested interests etc), then they should only do so if they have information suggesting the electorate has/would authorize this. Otherwise it is an undemocratic decision. 

Thanks for this, great paper. 

  1. I 100% agree on the point that longtermism is not a necessary argument to achieve investment in existential/GCR risk reduction (and indeed might be a distraction). We have recently published on this (here). The paper focuses on the process of National Risk Assessment (NRA). We argue: "If one takes standard government cost-effectiveness analysis (CEA) as the starting point, especially the domain of healthcare where cost-per-quality-adjusted-life-year is typically the currency and discount rates of around 3% are typically u
... (read more)
1
EJT
1y
Thanks for the tip! Looking forward to reading your paper. What do you mean by this?

We transform ourselves all the time, and very powerfully. The entire field of cognitive niche construction is dedicated to studying how the things we create/build/invent/change lead to developmental scaffolding and new cognitive abilities that previous generations did not have. Language, writing systems, education systems, religions, syllabi, external cognitive supports, all these things have powerfully transformed human thought and intelligence. And once they were underway the take-off speed of this evolutionary transformation was very rapid (compared to the 200,000 years spent being anatomically modern with comparatively little change). 

2
Geoffrey Miller
1y
Matt -- good point. Also, humans cognitively enhance ourselves through nootropics such as nicotine and caffeine. These might seem mild at the individual level, but I suspect that at the collective level, they may have helped spark the Enlightenment, the Scientific Revolution, and the Industrial Revolution (as Michael Pollan has argued). And, on a longer time-scale, we've shaped the course of our own genetic evolution through the mate choices we make, about who to combine our genes with. (Something first noticed by Darwin, 1871).

Yes, feel free to translate whatever you like. And ahh, I'm a bit selective about what I post on here. It's just the way I've decided to curate things. I don't mind people linking to it though. 

The GCRMA was included in the final National Defense Authorization Act for FY2023, which became law in December 2022. The text is altered a little from the draft version, but can be read here: https://www.congress.gov/117/bills/hr7776/BILLS-117hr7776enr.pdf#page=1290  I have blogged about it here: https://adaptresearchwriting.com/2023/02/05/us-takes-action-to-avert-human-existential-catastrophe-the-global-catastrophic-risk-management-act-2022/ Not sure why there isn't much discussion about it. It seems like something every country could replicate, ... (read more)

1
Anthony Fleming
1y
That's awesome! Thanks for the update!
2
Ramiro
1y
Thanks. Great post, btw. May I translate a part of it? and why don't you post it here on EA forum?
3
MaxRa
1y
Thanks for the update! Also surprised I haven't seen more discussion.

Hi Ross, here's the paper that I mentioned in my comment above (this pre-print uses some data from Xia et al 2022 in its preprint form, and their paper has just been published in Nature Food with some slightly updated numbers, so we'll update our own once the peer review comes back, but the conclusions etc won't change): https://www.researchsquare.com/article/rs-1927222/v1

We're now starting a 'NZ Catastrophe Resilience Project' to more fully work up the skeleton details that are listed in Supplementary Table S1 of our paper. Engaging with public sector, in... (read more)

I generally think that all these kinds of cost-effectiveness analyses around x-risk are wildly speculative and susceptible to small changes in assumptions. There is literally no evidence that the $250b would change bio-x-risk by 1% rather than, say, 0.1% or 10%, or even 50%, depending on how it was targeted and what developments it led to. On the other hand if you do successfully reduce the x-risk by, say, 1%, then you most likely also reduce the risk/consequences of all kinds of other non-existential bio-risks, again depending on the actual investment/dis... (read more)
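
To make the sensitivity concrete, here is a rough back-of-envelope sketch. All the numbers (baseline risk, people counted, the candidate risk reductions) are assumptions for illustration only, not estimates from any source:

```python
# Back-of-envelope sketch: how the cost-effectiveness conclusion swings with
# the assumed size of the risk reduction. Numbers are illustrative only.

SPEND = 250e9          # the $250b investment discussed above
BASELINE_RISK = 0.03   # assumed baseline bio-x-risk this century (made up)
LIVES_AT_STAKE = 8e9   # people alive today (ignoring future generations)

for reduction in (0.001, 0.01, 0.10, 0.50):   # 0.1%, 1%, 10%, 50% relative reduction
    expected_lives_saved = BASELINE_RISK * reduction * LIVES_AT_STAKE
    cost_per_life = SPEND / expected_lives_saved
    print(f"{reduction:>5.1%} reduction -> ~${cost_per_life:,.0f} per expected life saved")
```

The same $250b looks anywhere from clearly excellent to clearly poor value depending on a single unevidenced assumption, which is the speculativeness point above.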

Hi Christian, thanks for your thoughts. You're right to note that islands like Iceland, Indonesia, NZ, etc are also where there's a lot of volcanic activity. Mike Cassidy and Lara Mani briefly summarize potential ash damage in their post on supervolcanoes here (see the table on effects). Basically there could be severe impacts on agriculture and infrastructure. I think the main lesson is that at least two prepared islands would be good. In different hemispheres. That first line of redundancy is probably the most important (also in case one is a target in n... (read more)

1
christian.r
2y
Thanks, Matt! That makes sense

That's true in theory. But in practice there are only a (small) finite number of items on the list (those that have been formally investigated with a cost-effectiveness analysis). So once those are all funded, it would make sense to fund more cost-effectiveness analyses to grow the table. We don't know how 'worthwhile' it is to fund most things, so they are not on the table. 

Yes, absolutely, and in almost all cases in health the list of desirable things outstrips the available funding. The 'league table' of interventions is longer than the portion of it that is (or can be) funded. So in health there is basically never an overhang. The same will be true for EA/GCR/x-risk projects too, so I agree there is likely no 'overhang' there either. But it might be that all the possibly worthwhile projects are not yet listed on the 'league table' (whether explicitly or implicitly). 

2
Benjamin_Todd
2y
I don't think there can ever be an overhang in that sense, since the point at which something is 'worthwhile' or not is arbitrary. Let's say all the worthwhile interventions produce 10 utils per dollar. If you have enough money to cover all of those, now you can fund everything that produces 1 or more util per dollar. Then after that you can fund everything that produces 0.1 util per dollar, then 0.01, and so on for ever.

Commonly in health economics and prioritisation (eg New Zealand's Pharmaceutical Management Agency) you calculate the cost-effectiveness (eg cost per QALY) of a given medication, and then rank the desired medications from most to least cost-effective. You then take the budget and distribute the funds from the top until they run out. This is where you draw the line (the funding bar). Nothing below it gets funded unless more budget is allocated. If there are items below the bar worth doing, then there is a funding constraint; if everything has been funded and there are lefto... (read more)
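
A minimal sketch of that league-table procedure, with invented interventions and figures (not real PHARMAC data), just to show the mechanics of ranking and the funding bar:

```python
# Rank interventions by cost per QALY, then fund from the top until the
# budget runs out; whatever falls below that point sits 'below the bar'.
# All names and figures are invented for illustration; real processes
# handle partial funding, uncertainty, and other criteria.

interventions = [
    # (name, cost_per_QALY, total_programme_cost)
    ("Drug A", 4_000, 20_000_000),
    ("Drug B", 11_000, 35_000_000),
    ("Drug C", 28_000, 50_000_000),
    ("Drug D", 95_000, 10_000_000),
]

budget = 70_000_000
funded, remaining = [], budget

for name, cost_per_qaly, cost in sorted(interventions, key=lambda x: x[1]):
    if cost > remaining:           # funds have run out: this is where the bar falls
        break
    funded.append(name)
    remaining -= cost

print("Funded (above the bar):", funded)
print("Budget left over:", remaining)
```

With this budget, Drugs A and B are funded and the bar falls at Drug C; the items below it go unfunded unless more budget is allocated, which is the 'funding constraint' case described above.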

2
Benjamin_Todd
2y
But what determines what is 'worth doing' in those cases? You should be able to find ways to keep improving health, just at ever decreasing levels of cost-effectiveness.

Yes, that's true for an individual. Sorry, I meant that the 'today' infographic would be for a person born in, say, 2002, and the 2050 one for someone born in, eg, 2030. Some confusion arose because I was replying about a 'medical infographic for x-risks' generally, rather than specifically about your point about personal risk. 

The infographic could perhaps have a 'today' and an 'in 2050' version, with the bubbles representing the risks: very small for AI 'today' compared to eg suicide, cancer, or heart disease, but much bigger in the 2050 version, illustrating the trajectory. Perhaps the standard medical cause-of-death bubbles shrink by 2050, illustrating medical progress. 

1
AISafetyIsNotLongtermist
2y
I think the probability of death would go significantly up with age, undercutting the effect of this.

We can quibble over the numbers, but I think the point here is basically right, and if not right for AI then probably right for biorisk or some other risks. That point being: even if you only look at probabilities in the next few years and only care about people alive today, these issues appear to be the most salient policy areas. I've noted in a recent draft that the velocity of the increase in risk (eg from some 0.0001% risk this year, to eg 10% per year in 50 years) results in such probability trajectories being invisible to eg 2-year nationa... (read more)
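
A quick sketch of why that velocity matters for short assessment windows. The endpoints (0.0001% per year now, ~10% per year after 50 years) follow the figures mentioned above; the constant-growth path between them is my own illustrative assumption:

```python
# Annual risk growing geometrically from 0.0001%/yr now to ~10%/yr in 50 years,
# and the cumulative probability of catastrophe over different horizons.
# The constant growth rate is an illustrative assumption.

START, END, YEARS = 1e-6, 0.10, 50
GROWTH = (END / START) ** (1 / YEARS)

def annual_risk(year):
    return min(START * GROWTH ** year, 1.0)

def cumulative_risk(horizon):
    p_no_catastrophe = 1.0
    for year in range(horizon):
        p_no_catastrophe *= 1 - annual_risk(year)
    return 1 - p_no_catastrophe

for horizon in (2, 10, 30, 50):
    print(f"{horizon:>2}-year horizon: cumulative risk ~ {cumulative_risk(horizon):.4%}")
```

Over a 2-year national risk assessment window the cumulative risk is a few ten-thousandths of a percent and drops off the register, while the 50-year figure is on the order of tens of percent; the trajectory, not the current snapshot, carries almost all the risk.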

3
Guy Raveh
2y
This can be taken further - if your main priority is people alive today (or yourself) - near term catastrophic risks that aren't x-risks become as important. So, for example, while it may be improbable for a pandemic to kill everyone, I think it's much more probable that one kills, say, at least 90% of people. On the other hand I'm not sure the increase in probability from AI killing everyone to AI killing at least 90% of people is that big. Then again, AI can be misused much worse than other things. So maybe the chance that it doesn't kill me but still, for example, lets a totalitarian government enslave me, is pretty big?

Thanks Nick, interesting thoughts, great to see this discussion, and appreciated. Is there a timeline for when the initial (21 March deadline) applications will all be decided? As you say, it takes as long as it takes, but has some implications for prioritising tasks (eg  deciding whether to commit to less impactful, less-scalable work being offered, and the opportunity costs of this). Is there a list of successful applications? 

About 99% of applicants have received a decision at this point. The remaining 1% have received updates on when they should expect to hear from us next. Some of these require back-and-forth with the applicant and we can't unilaterally conclude the process with all the info we need. And in some of these cases the ball is currently in our court.

We will be reporting on the open call more systematically in our progress update which we publish in a month or so.

3
Kirsten
2y
What I've heard from friends is that everyone's heard back now (either a decision, or an email update saying why a decision might take longer in their case). If you haven't heard anything I'd definitely recommend emailing the fund to follow up. I've known a couple of people who have needed to do this.

Rumtin, I think Jack is absolutely right, and our research, in the process of being written up, will argue that Australia is the hub of complexity most likely to persist successfully in a range of nuclear war scenarios. We include a detailed case study of New Zealand (because of familiarity with the issues), but a detailed case study of Australia is begging to be done. There are key issues (mostly focused around trade, energy forms, societal cohesion, infectious disease resilience, awareness of the main risks - not 'radiation', as much of the public thinks, and for Austra... (read more)

2
Ross_Tieman
2y
I am a bit late to the party here, but I agree that Australia is uniquely well positioned to have an impact on nuclear risk through increasing its resilience, and warrants its own case study. The island-state refuge concept was discussed at EAGx Australia as a potential moonshot project. On top of this, major new industries relevant to food security in an Abrupt Sunlight Reduction Scenario (ASRS), such as macroalgae (see the seaweed blueprint and the Marine Bioproducts Cooperative Research Council), are being set up and scaled, so there may be potential to influence the industry towards resilience/response through policy mechanisms, e.g. minimising tight environmental controls for seaweed farming in an ASRS. If you end up going ahead with the Australia case study I am happy to help on the food security and infrastructure resilience aspects.  

Updates would be fantastic. 

Thanks Rumtin for this, it's a fantastic resource. One thing I note, though, is that some of the author listings are out of order (this is actually a problem in Terra's CSVs too, from which I think some of the content in your database is imported). For example, item 70 by 'Tang' (who is indeed an author) is actually first-authored by 'Wagman', as per the link. I had this problem using Terra, where I kept thinking I was finding papers I'd previously missed, only to discover they were the same paper with the authors in a different order. Maybe at some point a verification/QC process could be implemented (in both databases, Terra included) to clean them up a little. Great work! 
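
For what it's worth, a QC pass like that could be fairly mechanical. Here is a hedged sketch of the idea; the column names ('title', 'authors', 'doi') and the file name are guesses at the CSV layout, not the actual schema of either database:

```python
# Flag rows that appear to be the same paper listed with a different author
# order: group on DOI (or a normalised title) and report groups whose author
# strings disagree. Column names and file name are assumptions.

import csv
from collections import defaultdict

def paper_key(row):
    doi = row.get("doi", "").strip().lower()
    if doi:
        return doi
    return " ".join(row["title"].lower().split())   # normalised-title fallback

groups = defaultdict(list)
with open("database.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        groups[paper_key(row)].append(row)

for rows in groups.values():
    if len(rows) > 1 and len({r["authors"] for r in rows}) > 1:
        print("Possible duplicate with reordered authors:", rows[0]["title"])
```

Even something this simple would catch the 'same paper, authors shuffled' duplicates before they get re-added as new entries.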

A bunker on an island is probably a robust set-up; at least two, given the volcanic nature of eg Iceland and New Zealand: https://adaptresearchwriting.com/island-refuges/ Synergies/complementarities between island and bunker work should be explored. We're currently exploring the islands/nuclear winter strand (EA LTFF), and have put in for FTX too. 

2
Linch
2y
Thanks for the tip!

In a previous project we used the UN FAO food Pocketbook, although I think the way they compile data changed after 2012. We used the 'kcal production per capita' metric, from here: https://www.fao.org/publications/card/en/c/a9f447e8-6798-5e82-82b0-a78724bfff03/ 

You can see what we did in the following two papers:

https://pubmed.ncbi.nlm.nih.gov/33886124/

https://onlinelibrary.wiley.com/doi/abs/10.1111/risa.13398

There are FAO CSVs for more recent years available to download here: https://www.fao.org/faostat/en/#data/FBS 

That's one suggestion. 
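
If it helps, here is a rough sketch of how the 'kcal production per capita' calculation might look against one of those FAOSTAT CSVs. The file name, the 'Production' element label, the item names, the energy densities, and the population figure are all assumptions for illustration, so check them against the actual FBS download:

```python
# Convert FAOSTAT Food Balance Sheet production quantities (assumed to be
# reported in 1000-tonne units) into kcal produced per capita per day for a
# handful of illustrative crops. Column names, labels and figures are assumed.

import pandas as pd

KCAL_PER_TONNE = {                       # illustrative energy densities
    "Wheat and products": 3.34e6,
    "Rice and products": 3.60e6,
    "Potatoes and products": 0.77e6,
}

fbs = pd.read_csv("FAOSTAT_FBS.csv")     # hypothetical file name
prod = fbs[(fbs["Element"] == "Production") & (fbs["Item"].isin(KCAL_PER_TONNE))].copy()
prod["kcal"] = prod["Value"] * 1000 * prod["Item"].map(KCAL_PER_TONNE)

population = pd.Series({"New Zealand": 5.1e6})   # illustrative population
kcal_per_capita_day = prod.groupby("Area")["kcal"].sum().div(population).div(365).dropna()
print(kcal_per_capita_day)
```

A fuller version would cover all food items and pull population from the FAOSTAT population domain rather than hard-coding it.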

Did you ever start/do this project, as per your linked G-doc?

2
MichaelA
2y
No, I didn't - I ended up getting hired by Rethink Priorities and doing work on nuclear risk instead, among other things.

Hi, I have quite a lot to say about this, but I'm actually currently writing a research paper on exactly this issue, and will write a full forum post/link-post once it's completed (ETA June-ish). However, a couple of key observations:

  1. Cost of living is likely to be irrelevant in a nuclear aftermath, as global finance and economics will be in tatters (the value of assets will jump around unpredictably, eg mansions become less important than electric vehicles if global oil trade ceases), and prices will change dramatically according to scarcity, eg food prices. 
  2. Energy inde
... (read more)
3
Ramiro
7mo
Come to Brazil. We can make room for +1 billion individuals, easy. With nuclear winter, we may even manage to get some ski resorts ;) (Of course, only if we don't start a war with Argentina. That's the problem with South America.)
1
AndreFerretti
2y
Thank you so much for the amazing reply! I increased the weight of energy security. I don't like the Global Food Security Index, because it's about the quality of food, not whether the country is producing/exporting food. Which other indicator would you use, and where do I get the data?

'Partitioning' is another concept that might be useful. 

Islands as refuge (basically the same idea as the city idea above): this paper specifically mentions a pandemic as the threat and islands as the solution (ie a risk-first approach), and also considers nuclear (and other) winter scenarios (see the Supplementary material): https://pubmed.ncbi.nlm.nih.gov/33886124/ 

I note Alexey's comment here too, broadly agree with his islands/refuge thinking. 

The literature on group selection and species selection in biology might prove useful. You seem to be on to it tangentially with the butterfly example. 

I enjoyed this. It would seem to work well as an argument for preventing existential risk from the point of view of Scheffler's 'human project', ie the continuation of transgenerational undertakings that we each contribute a tiny piece to, as opposed to the maximizing-total-utility approach. Persistence of the whole seems to have emergent merit beyond the lives of the individuals. 

On the other hand it also made me think of the line Chigurh says in 'No Country for Old Men' > "If the rule that you followed brought you to this, of what use was the rule?" Rule = eg not eating meat, being compassionate etc. [note, I believe there IS use in the rules, but the line still haunts me] 

Thanks Carla and Luke for a great paper. This is exactly the sort of antagonism that those not so deeply immersed in the x-risk literature can benefit from, because it surveys so much and highlights the dangers of a single core framework. Alternatives to the often esoteric and quasi-religious far-future speculations that seem to drive a lot of x-risk work are not always obvious to decision makers, and that gap means the field can be ignored as 'far-fetched'. Democratisation is a critical component (along with apoliticisation). 

I must say that it was... (read more)

Thanks for these comments Noumero, much appreciated!

I really liked this episode, because of Carl's no-nonsense, moderate approach. Though I must say that I'm a bit surprised that some in the EA community appear to see the 'commonsense argument' as some kind of revelation. See for example the 80,000 Hours email newsletter that comes via Benjamin Todd ("Why reducing existential risk should be a top priority, even if you don’t attach any value to future generations", 16 Oct 2021).  I think this argument is just obvious, and is easily demonstrated through relatively simple life-year or QALY calculations. I... (read more)

I liked this comment. 

Another way to see it is that there are two different sorts of arguments for prioritising existential risk reduction - an empirical argument (the risk is large) and a philosophical/ethical argument (even small risks are hugely harmful in expectation, because of the implications for future generations). (Of course this is a bit schematic, but I think the distinction may still be useful.)

I guess the fact that EA is a quite philosophical movement may be a reason why there's been a substantial (but by no means exclusive) focus on the... (read more)

I am also surprised that there are few comments here. Given the long and detailed technical quibbles that often append many of the rather esoteric EA posts, it surprises me that where there is an opportunity to shape tangible influence at a global scale, there is silence. I feel that there are often gaps in the EA community in the places that would connect research and insight with policy and governance. 

Sean is right, there has been accumulating interest in this space. Our paper on the UN and existential risks in 'Risk Analysis' (2020) was awarded 'be... (read more)

Thanks for collating all of this here in one place. I should have read the later posts before I replied to the first one. Thank you too for your bold challenge. I feel like Kant waking from his 'dogmatic slumber'. A few thoughts:

  1. Humanity is an 'interactive kind' (to use Hacking's term). Thinking about humanity can change humanity, and the human future.
  2. Therefore, Ord's 'Long Reflection' could lead to there being no future humans at all (if that was the course that the Long Reflection concluded). 
  3. This simple example shows that we cannot quantify over fu
... (read more)
6
Linch
3y
Hmm, I think 3 does not follow from 2.  If I think there's a 10% chance I will quit my job upon further reflection, and I do the reflection, and then quit my job, this does not mean that before the reflection I cannot make any quantified statements about the expected earnings from my job.

Hi Vaden, 

I'm a bit late to the party here, I know. But I really enjoyed this post. I thought I'd add my two cents' worth. Although I have a long-term perspective on risk and mitigation, and have long-term sympathies, I don't consider myself a strong longtermist. That said, I wouldn't like to see anyone (eg from policy circles) walk away from this debate with the view that it is not worth investing resources in existential risk mitigation. I'm not saying that's what necessarily comes through, but I think there is important middle ground (and this middl... (read more)

Thanks for this response. I guess the motivation for me writing this yesterday was a comment from a member of NZ's public sector, who said basically 'the Atomic Scientists article falls afoul of the principle of parsimony'. So I wanted to give the other side, ie that there actually are some reasons to think lab leak rather than the parsimonious natural explanation. So I completely take your point about balance, but the idea is part of a dialogue rather than a comprehensive analysis, which could have been clearer. Cheers. 

Thanks for these. Super interesting credences here, 19% (that health organisations will conclude lab origin) to 83% (that gain of function was in fact contributory). I guess the strikingly wide range suggests genuine uncertainty. Watch this space with interest. 

Great additional detail, thanks!

Another one to consider, assuming you see it at the same level of analysis as the 8 above, is the spatial trajectory through which the catastrophe unfolds. E.g. a pandemic will spread from its origin(s) and, I'm guessing, is statistically likely to impact certain well-connected regions of the world first. Or a lethal command to a robot army will radiate outward from the army's storage facility. Or nuclear winter will impact certain regions sooner than others. Or ecological collapse due to an unstoppable biological novelty will devour certain kinds ... (read more)

7
SiebeRozendal
4y
Hey Matt, good points! This all relates to what Avin et al. call the spread mechanism of global catastrophic risk. If you haven't read it already, I'm sure you'll like their paper! For some of these we actually do have an inkling of knowledge, though! Nuclear winter is more likely to affect the northern hemisphere, given that practically every nuclear target is located in the northern hemisphere. And it's my impression that in biosecurity, geographical containment is a big issue: an extra case in the same location is much less threatening than an extra case in a new country. As a result, there are checks for a hazardous disease at borders where one might expect it (e.g. currently the borders with the Democratic Republic of the Congo).