All of Matt Boyd's Comments + Replies

Islands, nuclear winter, and trade disruption as a human existential risk factor

Hi Christian, thanks for your thoughts. You're right to note that islands like Iceland, Indonesia, NZ, etc are also where there's a lot of volcanic activity. Mike Cassidy and Lara Mani briefly summarize potential ash damage in their post on supervolcanoes here (see the table on effects). Basically there could be severe impacts on agriculture and infrastructure. I think the main lesson is that at least two prepared islands would be good. In different hemispheres. That first line of redundancy is probably the most important (also in case one is a target in n... (read more)

1 · christian.r · 8d
Thanks, Matt! That makes sense
Let's stop saying 'funding overhang'

That's true in theory. But in practice there are only a (small) finite number of items on the list (those that have been formally investigated with a cost-effectiveness analysis). So once those are all funded, it would make sense to fund more cost-effectiveness analyses to grow the table. We don't know how 'worthwhile' it is to fund most things, so they are not on the table.

Let's stop saying 'funding overhang'

Yes, absolutely, and in almost all cases in health the list of desirable things outstrips the available funding. The 'league table' of interventions is longer than the portion of it that is/can be funded. So in health there is basically never an overhang. The same will be true for EA/GCR/x-risk projects too, so I agree there is likely no 'overhang' there either. But it might be that not all the possibly worthwhile projects are yet listed on the 'league table' (whether explicitly or implicitly).

2 · Benjamin_Todd · 1mo
I don't think there can ever be an overhang in that sense, since the point at which something is 'worthwhile' or not is arbitrary. Let's say all the worthwhile interventions produce 10 utils per dollar. If you have enough money to cover all of those, now you can fund everything that produces 1 or more util per dollar. Then after that you can fund everything that produces 0.1 util per dollar, then 0.01, and so on for ever.
Let's stop saying 'funding overhang'

Commonly in health economics and prioritisation (eg New Zealand's Pharmaceutical Management Agency) you calculate the cost-effectiveness (eg cost per QALY) for each candidate medication, and then rank the desired medications from most to least cost-effective. You then take the budget and distribute the funds from the top until they run out. This is where you draw the line (the funding bar). Nothing below it gets funded unless more budget is allocated. If there are items below the bar worth doing then there is a funding constraint; if everything has been funded and there are lefto... (read more)
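A minimal sketch of that ranking-and-funding procedure, with entirely hypothetical medications and cost figures, just to show where the 'bar' falls:

```python
# Hypothetical league table: (name, cost per QALY, total cost to fund fully).
interventions = [
    ("Medication A", 3_000, 40_000_000),
    ("Medication B", 8_000, 25_000_000),
    ("Medication C", 15_000, 60_000_000),
    ("Medication D", 45_000, 10_000_000),
]
budget = 90_000_000

# Rank from most to least cost-effective (lowest cost per QALY first), then fund
# from the top until the money runs out. The first item that can't be funded
# marks the bar; everything below it waits for more budget.
funded, remaining, bar = [], budget, None
for name, cost_per_qaly, total_cost in sorted(interventions, key=lambda x: x[1]):
    if total_cost <= remaining:
        funded.append(name)
        remaining -= total_cost
    else:
        bar = cost_per_qaly
        break

print("Funded:", funded)
print("Unspent budget:", remaining, "| bar at cost/QALY:", bar)
# Only if every listed item is funded AND money remains (bar stays None) would
# I call it an 'overhang' - and then the right move is to fund more
# cost-effectiveness analyses to lengthen the table.
```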

2 · Benjamin_Todd · 1mo
But what determines what is 'worth doing' in those cases? You should be able to find ways to keep improving health, just at ever decreasing levels of cost-effectiveness.
My Most Likely Reason to Die Young is AI X-Risk

Yes, that's true for an individual. Sorry, I was more meaning that the 'today' infographic would be for a person born in, say, 2002, and the 2050 one for someone born in eg 2030. Some confusion arose because I was replying about the 'medical infographic for x-risks' idea generally rather than about your specific point on personal risk.

My Most Likely Reason to Die Young is AI X-Risk

The infographic could perhaps have a 'today' and an 'in 2050' version, with the bubbles representing the risks being very small for AI 'today' compared to eg suicide, cancer, or heart disease, but becoming much bigger in the 2050 version, illustrating the trajectory. Perhaps the standard medical cause-of-death bubbles shrink by 2050, illustrating medical progress.

1 · AISafetyIsNotLongtermist · 1mo
I think the probability of death would go significantly up with age, undercutting the effect of this.
My Most Likely Reason to Die Young is AI X-Risk

We can quibble over the numbers, but I think the point here is basically right, and if not right for AI then probably right for biorisk or some other risks. That point being that even if you only look at probabilities in the next few years and only care about people alive today, these issues still appear to be the most salient policy areas. I've noted in a recent draft that the velocity of the increase in risk (eg from some 0.0001% risk this year to eg 10% per year in 50 years) results in such probability trajectories being invisible to eg 2-year nationa... (read more)
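To make the 'velocity' point concrete, here's a toy calculation (my own illustrative numbers, assuming the annual risk grows exponentially from 0.0001% now to 10% per year at year 50):

```python
# Toy illustration: annual catastrophe risk grows exponentially from 0.0001%
# (year 0) to 10% per year (year 50). All numbers are assumptions for illustration.
r0, r50, years = 1e-6, 0.10, 50
growth = (r50 / r0) ** (1 / years)          # constant per-year growth factor

survival, cumulative = 1.0, []
for t in range(years + 1):
    annual_risk = r0 * growth ** t
    survival *= (1 - annual_risk)
    cumulative.append(1 - survival)          # chance the catastrophe has hit by year t

print(f"Risk visible inside a 2-year planning window: {cumulative[1]:.5%}")
print(f"Cumulative risk by year 25: {cumulative[25]:.2%}")
print(f"Cumulative risk by year 50: {cumulative[50]:.1%}")
```

The first couple of years look vanishingly small, which is exactly why a 2-year national planning horizon never 'sees' the trajectory, even though the cumulative risk over the full period is substantial.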

3 · Guy Raveh · 1mo
This can be taken further - if your main priority is people alive today (or yourself) - near term catastrophic risks that aren't x-risks become as important. So, for example, while it may be improbable for a pandemic to kill everyone, I think it's much more probable that one kills, say, at least 90% of people. On the other hand I'm not sure the increase in probability from AI killing everyone to AI killing at least 90% of people is that big. Then again, AI can be misused much worse than other things. So maybe the chance that it doesn't kill me but still, for example, lets a totalitarian government enslave me [https://forum.effectivealtruism.org/posts/hJDid3goqqRAE6hFN/my-most-likely-reason-to-die-young-is-ai-x-risk?commentId=NNPrgYytuFrDZnNtM] , is pretty big?
Some clarifications on the Future Fund's approach to grantmaking

Thanks Nick, interesting thoughts, great to see this discussion, and appreciated. Is there a timeline for when the initial (21 March deadline) applications will all be decided? As you say, it takes as long as it takes, but it has some implications for prioritising tasks (eg deciding whether to commit to less impactful, less scalable work that is on offer, and the opportunity costs of doing so). Is there a list of successful applications?

About 99% of applicants have received a decision at this point. The remaining 1% have received updates on when they should expect to hear from us next. Some of these require back-and-forth with the applicant, so we can't unilaterally conclude the process until we have all the info we need. And in some of these cases the ball is currently in our court.

We will be reporting on the open call more systematically in our progress update, which we will publish in a month or so.

3 · Kirsten · 3mo
What I've heard from friends is that everyone's heard back now (either a decision, or an email update saying why a decision might take longer in their case). If you haven't heard anything I'd definitely recommend emailing the fund to follow up. I've known a couple of people who have needed to do this.
Identifying the most pressing global problems for an Australian policy context

Rumtin, I think Jack is absolutely right, and our research, currently being written up, will argue that Australia is the most likely successful persisting hub of complexity in a range of nuclear war scenarios. We include a detailed case study of New Zealand (because of familiarity with the issues), but a detailed case study of Australia is begging to be done. There are key issues (mostly focused around trade, energy forms, societal cohesion, infectious disease resilience, awareness of the main risks - not 'radiation' as much of the public thinks, and for Austra... (read more)

1 · rumtin · 3mo
Thanks Matt! Yes, would be very keen to see the paper. We had definitely not factored the resilience side of Australia's role on nuclear issues into our scoring enough. We'll be sure to include it as part of the more detailed scan of each of these policy issues. Your paper will be a really useful guide for that work.
Release of Existential Risk Research database

Thanks Rumtin for this, it's a fantastic resource. One thing I note, though, is that some of the author listings are out of order (this is actually a problem in Terra's CSVs too, from which I think some of the content in your database is imported). For example, item 70 by 'Tang' (who is indeed an author) is actually first-authored by 'Wagman', as per the link. I had this problem using Terra, where I kept thinking I was finding papers I'd previously missed, only to discover they were the same paper but with the authors in a different order. Maybe at some point a verification/QC process could be implemented (in both databases, Terra too) to clean them up a little. Great work!
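For what it's worth, a minimal sketch of the kind of QC check I have in mind, assuming each record carries a DOI and an author string (the field names 'doi' and 'authors' are hypothetical; adapt them to the actual CSV): it looks each DOI up on Crossref and flags rows whose stored first author doesn't match the canonical record.

```python
# Sketch of an author-order check against Crossref. Column names are assumptions.
import csv
import requests

def crossref_first_author(doi):
    """Return the family name of the first-listed author for a DOI, per Crossref."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=30)
    if resp.status_code != 200:
        return None
    authors = resp.json()["message"].get("author", [])
    for a in authors:
        if a.get("sequence") == "first":
            return a.get("family")
    return authors[0].get("family") if authors else None

def check_author_order(csv_path):
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            stored_first = row["authors"].split(";")[0].strip()  # assumed 'Surname, A.; ...' format
            canonical = crossref_first_author(row["doi"])
            if canonical and canonical.lower() not in stored_first.lower():
                print(f"Check author order: stored '{stored_first}', Crossref has '{canonical}' ({row['doi']})")

# check_author_order("xrisk_database.csv")  # hypothetical filename
```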

2 · rumtin · 4mo
Thanks Matt! I hadn't realised that. Yes, I pulled straight from Terra for many of the publications, so the author order will appear the same. To be honest, it's not a priority for me to rectify at this stage. With Terra being updated, it's probably not worth spending too much time cleaning this up (e.g. I've also noticed that sometimes names or titles appear in all caps, or there is inconsistent use of first names vs initials). At this stage, I'm thinking minor updates and changes, such as adding a column for the specific risk (in addition to the risk category), expanding to other years, and expanding to other x-risk related organisations.
Help us make civilizational refuges happen

A bunker on an island is probably a robust set-up - at least two islands, given the volcanic nature of eg Iceland and New Zealand: https://adaptresearchwriting.com/island-refuges/ Synergies/complementarities between island and bunker work should be explored. We're currently exploring the islands/nuclear winter strand (EA LTFF), and have put in for FTX too.

2 · Linch · 2mo
Thanks for the tip!
Best Countries during Nuclear War

In a previous project we used the UN FAO food Pocketbook, although I think the way they compile data changed after 2012. We used the 'kcal production per capita' metric, from here: https://www.fao.org/publications/card/en/c/a9f447e8-6798-5e82-82b0-a78724bfff03/ 

You can see what we did in the following two papers:

https://pubmed.ncbi.nlm.nih.gov/33886124/

https://onlinelibrary.wiley.com/doi/abs/10.1111/risa.13398

There are FAO CSVs for more recent years available to download here: https://www.fao.org/faostat/en/#data/FBS 

That's one suggestion. 
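If it helps, here's a rough sketch of how one might pull per-capita dietary energy figures out of a FAOSTAT Food Balance Sheets CSV download; the element label and column names below are assumptions, so check them against the file you actually download:

```python
# Rough sketch: per-country dietary energy from a FAOSTAT Food Balance Sheets CSV
# (downloaded from https://www.fao.org/faostat/en/#data/FBS). Labels are assumptions.
import pandas as pd

fbs = pd.read_csv("FoodBalanceSheets.csv")  # hypothetical filename for the download

kcal = fbs[
    (fbs["Element"] == "Food supply (kcal/capita/day)")  # assumed element label
    & (fbs["Item"] == "Grand Total")                      # total across all food items
    & (fbs["Year"] == 2019)
]

ranking = (
    kcal[["Area", "Value"]]
    .rename(columns={"Area": "country", "Value": "kcal_per_capita_per_day"})
    .sort_values("kcal_per_capita_per_day", ascending=False)
)
print(ranking.head(20))
```

(Note: the metric we used was kcal production per capita, which isn't identical to food supply per capita, so some adaptation of the filtering would be needed.)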

Modelling the odds of recovery from civilizational collapse

Did you ever start/do this project, as per your linked G-doc?

2 · MichaelA · 5mo
No, I didn't - I ended up getting hired by Rethink Priorities and doing work on nuclear risk instead, among other things.
Best Countries during Nuclear War

Hi, I have quite a lot to say about this, but I'm actually currently writing a research paper on exactly this issue, and will write a full forum post/link-post once it's completed (ETA June-ish). However, a couple of key observations:

  1. Cost of living is likely to be irrelevant in a nuclear aftermath, as global finance and economics will be in tatters (the value of assets will jump around unpredictably, eg mansions becoming less important than electric vehicles if global oil trade ceases), and prices will change dramatically according to scarcity, eg food prices.
  2. Energy inde
... (read more)
1 · AndreFerretti · 5mo
Thank you so much for the amazing reply! I increased the weight of energy security. I don't like the Global Food Security Index, because it's about the quality of food, not whether the country is producing/exporting food. Which other indicator would you use, and where do I get the data?
Mitigating x-risk through modularity

'Partitioning' is another concept that might be useful. 

Islands as refuge (basically the same idea as the city idea above): this paper specifically mentions pandemic as the threat and islands as the solution (ie a risk-first approach), and also considers nuclear (and other) winter scenarios (see the Supplementary material): https://pubmed.ncbi.nlm.nih.gov/33886124/

I note Alexey's comment here too, broadly agree with his islands/refuge thinking. 

The literature on group selection and species selection in biology might prove useful. You seem to be on to it tangentially with the butterfly example. 

The Unweaving of a Beautiful Thing

I enjoyed this. It would seem to work well as an argument for preventing existential risk from the point of view of Scheffler's 'human project', ie the continuation of transgenerational undertakings that we each contribute a tiny piece to, as opposed to the maximizing-total-utility approach. Persistence of the whole seems to have emergent merit beyond the lives of the individuals.

On the other hand it also made me think of the line Chigurh says in 'No Country for Old Men' > "If the rule that you followed brought you to this, of what use was the rule?" Rule = eg not eating meat, being compassionate etc. [note, I believe there IS use in the rules, but the line still haunts me] 

Democratising Risk - or how EA deals with critics

Thanks Carla and Luke for a great paper. This is exactly the sort of antagonism that those not so deeply immersed in the xrisk literature can benefit from, because it surveys so much and highlights the dangers of a single core framework. Alternatives to the often esoteric and quasi-religious far-future speculations that seem to drive a lot of xrisk work are not always obvious to decision makers, and that gap means the field can be ignored as 'far-fetched'. Democratisation is a critical component (along with apoliticisation).

I must say that it was... (read more)

[Creative Writing Contest] The Sequence Matters

Thanks for these comments Noumero, much appreciated!

Carl Shulman on the common-sense case for existential risk work and its practical implications

I really liked this episode, because of Carl's no-nonsense, moderate approach. Though I must say I'm a bit surprised that some in the EA community appear to see the 'commonsense argument' as some kind of revelation. See for example the 80,000 Hours email newsletter that comes via Benjamin Todd ("Why reducing existential risk should be a top priority, even if you don’t attach any value to future generations", 16 Oct, 2021). I think this argument is just obvious, and is easily demonstrated through relatively simple life-year or QALY calculations. I... (read more)
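For instance, here is the back-of-the-envelope version of that calculation, with illustrative numbers only (the population, remaining life expectancy, and risk probability are all assumptions):

```python
# Back-of-the-envelope expected life-years lost from an extinction-level risk,
# counting only people alive today. All numbers are illustrative assumptions.
population = 8e9            # people alive today
avg_remaining_years = 40    # rough average remaining life expectancy per person
p_catastrophe = 0.001       # assumed 0.1% chance of an extinction-level event

expected_life_years_lost = population * avg_remaining_years * p_catastrophe
print(f"Expected life-years lost: {expected_life_years_lost:.1e}")  # ~3.2e+08
```

Even at a very conservative 0.1% probability, the expected loss runs to hundreds of millions of life-years for people alive today, which is the commonsense point.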

I liked this comment. 

Another way to see it is that there are two different sorts of arguments for prioritising existential risk reduction - an empirical argument (the risk is large) and a philosophical/ethical argument (even small risks are hugely harmful in expectation, because of the implications for future generations). (Of course this is a bit schematic, but I think the distinction may still be useful.)

I guess the fact that EA is a quite philosophical movement may be a reason why there's been a substantial (but by no means exclusive) focus on the... (read more)

Major UN report discusses existential risk and future generations (summary)

I am also surprised that there are few comments here. Given the long and detailed technical quibbles that often append many of the rather esoteric EA posts, it surprises me that where there is an opportunity to shape tangible influence at a global scale there is silence. I feel that there are often gaps in the EA community in the places that would connect research and insight with policy and governance.

Sean is right, there has been accumulating interest in this space. Our paper on the UN and existential risks in 'Risk Analysis' (2020) was awarded 'be... (read more)

A Sequence Against Strong Longtermism

Thanks for collating all of this here in one place. I should have read the later posts before I replied to the first one. Thank you too for your bold challenge. I feel like Kant waking from his 'dogmatic slumber'. A few thoughts:

  1. Humanity is an 'interactive kind' (to use Hacking's term). Thinking about humanity can change humanity, and the human future.
  2. Therefore, Ord's 'Long Reflection' could lead to there being no future humans at all (if that was the course that the Long Reflection concluded). 
  3. This simple example shows that we cannot quantify over fu
... (read more)
6 · Linch · 1y
Hmm, I think 3 does not follow from 2. If I think there's a 10% chance I will quit my job upon further reflection, and I do the reflection, and then quit my job, this does not mean that before the reflection I cannot make any quantified statements about the expected earnings from my job.
A case against strong longtermism

Hi Vaden, 

I'm a bit late to the party here, I know. But I really enjoyed this post. I thought I'd add my two cents' worth. Although I have a long-term perspective on risk and mitigation, and have long-term sympathies, I don't consider myself a strong longtermist. That said, I wouldn't like to see anyone (eg from policy circles) walk away from this debate with the view that it is not worth investing resources in existential risk mitigation. I'm not saying that's what necessarily comes through, but I think there is important middle ground (and this middl... (read more)

Is SARS-CoV-2 a modern Greek Tragedy?

Thanks for this response. I guess the motivation for me writing this yesterday was a comment from a member of NZ's public sector, who said basically that 'the Atomic Scientists article falls afoul of the principle of parsimony'. So I wanted to give the other side, ie that there actually are some reasons to think lab-leak rather than the parsimonious natural explanation. I completely take your point about balance, but the piece is part of a dialogue rather than a comprehensive analysis; that could have been clearer. Cheers.

Is SARS-CoV-2 a modern Greek Tragedy?

Thanks for these. Super interesting credences here, 19% (that health organisations will conclude lab origin) to 83% (that gain of function was in fact contributory). I guess the strikingly wide range suggests genuine uncertainty. Watch this space with interest. 

Are Humans 'Human Compatible'?

Great additional detail, thanks!

Eight high-level uncertainties about global catastrophic and existential risk

Another one to consider, assuming you see it at the same level of analysis as the 8 above, is the spatial trajectory through which the catastrophe unfolds. E.g. a pandemic will spread from an origin (or origins) and, I'm guessing, is statistically likely to impact certain well-connected regions of the world first. Or a lethal command to a robot army will radiate outward from the storage facility for the army. Or nuclear winter will impact certain regions sooner than others. Or ecological collapse due to an unstoppable biological novelty will devour certain kinds ... (read more)

7 · SiebeRozendal · 3y
Hey Matt, good points! This all relates to what Avin et al. [https://www.researchgate.net/publication/323373466_Classifying_Global_Catastrophic_Risks] call the spread mechanism of global catastrophic risk. If you haven't read it already, I'm sure you'll like their paper! For some of these we actually do have an inkling of knowledge though! Nuclear winter is more likely to affect the northern hemisphere, given that practically every nuclear target is located in the northern hemisphere. And it's my impression that in biosecurity geographical containment is a big issue: an extra case in the same location is much less threatening than an extra case in a new country. As a result there are border checks for a hazardous disease at borders where one might expect the disease (e.g. currently the borders with the Democratic Republic of the Congo).