Currently Research Director at Founders Pledge, but posts and comments represent my own opinions, not FP’s, unless otherwise noted.
I worked previously as a data scientist and as a journalist.
My point is precisely that you should not assume any view. My position is that the uncertainties here are significant enough to warrant some attention to nuclear war as a potential extinction risk, rather than to simply bat away these concerns on first principles and questionable empirics.
Where extinction risk is concerned, it is potentially very costly to conclude on little evidence that something is not an extinction risk. We do need to prioritize, so I would not, for instance, propose treating bad zoning laws as an X-risk simply because we can't demonstrate conclusively that they won't lead to extinction. Luckily, there are very few things that could kill very large numbers of people, and nuclear war is one of them.
I don't think my argument says anything about how nuclear risk should be prioritized relative to other X-risks. I think the arguments for deprioritizing it relative to others are strong, and reasonable people can disagree; YMMV.
If you leave 1,000–10,000 humans alive, the longterm future is probably fine
This is a very common claim that I think needs to be defended somewhat more robustly instead of simply assumed. If we have one strength as a community, it's that we don't simply assume things.
My read is that the evidence here is quite limited, that the outside view suggests that losing 99.9999% of a species / having a very small population is a significant extinction risk, and that the uncertainty around the long-term viability of collapse scenarios is enough reason to want to avoid near-extinction events.
Has there been any formal probabilistic risk assessment on AI X-risk? e.g. fault tree analysis or event tree analysis — anything of that sort?
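For concreteness, here is a minimal sketch of the kind of decomposition I have in mind, assuming independent basic events and entirely made-up event names and probabilities; it illustrates the AND/OR-gate structure of a fault tree, not any actual assessment:

```python
# Minimal fault-tree sketch. All event names and probabilities below are
# hypothetical placeholders, and basic events are assumed independent.

# Hypothetical basic event probabilities
p_misaligned_goal = 0.10
p_deception_undetected = 0.20
p_containment_fails = 0.30

# AND gate: this failure pathway requires every basic event to occur
p_pathway_a = p_misaligned_goal * p_deception_undetected * p_containment_fails

# OR gate: the top event occurs if either of two independent pathways occurs
p_pathway_b = 0.01  # hypothetical alternative failure pathway
p_top_event = 1 - (1 - p_pathway_a) * (1 - p_pathway_b)

print(f"Pathway A (AND gate): {p_pathway_a:.4f}")
print(f"Top event (OR gate):  {p_top_event:.4f}")
```

Event tree analysis works in the other direction, branching forward from an initiating event; either approach would force explicit probabilities at each node, which is the part I'm curious whether anyone has attempted.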
I disagree with the valence of the comment, but think it reflects legitimate concerns.
I am not worried that "HLI's institutional agenda corrupts its ability to conduct fair-minded and even-handed assessment." I agree that there are some ways that HLI's pro-SWB-measurement stance can bleed into overly optimistic analytic choices, but we are not simply taking analyses by our research partners on faith and I hope no one else is either. Indeed, the very reason HLI's mistakes are obvious is that they have been transparent and responsive to criticism.
We disagree with HLI about SM's rating — we use HLI's work as a starting point and arrive at an undiscounted rating of 5-6x; subjective discounts place it at 1-2x, which squares with GiveWell's analysis. But our analysis was facilitated significantly by HLI's work, which remains useful despite its flaws.
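(For illustration only, and not FP's actual model: the arithmetic here is just multiplicative, so a handful of hypothetical subjective discount factors can take an undiscounted 5-6x rating into the 1-2x range.)

```python
# Illustrative arithmetic only; the discount names and values are
# hypothetical and not FP's actual figures.
undiscounted_rating = 5.5  # midpoint of the 5-6x undiscounted range

subjective_discounts = {
    "evidence quality": 0.7,   # hypothetical
    "replicability": 0.6,      # hypothetical
    "generalizability": 0.7,   # hypothetical
}

discounted_rating = undiscounted_rating
for factor in subjective_discounts.values():
    discounted_rating *= factor

print(f"Discounted rating: {discounted_rating:.1f}x")  # ~1.6x, within the 1-2x range
```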
I guess I would very slightly adjust my sense of HLI, but I wouldn't really think of this as an "error." I don't significantly adjust my view of GiveWell when they delist a charity based on new information.
I think if the RCT downgrades StrongMinds' work by a big factor, that won't really introduce new information about HLI's methodology/expertise. If you think there are methodological weaknesses that would cause them to overstate StrongMinds' impact, those weaknesses should be visible now, irrespective of the RCT results.
I can also vouch for HLI. Per John Salter's comment, I may also have been a little sus early on (sorry Michael), but HLI's work has been extremely valuable for our own methodology improvements at Founders Pledge. The whole team is great, and I will second John's comment to the effect that Joel's expertise is really rare and that HLI seems to be the right home for it.
Just a note here as the author of that lobbying post you cite: the 2.5% change in chance of success included in the CEA is intended to be illustrative — well, conservative, but it's based on nothing more than a rough sense of effect magnitude from having read all those studies for the lit review. The specific figures included in the CEA are very rough. As Stephen Clare pointed out in the comments, it's also probably not realistic to have modeled that as a normal distribution with a [0, 5] 95% CI.
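To make the distributional point concrete, here is a quick sketch (my own, not part of the original CEA) of what a normal distribution with a [0, 5] 95% CI implies:

```python
# Sketch of the implied distribution; not the original CEA's code.
# A normal with a 95% CI of [0, 5] has mean 2.5 and sd = 2.5 / 1.96.
import numpy as np

rng = np.random.default_rng(0)
mean, sd = 2.5, 2.5 / 1.96

draws = rng.normal(mean, sd, size=1_000_000)

# By construction, ~2.5% of the mass sits below zero (the lobbying backfires),
# and the distribution is symmetric around 2.5 percentage points.
print(f"Share of draws below 0:   {np.mean(draws < 0):.3f}")
print(f"Share of draws above 2.5: {np.mean(draws > 2.5):.3f}")
```

Whether those implications are the right ones is exactly the sort of thing the rough figures in the CEA weren't meant to settle.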
Hey Vasco, you make lots of good points here that are worth considering at length. These are topics we've discussed on and off in a fairly unstructured way on the research team at FP, and I'm afraid I'm not sure what's next when it comes to tackling them. We don't currently have a researcher dedicated to animal welfare, and our recommendations in that space have historically come from partner orgs.
Just as context, the reason for this is that FP has historically separated our recommendations into three "worldviews" (longtermism, current generations, and animal welfare). The idea is that it's a lot easier to shift member grantmaking across causes within a worldview (e.g. from rare diseases to malaria) than across worldviews (e.g. to get people to care much more about chickens). The upshot of this, for better or for worse, is that we end up spending a lot of time prioritizing causes within worldviews, and avoiding the question of how to prioritize across worldviews.
This is also part of the reason we don't have a dedicated animal welfare researcher — we haven't historically moved as much money within that worldview as within our others. But I'm actually not sure which way the causality flows in that case, so your post is a good nudge to think more seriously about this, as well as the ways we might be able to incorporate animal welfare considerations into our GHD calculations, worldview separations notwithstanding.
Hey Matthew, thanks for sharing this. Can you provide some more information (or link to your thoughts elsewhere) on why fervor around UV-C is misplaced? As you know, ASHRAE Standards 185.1 and 185.2 concern testing of UV devices for germicidal irradiation, so I'd be particularly interested to know if this was an area that ASHRAE itself had concluded was unpromising.
I think your arguments do suggest good reasons why nuclear risk might be prioritized lower; at the same time, since we operate on the most effective margin, as you note, it's also possible for there to be significant funding margins in nuclear that are highly effective in expectation.