SimonBeard

SimonBeard's Comments

Climate Change Is, In General, Not An Existential Risk

I think we might disagree about what constitutes a near miss or precipitating event. I certainly think that we should worry about such events even if their probability of leading to a nuclear exchange is pretty low (0.001, let's say), and that it would not be merely a matter of luck to have had 60 such events and no nuclear conflict; it is just that, given the damage such a conflict would do, they still represent an unacceptable threat.
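
To make the arithmetic behind that explicit (using my illustrative 0.001 figure and assuming, purely for the sake of the sketch, that the events are independent):

$$P(\text{no exchange across 60 events}) = (1 - 0.001)^{60} \approx 0.94, \qquad P(\text{at least one exchange}) \approx 0.06$$

So surviving all 60 events is roughly what we should expect rather than a fluke, yet a cumulative chance on the order of 6% is still far too high given the stakes.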

The precise role played by climate change in increasing our vulnerability to such threats depends on the nature of the event. I certainly think that just limiting yourself to a single narrative like migration -> instability -> conflict is far too restrictive.

One of the big issues here is that climate change is perceived as posing an existential threat both to humanity generally (we can argue about the rights and wrongs of that, but the perception is real) and to specific groups and communities (I think that is a less controversial claim). As such I think it is quite a dangerous element in international relations - especially when it is combined with narratives about individual and national responsibility, free riding and so on. Of course you are right to point out that climate change is probably not an existential threat to either the USA or Russia, but it will be a much bigger problem for India and Pakistan and for client states of global superpowers.

Climate Change Is, In General, Not An Existential Risk

We are indeed writing something on this (sorry it is taking so long!). I would dispute your characterization of the principal contribution of climate change to nuclear war, though. Working from Barrett and Baum's recent model of how nuclear wars might occur, I would argue that the greatest threat from climate change is that it makes the conditions under which a precipitating event, such as a regional war, might escalate into a nuclear conflict more likely - i.e. it increases our vulnerability to such threats. This is probably more significant than its direct impact on the number of precipitating events. Since such events are not actually that uncommon (Barrett and Baum find over 60, I seem to remember, whilst a Chatham House survey found around 20), I think that any increase in our vulnerability to these events would not be insignificant.
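
Roughly, the kind of decomposition I have in mind here (my own simplification for illustration, not Barrett and Baum's exact formulation) is:

$$P(\text{nuclear war}) \approx \sum_i P(\text{precipitating event}_i) \times P(\text{escalation} \mid \text{event}_i)$$

My claim is that climate change acts mainly on the second factor, the escalation probabilities, rather than on the frequency of the precipitating events themselves.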

What is certainly correct is that the nature of the threat posed by climate change is very different in many ways to that posed by AI. Indeed the pathways from threat to catastrophe for anything other than AI (including pandemics, nuclear weapons, asteroids and so on) are generally complex and circuitous. On the one hand that does make these threats less of a concern because it offers multiple opportunities for mitigation and prevention. However, on the other hand, it makes them harder to study and assess, especially by the generally small research teams of generalists and philosophers who undertake the majority of x-risk research (I am not patronising anyone here, that is my background as well).

Pandemic Risk: How Large are the Expected Losses? Fan, Jamison, & Summers (2018)

Ha, you're right! Millett and Snyder-Beattie use two separate methods to make two similar claims; however, whilst I have listed both methods here, I have accidentally linked them to the same claim. I'll correct this tomorrow.

EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday)

I am a researcher at the Centre for the Study of Existential Risk. Given the short time frame here, I thought it worth saying that if anyone is interested in applying for this and would like to work on a project that may be assisted by partnering with a more established x-risk org, then I would be happy to hear from you and will make sure to turn around any e-mails in as little time as possible. You can reach me at sjb316@cam.ac.uk.

Pandemic Risk: How Large are the Expected Losses? Fan, Jamison, & Summers (2018)

This paper very much builds upon a more detailed working paper published by the authors in 2016 (https://www.nber.org/papers/w22137.pdf). That seemed to receive a reasonable amount of discussion at the time, whilst I was working at FHI, and it has certainly been one of my go-to resources for pandemic influenza since, but there are definitely some problems with it.

The first thing that you need to know is that when the authors talk about 'extreme events' this may not quite mean what you think. They basically divide all possible influenza pandemics into two classes, moderate and extreme, and their conclusion is based on the fact that they find that 95% of all costs are due to the extreme events. However, in the working paper (and hence, it seems, in this paper) they define an 'extreme' pandemic as one that increases global mortality by more than 0.01%. That is definitely not to be laughed at in terms of impact relative to other pandemics: it means that such a pandemic would kill at least 750,000 people, which is way higher than something like Ebola (though these deaths are likely to fall disproportionately on the elderly and the already sick).

However, the key point is that when you look at their analysis it turns out that the reason why 95% of costs are attributable to pandemics in the extreme category is that less severe pandemics are actually surprisingly rare. The figures they use give a return period of 50 years for a pandemic of moderate severity but 63 years for a pandemic of extreme severity! To be honest I have not gone through the new paper in enough detail to be sure that they are doing the same thing here, but when you see the figures they actually present in the working paper it is hard not to conclude that they carved up their dataset in the wrong way and should have set the threshold for extreme pandemics higher (or else included pandemics caused by less dangerous pathogens as well).
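
For concreteness, the arithmetic behind that threshold (taking a world population of roughly 7.5 billion, which is my rounding rather than the authors' exact figure):

$$0.01\% \times 7.5 \times 10^{9} \approx 750{,}000 \text{ deaths}$$

And since an 'extreme' pandemic on this definition recurs roughly every 63 years against every 50 years for a 'moderate' one, the extreme category is only slightly rarer while being vastly more costly, which is why it ends up carrying around 95% of the expected losses.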

The other key thing is that although they note that a single pandemic (the Spanish Flu of 1918) plays an extremely disproportionate role in their analysis, because it may actually have contributed the majority (or even the vast majority) of all influenza deaths over their study period of the last 250 years, they show no real curiosity about even worse pandemics that might have played a similarly disproportionate role had they extended that period, say to 2,000 years (the Black Death and the Plague of Justinian come to mind). So whilst they conclude that the costs of pandemic influenza have a fat-tailed distribution, they don't really offer any insight into either the real fatness or the real length of this tail. I wouldn't fault them on this in particular, since investigating such historical events would really be pushing the bounds of epidemiology. However, it would be a clear issue if a GCR scholar just took these figures and ran with them, because we probably should be concerned about 1-in-2,000-year or 1-in-200,000-year pandemics, since that is where the greatest risk is likely to come from.

One final quick point is that you are right that it is surprising that not much attention was previously paid to the costs of influenza in terms of lost lives. However, there is an argument to be made that this study is no less shocking in that it only assesses the direct costs of pandemics and does not consider their potential indirect costs or systemic effects. Again, I am not going to fault them for doing this, as these are very hard to assess. However, that is clearly a missing element in this analysis and I hope that in future studies we will be able to address these as well.

What is the Most Helpful Categorical Breakdown of Normative Ethics?

Can I ask why you actually want to categorize ethics like this at all? I know that it is traditional, and it can be very helpful when teaching ethics to set things out like this, as if you don't then students often miss the profound differences between ethical theories. However, a lot of exciting work has never fallen into these categories, which are in any case basically post hoc classifications of the work of Aristotle, Plato and Mill. Hume's work, for instance, is pretty clearly 'none of the above', and a lot of good creative work has been done over the past century or more in trying to break down the barriers between these schools (Sidgwick and Parfit being the two biggest names here, but by no means the only ones). Personally I think that there are a lot of good tools and powerful arguments to be found across the ethical spectrum, and that so long as you appreciate the true breadth of diversity among ethical theories then breaking them down like this is no longer much help for anything really.

From an EA perspective, I think that the one distinction that may be worth paying attention to, and that fits loosely into your 'consequentialism' vs 'deontology and virtue ethics' distinction, is between moral theories that can be incorporated into an 'expected moral value' framework and those that can't. This is an important distinction because it places a limit on how far one can go in making precise judgements about what one ought to do in the face of uncertainty, which is something that may be of concern to all EAs. However, this is a distinction that emerges from practice rather than being baked into moral theories at the level of first principles, and there are other approaches, such as the 'parliamentary model' for handling moral uncertainty, that seek to overcome it.
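
By an 'expected moral value' framework I just mean the familiar schematic structure (my own statement of it, not a quotation from any particular theory):

$$EV(a) = \sum_i p(o_i \mid a) \times v(o_i)$$

where $p(o_i \mid a)$ is the probability of outcome $o_i$ given action $a$ and $v(o_i)$ is its moral value. Theories that do not assign cardinal values to outcomes, or that make permissibility depend on something other than outcomes, cannot straightforwardly be slotted into this formula, and that is the limit on precise judgement under uncertainty I have in mind.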

Current Estimates for Likelihood of X-Risk?

Hey Rhys

Thanks for prompting me on this. I was hoping to find time for a fuller reply to you, but this will have to do; you only asked for the texture after all. My concerns are somewhat nebulous, so please don't take this as any cast-iron reason not to seek out estimates for the probability of different existential risks. However, I think they are important.

The first relates to the degree of uncertainty that surrounds any estimate of this kind and how it should be handled. There are actually several sources of this.

The first of these relates to the threshold for human extinction. We actually don't have very good models of how the human race might go extinct. Broadly speaking, human beings are highly adaptable and we can of course survive across an extremely wide range of habitats, at least with sufficient technology and planning. So, roughly, for human extinction to occur a change must either be extremely profound (such as the destruction of the Earth, our sun or the entire universe), very fast (such as a nuclear winter), something that can adapt to us (such as AGI or aliens) or something that we choose not to adapt to (such as climate change). However, personally, I have a hard time even thinking about just what the limits of survivability might be. Now, it is relatively easy to cover this with a few simplifying assumptions - for instance, that 10 degrees of climate change either way would clearly represent an existential threat. However, these are only assumptions. Then there is the possibility that we will actually be more vulnerable to certain risks than it appears, for instance that certain environmental changes might cause an irrevocable collapse in human civilization (or in the human microbiome, if you are that way inclined). The Global Challenges Foundation used the concepts of 'infinite threshold' and 'infinite impact' to capture this kind of uncertainty, and I think they are useful concepts. However, they don't necessarily speak to our concern to know about the probability of human extinction and x-risk, rather than that of potential x-risk triggers.

The other obvious source of uncertainty is uncertainty about what will happen. This is more mundane in many ways; however, when we are estimating the probability of an unprecedented event like this I think it is easy to understate the uncertainty inherent in such estimates, because there is simply so little data to contradict our main assumptions, leading to overconfidence. The real issue with both of these, however, is not that uncertainty means we should not put numerical values on the likelihood of anything, but that we are just not very good at dealing with numerical figures that are highly uncertain, especially where these are stated and debated in a public forum. Even if uncertainty ranges are presented, and they accurately reflect the degree of certainty the assessor can justifiably claim to have, they quickly get cut out, with commentators preferring to focus on one simple figure, be it the mean, upper or lower bound, to the exclusion of all else. This happens, and we should not ignore the pitfalls it creates.

The second concern I have is about context. In your post you mention the famous figure from the Stern Review, and this is a great example of what I mean. Stern came up with that figure for one reason, and one alone. He wanted to argue for the highest discount rate that he believed was ethically justified, in order to give maximum credence to his conclusions (or, if you are more cynical, then perhaps 'he wanted to make it look like he was arguing for...'). However, since he also thought that most economic arguments for discounting were not justified, he was left with the conclusion that the only reason to prefer wellbeing today to wellbeing tomorrow was that there might be no tomorrow. His 0.1% chance of human extinction per year (note that this is supposedly the 'background' rate, by the way; it is definitely not the probability of a climate-induced extinction) was the highest figure he could propose that would not be taken as overly alarmist. If you think that sounds a bit off, then reflect on the fact that the mortality rate in the UK at present is around 0.8%, so Stern was saying that one could expect more than 10% of human mortality in the near future to result from human extinction. I think that is not at all unreasonable, but I can see why he didn't want to put the background extinction risk any higher. Anyway, the key point here is that none of these considerations was really about producing any kind of estimate of the likelihood of human extinction; it was just a guess that he felt would be reasonably acceptable from the point of view of trying to push up the time discount rate a bit. However, of course, once it was out there it got used, and continues to get used, as if it were something quite different.
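
To spell out the comparison I am gesturing at (the 0.8% figure is a rough crude mortality rate for the UK, as above):

$$\frac{0.1\% \text{ per year (extinction)}}{0.8\% \text{ per year (all-cause mortality)}} = 12.5\%$$

i.e. taken literally, Stern's background rate implies that more than one in ten near-term human deaths would come from extinction-level events.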

The third concern I have is that I think it can be at least somewhat problematic to break existential risks down by threat, which people generally need to do if they are to assign probability estimates to them. To be fair, you are here interested in the probability of human extinction as a whole, which does not face this particular problem. However, many of the estimates that I have come across relate to specified threats. The issue here is that much of the damage from any particular threat comes from its systemic and cascading effects. For instance, when considering the existential threat from natural pandemics, I am quite unconcerned that a naturally occurring (or even most man-made) pathogen might literally wipe out all of humanity; the selection pressures against that would be huge. I am somewhat more concerned that such a pandemic might cause a general breakdown in global order, leading to massive global wars or the collapse of the global food supply. However, I am mostly concerned that a pandemic might cause a social collapse in a single state that possessed nuclear weapons, leading to those weapons becoming insecure. If I simply count this towards the probability of human extinction via either pandemic or nuclear war, that seems to me to be misleading; but if it got counted in both, this could lead to double counting later on. Of course, with great care and attention this sort of problem can be dealt with. However, on the whole, when people make assessments of the probability of existential risks they tend to pool together all the available information, much of which has been produced without any coordination, making such double counting, or zero counting, not unlikely.

Please let me know if you would like me to try and write more about any of these issues (although, to be honest, I am currently quite stretched, so this may have to wait a while). You may also be interested in a piece I wrote with Peter Hurford and Catheryn Mercow (I won't lie, it was mostly they who wrote it) on how different EA organizations account for uncertainty, which has had quite a bit of impact on my thinking: http://effective-altruism.com/ea/193/how_do_ea_orgs_account_for_uncertainty_in_their/

Also, if you haven't already seen it, you might be interested in this piece by Eliezer Yudkowsky: https://www.lesswrong.com/posts/AJ9dX59QXokZb35fk/when-not-to-use-probabilities

PS: Obviously these concerns have not yet led me to give up on working with these kinds of estimates, and indeed I would like them to be made better in the future. However, they still trouble me.

Current Estimates for Likelihood of X-Risk?

We are indeed keen to get comments and feedback. Also note that the final third or so of the paper is an extensive catalogue of assessments of the probability of different risks, in which we have tried to incorporate all the sources we could find (though we would be very happy to hear if others know of more of these).

I will say, however, that the overwhelming sense I got from doing this study is that it is sometimes best not to put this kind of number on risks.