Simon_Beard

Climate Change Is, In General, Not An Existential Risk

Of course it will be smaller; however, that does not mean that tackling climate change will not make a sizeable contribution towards reducing the risk of nuclear winter. The question for me is whether nuclear winters that relate to climate change are more or less tractable than nuclear winter as a whole. My view would be that trying to reduce the risk of nuclear winter by tackling climate change and its consequences may be a more tractable problem than doing so by trying to get nuclear weapons states to disarm or otherwise making nuclear war less likely in general, but that efforts to make nuclear winter more survivable are probably more efficient than either of these policies from a purely x-risk reduction perspective.

However, I also do not think that nuclear winter is the only way in which climate change may lead to an existential threat (at least reading existential threat to include the prospect of an unrecoverable civilisational collapse), as there are some interesting feedback loops between environmental and social collapse that have the potential to cause non-linear and self-perpetuating shifts in the structure of global civilisation. Admittedly these are hard to study, but from a value maximisation perspective I would say that in the face of uncertainty we will do better if we assume that global civilisation is relatively fragile to such changes than if we assume that it is more robust to them.

Climate Change Is, In General, Not An Existential Risk

I think we might disagree about what constitutes a near miss or precipitating event. I certainly think that we should worry about such events even if their probability of leading to a nuclear exchange is pretty low (0.001, let's say), and that it would not be merely a matter of luck to have had 60 such events and no nuclear conflict; it is just that, given the damage such a conflict would do, they still represent an unacceptable threat.

The precise role played by climate change in increasing our vulnerability to such threats depends on the nature of the event. I certainly think that limiting yourself to a single narrative like migration -> instability -> conflict is far too restrictive.

One of the big issues here is that climate change is perceived as posing an existential threat both to humanity generally (we can argue about the rights and wrongs of that, but the perception is real) and to specific groups and communities (I think that is a less controversial claim). As such I think it is quite a dangerous element in international relations - especially when it is combined with narratives about individual and national responsibility, free riding and so on. Of course you are right to point out that climate change is probably not an existential threat to either the USA or Russia, but it will be a much bigger problem for India and Pakistan and for client states of global superpowers.

Climate Change Is, In General, Not An Existential Risk

We are indeed writing something on this (sorry it is taking so long!). I would dispute your characterization of the principal contribution of climate change to nuclear war, though. Working from Barrett and Baum's recent model of how nuclear wars might occur, I would argue that the greatest threat from climate change is that it creates conditions under which a precipitating event, such as a regional war, is more likely to escalate into a nuclear conflict - i.e. it increases our vulnerability to such threats. This is probably more significant than its direct impact on the number of precipitating events. Since such events are not actually that uncommon (Barrett and Baum find over 60, I seem to remember, whilst a Chatham House survey found around 20), I think that any increase in our vulnerability to these events would not be insignificant.

What is certainly correct is that the nature of the threat posed by climate change is very different in many ways to that posed by AI. Indeed, the pathways from threat to catastrophe for anything other than AI (including pandemics, nuclear weapons, asteroids and so on) are generally complex and circuitous. On the one hand that does make these threats less of a concern, because it offers multiple opportunities for mitigation and prevention. On the other hand, it makes them harder to study and assess, especially by the generally small research teams of generalists and philosophers who undertake the majority of x-risk research (I am not patronising anyone here; that is my background as well).

Pandemic Risk: How Large are the Expected Losses? Fan, Jamison, & Summers (2018)

I hope that this is now fixed at last, although I stress that this is very much a work in progress.

Pandemic Risk: How Large are the Expected Losses? Fan, Jamison, & Summers (2018)

Ha, you're right! Millett and Snyder-Beattie use two separate methods to make two similar claims; however, whilst I have listed both methods here, I have accidentally linked them to the same claim. I'll correct this tomorrow.

Pandemic Risk: How Large are the Expected Losses? Fan, Jamison, & Summers (2018)

Below I paste a really brief summary of some papers you might find interesting. This is taken from a draft literature review into methods for quantifying existential risk that I have been working on with others, hence its particular format. I hope you find it useful and would be grateful to hear if anyone knows of any good papers we have missed.

My personal takeaway from this exercise with regard to the risk from pandemics is that many GCR scholars may be overstating the risks from a major pandemic at the 'Spanish flu' level (and one thing I should have mentioned in the preceding comment is that when one takes account of the systemic impacts of such a pandemic, its estimated effects may actually decrease; for instance, taking account of the social and economic impacts of the Black Death, the overall impact of that pandemic may have been net positive, although that is controversial). On the other hand, many non-GCR scholars of pandemics may be understating the likelihood of a pandemic that would be considerably worse than the Spanish flu (both from naturally occurring and engineered pathogens). These two things do not necessarily cancel each other out!

1. Source: Troy Day, Jean-Baptiste Andre & Andrew Park, 2006, “The Evolutionary Emergence of Pandemic Influenza”, Proceedings of the Royal Society – Biological Sciences, 273, pp. 2945-2953.

Probability: The probability of a pandemic occurring in any given year is 4%. A conservative estimate of the 95% support interval for the yearly pandemic probability is 0.7-7.6%.

Methodology: This probability is derived from combining 'anecdotal' evidence about the number of influenza pandemics over the past 250 years with more recent data about the expected interval between pandemics emerging.[1] Evidence was combined using a well-defined Bayesian formula set out in an appendix to the paper.
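
The paper's actual formula is set out in its appendix, but the general shape of such an update can be illustrated with a minimal Gamma-Poisson sketch (all inputs below are my own placeholder assumptions, not the authors' figures):

```python
# A minimal Gamma-Poisson sketch of combining a long 'anecdotal' pandemic
# record with more recent interval data. Illustrative only: these counts are
# placeholder assumptions, not the inputs used by Day et al.

prior_events, prior_years = 10, 250   # e.g. ~10 pandemics in 250 years
recent_events, recent_years = 2, 60   # e.g. 2 pandemics in the last 60 years

# Conjugate update for a Poisson rate: Gamma(a, b) with a = events, b = years.
a = prior_events + recent_events
b = prior_years + recent_years
posterior_mean_rate = a / b           # pandemics per year

print(f"Posterior mean yearly pandemic probability: {posterior_mean_rate:.1%}")
# ~3.9%, in the vicinity of the paper's 4% point estimate
```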

2. Source: Madhav, N. (2013). Modelling a modern-day Spanish flu pandemic. AIR Worldwide, February 21, 2013.

Probability: There is a 0.5-1% annual probability of a ‘modern day Spanish Flu’ event, with similar characteristics to the 1918 pandemic, including considerable excess deaths amongst young adults. Such a pandemic would likely cause between 21 and 33 million deaths worldwide.

Methodology: The AIR Pandemic Flu Model, which combines demographic, epidemiological and technological modelling to produce a complete model for pandemic influenza. This model has been extensively peer reviewed.

3. Source: Fan, V. Y., Jamison, D. T., & Summers, L. H. (2016). The inclusive cost of pandemic influenza risk. National Bureau of Economic Research. [This has now been partially published as Fan, V. Y., Jamison, D. T., & Summers, L. H. (2018). Pandemic risk: how large are the expected losses? Bulletin of the World Health Organization, 96(2), 129.]

Prediction: The annual probability of a severe influenza pandemic (one that increases global mortality by at least 0.1%) is 1.6%, and the average impact of such pandemics is a global mortality increase of 0.58% (±40 million fatalities). Severe flu pandemics represent 95% of the costs associated with all pandemic influenza.

Methodology: The historical record was used to estimate the total frequency and severity of all influenza pandemics and to generate likely age-specific death rates as a result of a global pandemic. The U.S.’ historical age distributions, being the most complete, were used as the template for global age distributions. The authors then model the “expected deaths from pandemic influenza risks” with a highly fat-tailed distribution of mortality, meaning that the vast majority of deaths occur in the most severe pandemics.
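
As a quick sanity check on how these headline figures fit together (my own arithmetic, taking a round 7 billion world population as an assumption):

```python
# Back-of-the-envelope check of the figures above (my arithmetic, not the
# authors'). The world population is a round assumption.
world_pop = 7e9

p_severe = 0.016             # annual probability of a severe pandemic
mortality_increase = 0.0058  # average global mortality increase per such pandemic

deaths_per_event = mortality_increase * world_pop  # ~40 million, matching the text
expected_annual_deaths = p_severe * deaths_per_event

print(f"Deaths per severe pandemic: {deaths_per_event / 1e6:.0f} million")
print(f"Expected severe-pandemic deaths per year: {expected_annual_deaths / 1e3:.0f} thousand")
# ~650 thousand expected deaths per year, before any economic weighting
```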

4. Source: Bagus, G. (2008). Pandemic Risk Modelling. Chicago Actuarial Association.

Estimate: A pandemic on the scale of the Spanish Flu, causing a ±27% increase in mortality, occurs around once every 420 years. More severe pandemics causing a ±42% increase in global mortality may have a return period of 2,700 years.

Methodology: An ‘actuarial model’ is constructed in the form of a severity curve based on historical data for the past 420 years of influenza outbreaks, which was found to approximate an exponential curve. This was then extrapolated to estimate the probability and severity of more extreme pandemics. The model takes account of shifting demographic features over time but assumes that pandemics have equal severity across all countries.
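
The report's actual fitted curve is not reproduced here, but the flavour of such an exponential extrapolation can be sketched by calibrating to the two quoted figures (the 60% scenario below is purely illustrative):

```python
import math

# Sketch of an exponential severity-curve extrapolation, calibrated to the two
# return periods quoted above. This is not the model's actual fit.
m1, T1 = 0.27, 420    # ±27% mortality increase: ~420-year return period
m2, T2 = 0.42, 2700   # ±42% mortality increase: ~2,700-year return period

k = math.log(T2 / T1) / (m2 - m1)   # ~12.4 per unit of excess mortality

def return_period(m):
    """Return period (years) for a pandemic causing a mortality increase of m."""
    return T1 * math.exp(k * (m - m1))

# Purely illustrative extrapolation to a 60% mortality increase:
print(f"{return_period(0.60):,.0f} years")   # ~25,000 years under this assumption
```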

5. Source: Klotz, L. C., & Sylvester, E. J. (2014). The Consequences of a Lab Escape of a Potential Pandemic Pathogen. Frontiers in Public Health, 2. 

Prediction: The likelihood of a pandemic, through an undetected lab-acquired infection, “could be as high as 27%” over a 10-year research period.

Methodology: The authors take the annual probability per lab of an escape of a virus through an undetected lab-acquired infection (LAI) to be 2.4%. This statistic is taken from the Department of Homeland Security’s risk assessment for a planned National Bio- and Agro-defence Facility in Manhattan, Kansas. They then assume that a research enterprise will comprise 10 labs working for 10 years to make a virus. So, across this period, the probability of no escape through an LAI will be 0.088. Therefore, the probability of at least one escape from the enterprise through an LAI will be 91%. This is multiplied by the assumed worst-case likelihood of a single LAI leading to a pandemic, 30%, to give the overall prediction.
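
The arithmetic here is simple enough to reproduce directly (a sketch following the steps described above):

```python
# Reproducing the arithmetic described above.
p_escape_per_lab_year = 0.024   # DHS-derived annual escape probability per lab
labs, years = 10, 10            # the assumed research enterprise

p_no_escape = (1 - p_escape_per_lab_year) ** (labs * years)  # ~0.088
p_at_least_one = 1 - p_no_escape                             # ~91%

p_pandemic_given_escape = 0.30  # the authors' worst-case assumption
p_pandemic = p_at_least_one * p_pandemic_given_escape        # ~27%

print(f"P(no escape over 100 lab-years): {p_no_escape:.3f}")
print(f"P(at least one escape): {p_at_least_one:.0%}")
print(f"P(pandemic over the 10-year period): {p_pandemic:.0%}")
```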

6. Source: Marc Lipsitch & Thomas V. Inglesby, “Moratorium on research intended to create novel potential pandemic pathogens”, MBio, 5, 2014, pp. 1-6.

Probability: Each laboratory-year of Gain of Function research into virulent, transmissible influenza virus might have a 0.01% to 0.1% chance of triggering a global infection via an accidental laboratory escape. Such a pandemic could be expected to kill between 2 million and 1.4 billion people.

Methodology: The risk of a global pandemic resulting from a laboratory escape of influenza is determined by multiplying two different probabilities. The first is the risk of laboratory incidents and accidental infections in biosafety level 3 laboratories in which such research may be conducted (estimated to be between 0.2%, on the basis that 4 infections have been observed over <2,044 laboratory-years of observation, and 1%, using data from the National Institute of Allergy and Infectious Diseases). The second is the probability that an accidental infection of a lab worker could lead to a laboratory escape spreading widely around the world (estimated to be between 5% and 60% according to a range of simulation models, with the authors’ own model indicating a 10-20% risk).

Noting that “readily transmissible influenza, once widespread, has never before been controlled before it spreads globally,” the expected severity of such a pandemic is determined by multiplying the historical infection rate of influenza pandemics (24-38%) by possible values for the case fatality rate of a novel, virulent influenza strain (1-60%). However, it is unlikely that these two figures vary independently, and so simple multiplication is likely to be inappropriate.
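
One way to recover the quoted 0.01% to 0.1% bounds is to pair the lower escape estimate with the lower spread estimate, and the higher escape estimate with the authors' own ~10% spread figure; this pairing is my reconstruction, not something spelled out in the summary above:

```python
# One reading of how the per-laboratory-year bounds combine. The pairing of
# inputs is my reconstruction and an assumption, not stated explicitly above.
low = 0.002 * 0.05   # 0.2% escape risk x 5% spread probability        -> 0.01%
high = 0.01 * 0.10   # 1% escape risk x ~10% spread (authors' own model) -> 0.1%

print(f"Risk per laboratory-year: {low:.2%} to {high:.2%}")
```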

7. Source: Ron Fouchier, 2015, “Studies on Influenza Virus Transmission between Ferrets: the Public Health Risks Revisited”, MBio, Vol. 6, No. 1, pp. 1-4.

Probability: Each laboratory-year of Gain of Function research into virulent, transmissible influenza virus might have a 2.5 x 10^-13 to 3 x 10^-12 chance of triggering a global infection via an accidental laboratory escape.

Methodology: This paper is a direct response to Lipsitch and Inglesby (2014), arguing that their estimates “were based on historical data and did not take into account the numerous risk reduction measures that are in place in the laboratories where the research is conducted.”

8. Source: Piers Millett and Andrew Snyder-Beattie, 2017, “Existential Risk and Cost-Effective Biosecurity”, Health Security, Vol. 15, No. 4, pp. 1-11.

Probability: The annual probability of an existential catastrophe arising from a global pandemic is between 1.6 x 10^-8 and 8 x 10^-7.

Methodology: The authors construct a toy model to assess this risk, citing a Gryphon Scientific report (2015) as suggesting that the annual probability of a global pandemic arising from an accident with research into Potentially Pandemic Pathogens (PPP) in the US is 0.002% to 0.1%.[2] Next, they note that: “The Gryphon report also concluded that risks of deliberate misuse were about as serious as the risks of an accidental outbreak, suggesting a twofold increase in risk. Assuming that 25% of relevant research is done in the US as opposed to elsewhere in the world, gives us a further fourfold increase in risk. In total, this eightfold increase in risk gives us a 0.016% to 0.8% chance of a pandemic in the future each year.”

Next, the authors directly estimate the probability that a pandemic will cause an existential catastrophe and combine this with the previous probability: “For the purposes of this model, we assume that for any global pandemic arising from this kind of research, each has only a one in ten thousand chance of causing an existential risk.”[3]
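
Chaining these steps together reproduces the headline range (a sketch of the toy model as quoted above):

```python
# Following the toy model's steps as quoted above.
p_us_accident_low, p_us_accident_high = 0.00002, 0.001  # 0.002% to 0.1% annual

multiplier = 2 * 4   # x2 for deliberate misuse, x4 since the US is ~25% of research

p_pandemic_low = p_us_accident_low * multiplier    # 0.016%
p_pandemic_high = p_us_accident_high * multiplier  # 0.8%

p_xrisk_given_pandemic = 1e-4   # the one-in-ten-thousand assumption

print(f"Annual pandemic probability: {p_pandemic_low:.3%} to {p_pandemic_high:.1%}")
print(f"Annual existential risk: {p_pandemic_low * p_xrisk_given_pandemic:.1e}"
      f" to {p_pandemic_high * p_xrisk_given_pandemic:.1e}")   # 1.6e-08 to 8.0e-07
```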

9. Source: Piers Millett and Andrew Snyder-Beattie, 2017, “Existential Risk and Cost-Effective Biosecurity”, Health Security, Vol. 15, No. 4, pp. 1-11.

Probability: The annual probability of an existential catastrophe resulting from biowarfare or bioterrorism is 0.0000019 (or 1.9 x 10^-6).

Methodology: The authors assume that the casualty numbers from terrorism and warfare follow a power law distribution. Previous studies have determined the power law exponent for terrorism using chemical or biological weapons to be -0.5. This means that for every order of magnitude increase in casualties from a terrorist attack, the probability of that attack occurring is multiplied by a factor of 10^-0.5, which is approximately 1/3. Assuming one attack per year, the annual probability that an attack kills more than 5 billion people will be (5 billion)^-0.5, which is 0.000014 or 1.4 x 10^-5. Historical data gives the power law exponent for warfare as -0.41, and the authors assume one new war every other year and that bioweapons are used in 10% of wars. Therefore, the annual probability that a war involving biological weapons kills more than 5 billion people is 0.5 x 0.1 x (5 billion)^-0.41, which is 0.000005 or 5 x 10^-6. The authors assume that of all wars or terrorist attacks that kill more than 5 billion people, 10% would lead to extinction. Therefore, the authors reach an annual probability of existential catastrophe from biowarfare or bioterrorism of 1.9 x 10^-6.
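
The power-law arithmetic can be reproduced in a few lines (following the steps above):

```python
# Reproducing the power-law arithmetic described above.
N = 5e9   # casualty threshold

# Bioterrorism: one attack per year, power law exponent -0.5
p_terror = N ** -0.5                 # ~1.4e-5

# Biowarfare: a new war every other year, bioweapons in 10% of wars, exponent -0.41
p_war = 0.5 * 0.1 * N ** -0.41       # ~5e-6

# 10% of catastrophes above the threshold are assumed to cause extinction
p_extinction = 0.1 * (p_terror + p_war)

print(f"P(attack killing > 5 billion): {p_terror:.1e}")
print(f"P(war killing > 5 billion): {p_war:.1e}")
print(f"Annual existential risk: {p_extinction:.1e}")   # ~1.9e-6
```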

10. Source: Sandberg, A. & Bostrom, N. (2008): “Global Catastrophic Risks Survey”, Technical Report #2008-1, Future of Humanity Institute, Oxford University: pp. 1-5.

Prediction: 2% chance of human extinction being caused by an engineered pandemic and 0.05% chance of it being caused by a natural pandemic.

Methodology: Median response of an informal survey of 13 participants at the 2008 Oxford Conference on Global Catastrophic Risk. Participants were surveyed on their estimates of the probability of human extinction, the death of more than 1 billion people, and the death of more than 1 million people from a list of 8 specific threats. However, this list was not taken to be exhaustive.

11. Source: Dennis Pamlin & Stuart Armstrong, 2015, Global Challenges: 12 Risks that Threaten Human Civilisation, Global Challenges Foundation.

Probability: “Based on available assessments the best current estimate of a global pandemic in the next 100 years is: 5% for infinite threshold [and] 0.0001% for infinite impact” (p. 150).[4]

‘Infinite impact’ refers to the state where civilization collapses and does not recover, or a situation where all human life ends. ‘Infinite threshold’ refers to a scenario that has the potential to lead to such a collapse, dependent upon other factors (Pamlin & Armstrong, 2015: 11).

Methodology: This is one of a series of risk-specific predictions that resulted from a large, informal, structured expert elicitation exercise conducted by the Global Challenges Foundation. This constituted an “expert review” of the relevant literature for each risk, following which “[two] workshops were arranged where the selection of challenges was discussed, one with risk experts in Oxford at the Future of Humanity Institute and the other in London with experts from the financial sector.” Based on all the evidence that was gathered, probability estimates were produced for each risk (p. 12).

[1] The authors back up this claim with evidence from the following sources: Robert G. Webster, 1998, “Influenza: An Emerging Disease”, Emerging Infectious Diseases, Vol. 4, No. 3, pp. 436-441, p. 437, and Ann H. Reid, Jeffery Taubenberger & Thomas G. Fanning, 2004, “Evidence of an Absence: the Genetic Origins of the 1918 Pandemic Influenza Virus”, Nature Reviews Microbiology, 2, pp. 909-914.

[2] There is no explicit reference to these particular probabilities in the original report.

[3] The authors state that this figure is a “conservative guess”. It is not precisely clear whether the authors mean that one in ten thousand pandemics are predicted to cause extinction, or whether only one in ten thousand pandemics carries any risk of extinction. The latter reading is implausible because surely there is at least a risk, however small, that any global pandemic would cause extinction.

[4] Stated sources include: Bagus, Ghalid (2008): Pandemic Risk Modeling, http://www.chicagoactuarialassociation.org/CAA_PandemicRiskModelingBagus_Jun08.pdf; Broekhoven, Henk van & Hellman, Anni (2006): Actuarial reflections on pandemic risk and its consequences, http://actuary.eu/documents/pandemics_web.pdf; Brockmann, Dirk & Helbing, Dirk (2013): The Hidden Geometry of Complex, Network-Driven Contagion Phenomena, Science, Vol. 342, http://rocs.hu-berlin.de/resources/HiddenGeometryPaper.pdf; Bruine de Bruin, W., Fischhoff, B., Brilliant, L. & Caruso, D. (2006): Expert judgments of pandemic influenza risks, Global Public Health, 1(2): 178-193, http://www.cmu.edu/dietrich/sds/docs/fischhoff/AF-GPH.pdf; Khan, K., Sears, J., Hu, V. W., Brownstein, J. S., Hay, S., Kossowsky, D., Eckhardt, R., Chim, T., Berry, I., Bogoch, I. & Cetron, M.: Potential for the International Spread of Middle East Respiratory Syndrome in Association with Mass Gatherings in Saudi Arabia, PLOS Currents Outbreaks, 2013 Jul 17, http://currents.plos.org/outbreaks/article/assessing-riskfor-the-international-spread-of-middle-east-respiratorysyndrome-in-association-with-mass-gatherings-insaudi-arabia/; Murray, Christopher J. L., et al.: Estimation of potential global pandemic influenza mortality on the basis of vital registry data from the 1918-20 pandemic: a quantitative analysis, The Lancet, 368(9554) (2007): 2211-2218, http://www.thelancet.com/journals/lancet/article/PIIS0140-6736(06)69895-4/fulltext; and Sandman, Peter M. (2007): Talking about a flu pandemic worst-case scenario, http://www.cidrap.umn.edu/newsperspective/2007/03/talking-about-flu-pandemic-worstcase-scenario

EA Funds: Long-Term Future fund is open to applications until November 24th (this Saturday)

I am a researcher at the Centre for the Study of Existential Risk. Given the short time frame here, I thought it worth saying that if anyone is interested in applying for this and would like to work on a project that may be assisted by partnering with a more established x-risk org, then I would be happy to hear from you and will make sure to turn around any e-mails as quickly as possible. You can reach me at sjb316@cam.ac.uk.

Pandemic Risk: How Large are the Expected Losses? Fan, Jamison, & Summers (2018)

This paper very much builds upon a more detailed working paper published by the authors in 2016 (https://www.nber.org/papers/w22137.pdf). That seemed to receive a reasonable amount of discussion at the time, whilst I was working at FHI, and it has certainly been one of my go-to resources for pandemic influenza since, but there are definitely some problems with it.

The first thing that you need to know is that when the authors talk about ‘extreme events’ this may not quite mean what you think. They basically divide all possible influenza pandemics into two classes, moderate and extreme, and their conclusion is based on the fact that they find that 95% of all costs are due to the extreme events. However, in the working paper (and hence, it seems, in this paper) they define an ‘extreme’ pandemic as one that increases global mortality by more than 0.01%. That is definitely not to be laughed at in terms of impact relative to other pandemics; it means that such a pandemic would kill at least 750,000 people, which is way higher than something like Ebola (though these deaths are likely to fall disproportionately on the elderly and the already sick). However, the key point is that when you look at their analysis, it turns out that the reason why 95% of costs are attributable to pandemics that fall into the extreme category is that less severe pandemics are actually surprisingly rare. The figures they use give a return period of 50 years for a pandemic of moderate severity but 63 years for a pandemic of extreme severity! To be honest, I have not gone through the new paper in enough detail to be sure that they are doing the same thing here, but when you see the figures they actually present in the working paper it is hard not to conclude that they carved up their dataset in the wrong way and should have set the threshold for extreme pandemics higher (or else included pandemics caused by less dangerous pathogens as well).

The other key thing is that although they note that a single pandemic (the Spanish Flu of 1918) plays an extremely disproportionate role in their analysis, due to the fact that it may actually have contributed the majority (or even the vast majority) of all influenza deaths over their study period of the last 250 years, they show no real curiosity about the potential for even worse pandemics that might have played a similarly disproportionate role had they extended this period, say to 2,000 years (the Black Death and the Plague of Justinian come to mind). So whilst they conclude that the costs of pandemic influenza have a fat-tailed distribution, they don’t really offer any insight into either the real fatness or the real length of this tail. I wouldn’t fault them on this in particular, since investigating such historical events would really be pushing the bounds of epidemiology. However, it would be a clear issue if a GCR scholar just took these figures and ran with them, because we probably should be concerned about 1-in-2,000-year or 1-in-200,000-year pandemics, since that is where the greatest risk is likely to come from.

One final quick point is that you are right that it is surprising that not much attention was previously paid to the costs of influenza in terms of lost lives. However, there is an argument to be made that this study is no less shocking in that it only assesses the direct costs of pandemics and does not consider their potential indirect costs or systemic effects. Again, I am not going to fault them for this, as these are very hard to assess. However, that is clearly a missing element in this analysis, and I hope that in future studies we will be able to address these as well.

What is the Most Helpful Categorical Breakdown of Normative Ethics?

Can I ask why you actually want to categorize ethics like this at all? I know that it is traditional, and it can be very helpful when teaching ethics to set things out like this, since if you don’t then students often miss the profound differences between different ethical theories. However, a lot of exciting work has never fallen into these categories, which in any case are basically post hoc classifications of the work of Aristotle, Plato and Mill. Hume’s work, for instance, is pretty clearly ‘none of the above’, and a lot of good creative work has been done over the past century or more in trying to break down the barriers between these schools (Sidgwick and Parfit being the two biggest names here, but by no means the only ones). Personally, I think that there are a lot of good tools and powerful arguments to be found across the ethical spectrum, and that so long as you appreciate the true breadth of diversity among ethical theories, then breaking them down like this is no longer much help for anything really.

From an EA perspective, I think that the one distinction that may be worth paying attention to, and that can fit into your ‘consequentialism’ vs ‘deontology and virtue ethics’ distinction, though it is not a perfect fit, is between moral theories that can be incorporated into an ‘expected moral value’ framework and those that cannot. This is an important distinction because it places a limit on how far one can go in making precise judgements about what one ought to do in the face of uncertainty, which is something that may be of concern to all EAs. However, this is a distinction that emerges from practice rather than being baked into moral theories at the level of first principles, and there are other approaches, such as the ‘parliamentary model’ for handling moral uncertainty, that seek to overcome it.

Current Estimates for Likelihood of X-Risk?

Hey Rhys

Thanks for prompting me on this. I was hoping to find time for a fuller reply to you, but this will have to do; you only asked for the texture after all. My concerns are somewhat nebulous, so please don't take this as any cast-iron reason not to seek out estimates for the probability of different existential risks. However, I think they are important.

The first relates to the degree of uncertainty that surrounds any estimate of this kind and how it should be handled. There are actually several sources of this uncertainty.

The first of these relates to the threshold for human extinction. We actually don't have very good models of how the human race might go extinct. Broadly speaking, human beings are highly adaptable, and we can of course survive across an extremely wide range of habitats, at least with sufficient technology and planning. So, roughly, for human extinction to occur a change must either be extremely profound (such as the destruction of the earth, our sun or the entire universe), very fast (such as a nuclear winter), something that can adapt to us (such as AGI or aliens), or something that we choose not to adapt to (such as climate change). However, personally, I have a hard time even thinking about just what the limits of survivability might be. Now, it is relatively easy to cover this with a few simplifying assumptions, for instance that 10 degrees of climate change either way would clearly represent an existential threat. However, these are only assumptions. Then there is the possibility that we will actually be more vulnerable to certain risks than it appears, for instance that certain environmental changes might cause an irrevocable collapse in human civilization (or in the human microbiome, if you are that way inclined). The Global Challenges Foundation used the concepts of 'infinite threshold' and 'infinite impact' to capture this kind of uncertainty, and I think they are useful concepts. However, they don't necessarily speak to our concern to know the probability of human extinction and x-risk, rather than that of potential x-risk triggers.

The other obvious source of uncertainty is uncertainty about what will happen. This is more mundane in many ways; however, when we are estimating the probability of an unprecedented event like this, I think it is easy to understate the uncertainty inherent in such estimates, because there is simply so little data to contradict our main assumptions, leading to overconfidence. The real issue with both of these, however, is not that uncertainty means we should not put numerical values on the likelihood of anything, but that we are just incapable of dealing very well with numerical figures that are highly uncertain, especially where these are stated and debated in a public forum. Even if uncertainty ranges are presented, and they accurately reflect the degree of certainty the assessor can justifiably claim to have, they quickly get cut out, with commentators preferring to focus on one simple figure, be it the mean or the upper or lower bound, to the exclusion of all else. This happens, and we should not ignore the pitfalls it creates.

The second concern I have is about context. In your post you mention the famous figure from the Stern Review, and this is a great example of what I mean. Stern came up with that figure for one reason, and one alone. He wanted to argue for the highest possible discount rate that he believed was ethically justified, in order to give maximum credence to his conclusions (or, if you are more cynical, then perhaps 'he wanted to make it look like he was arguing for...'). However, since he also thought that most economic arguments for discounting were not justified, he was left with the conclusion that the only reason to prefer wellbeing today to wellbeing tomorrow was that there might be no tomorrow. His 0.1% chance of human extinction per year (note that this is supposedly the 'background' rate, by the way; it is definitely not the probability of a climate-induced extinction) was the highest figure he could propose that would not be taken as overly alarmist. If you think that sounds a bit off, then reflect on the fact that the mortality rate in the UK at present is around 0.8%, so Stern was saying that one could expect more than 10% of human mortality in the near future to result from human extinction. I think that is not at all unreasonable, but I can see why he didn't want to put the background extinction risk any higher. Anyway, the key point here is that none of these considerations was really about producing any kind of estimate of the likelihood of human extinction; it was just a guess that he felt would be reasonably acceptable from the point of view of trying to push up the time discount rate a bit. However, of course, once it was out there it got used, and continues to get used, as if it were something quite different.

The third concern I have is that I think it can be at least somewhat problematic to break down existential risks by threat, which people generally need to do if they are to assign probability estimates to them. To be fair, you are here interested in the probability of human extinction as a whole, which does not face this particular problem. However, many of the estimates that I have come across relate to specified threats. The issue here is that much of the damage from any particular threat comes from its systemic and cascading effects. For instance, when considering the existential threat from natural pandemics, I am quite unconcerned that a naturally occurring (or even most man-made) pathogens might literally wipe out all of humanity; the selection pressures against that would be huge. I am somewhat more concerned that such a pandemic might cause a general breakdown in global order, leading to massive global wars or the collapse of the global food supply. However, I am mostly concerned that a pandemic might cause a social collapse in a single state that possessed nuclear weapons, leading to those weapons becoming insecure. If I simply count this towards the probability of either human extinction via pandemic or via nuclear war, then that seems to me to be misleading. However, if it got counted in both, then this could lead to double counting later on. Of course, with great care and attention this sort of problem can be dealt with. However, on the whole, when people make assessments of the probability of existential risks, they tend to pool together all the available information, much of which has been produced without any coordination, making such double counting, or zero counting, not unlikely.

Please let me know if you would like me to try and write more about any of these issues (although, to be honest, I am currently quite stretched, so this may have to wait a while). You may also be interested in a piece I wrote with Peter Hurford and Catheryn Mercow (I won't lie, it was mostly they who wrote it) on how different EA organizations account for uncertainty, which has had quite a bit of impact on my thinking: http://effective-altruism.com/ea/193/how_do_ea_orgs_account_for_uncertainty_in_their/

Also, if you haven't already seen it, you might be interested in this piece by Eliezer Yudkowsky: https://www.lesswrong.com/posts/AJ9dX59QXokZb35fk/when-not-to-use-probabilities

PS: Obviously these concerns have not yet led me to give up on working with these kinds of estimates, and indeed I would like them to be made better in the future. However, they still trouble me.
