
[From Haydn - This month's update had some particularly interesting papers on existential risk, nuclear winter, alternative foods and AI alignment, so I thought I'd cross-post them and link to all the updates, stretching back to July 2019]

Each month, The Existential Risk Research Assessment (TERRA) uses a unique machine-learning model to predict those publications most relevant to existential risk or global catastrophic risk. Please note that we provide these citations and abstracts as a service to aid other researchers in paper discovery and that inclusion does not represent any kind of endorsement of this research by the Centre for the Study of Existential Risk or our researchers.
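For readers curious how such a relevance model might work: the post does not describe TERRA's actual architecture, but a minimal sketch of an abstract-relevance classifier, assuming a standard TF-IDF plus logistic-regression pipeline and hypothetical labelled data, could look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: abstracts labelled 1 (relevant to existential
# or global catastrophic risk) or 0 (not relevant).
abstracts = [
    "We model nuclear winter scenarios and the resulting global crop failure.",
    "A new sorting algorithm with improved cache locality is presented.",
]
labels = [1, 0]

# Illustrative pipeline only; the actual TERRA model is not described here.
model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), stop_words="english"),
    LogisticRegression(max_iter=1000),
)
model.fit(abstracts, labels)

# Rank unseen papers by predicted relevance, highest first.
new_abstracts = ["Risks of space colonization and long-term governance."]
scores = model.predict_proba(new_abstracts)[:, 1]
for score, text in sorted(zip(scores, new_abstracts), reverse=True):
    print(f"{score:.2f}  {text}")
```

TERRA's real model may differ substantially; the point is only that papers can be ranked by a predicted relevance score and the top candidates surfaced for human review each month.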

The following are the 13 papers identified in the January monthly update.

1. Apocalypse Now?  

This paper explores the ongoing Covid-19 pandemic through the framework of existential risks - a class of extreme risks that threaten the entire future of humanity. In doing so, we tease out three lessons: (1) possible reasons underlying the limits and shortfalls of international law, international institutions and other actors which Covid-19 has revealed, and what they reveal about the resilience or fragility of institutional frameworks in the face of existential risks; (2) using Covid-19 to test and refine our prior 'Boring Apocalypses' model for understanding the interplay of hazards, vulnerabilities and exposures in facilitating a particular disaster, or magnifying its effects; and (3) extrapolating some possible futures for existential risk scholarship and governance.

2. ‘Solving for X?’ Towards a problem-finding framework to ground long-term governance strategies for artificial intelligence  

Change is hardly a new feature in human affairs. Yet something has begun to change in change. In the face of a range of emerging, complex, and interconnected global challenges, society's collective governance efforts may need to be put on a different footing. Many of these challenges derive from emerging technological developments – take Artificial Intelligence (AI), the focus of much contemporary governance scholarship and efforts. AI governance strategies have predominantly oriented themselves towards clear, discrete clusters of pre-defined problems. We argue that such ‘problem-solving’ approaches may be necessary, but are also insufficient in the face of many of the ‘wicked problems’ created or driven by AI. Accordingly, we propose in this paper a complementary framework for grounding long-term governance strategies for complex emerging issues such as AI in a ‘problem-finding’ orientation. We first provide a rationale by sketching the range of policy problems created by AI, and providing five reasons why problem-solving governance approaches to these challenges fail or fall short. We conversely argue that creative, ‘problem-finding’ research into these governance challenges is not only warranted scientifically, but will also be critical in the formulation of governance strategies that are effective, meaningful, and resilient over the long-term. We accordingly illustrate the relation between and the complementarity of problem-solving and problem-finding research, by articulating a framework that distinguishes between four distinct ‘levels’ of governance: problem-solving research generally approaches AI (governance) issues from a perspective of (Level 0) ‘business-as-usual’ or as (Level 1) ‘governance puzzle-solving’. In contrast, problem-finding approaches emphasize (Level 2) ‘governance Disruptor-Finding’; or (Level 3) ‘Charting Macrostrategic Trajectories’. We apply this theoretical framework to contemporary governance debates around AI throughout our analysis to elaborate upon and to better illustrate our framework. We conclude with reflections on nuances, implications, and shortcomings of this long-term governance framework, offering a range of observations on intra-level failure modes, between-level complementarities, within-level path dependencies, and the categorical boundary conditions of governability (‘Governance Goldilocks Zone’). We suggest that this framework can help underpin more holistic approaches for long-term strategy-making across diverse policy domains and contexts, and help cross the bridge between concrete policies on local solutions, and longer-term considerations of path-dependent societal trajectories to avert, or joint visions towards which global communities can or should be rallied.

3. Examining the Climate Effects of a Regional Nuclear Weapons Exchange Using a Multiscale Atmospheric Modeling Approach  

Recent studies examine the potential for large urban fires ignited in a hypothetical nuclear exchange of one hundred 15 kt weapons between India and Pakistan to alter the climate (e.g., Mills et al., 2014, and Reisner et al., 2018). In this study, the global climate forcing and response are predicted by combining two atmospheric models, which together span the micro-scale to global scale processes involved. Individual fire plumes are modeled using the Weather Research and Forecasting (WRF) model, and the climate response is predicted by injecting the WRF-simulated black carbon (BC) emissions into the Energy Exascale Earth System Model (E3SM) atmosphere model Version 1 (EAMv1). Consistent with previous studies, the radiative forcing depends on smoke quantity and injection height, examined here as functions of fuel loading and atmospheric conditions. If the fuel burned is 1 g cm−2, BC is quickly removed from the troposphere, causing no global mean climate forcing. If the fuel burned is 16 g cm−2 and 100 such fires occurred simultaneously with characteristics similar to historical large urban firestorms, BC reaches the stratosphere, reducing solar radiation and causing cooling at the Earth's surface. Uncertainties in smoke composition and aerosol representation cause large uncertainties in the magnitude of the radiative forcing and cooling. The approximately 4 yr duration of the radiative forcing is shorter than the 8 to 15 yr that has previously been simulated. Uncertainties point to the need for further development of potential nuclear exchange scenarios, quantification of fuel loading, and improved understanding of fire propagation and aerosol modeling.
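To make the fuel-loading numbers concrete, here is a back-of-envelope sketch of how total black carbon emissions scale with fuel loading; the burned area per fire and the BC emission fraction are illustrative assumptions, not values from the paper:

```python
# Back-of-envelope estimate of black carbon (BC) mass from urban fires.
# All parameter values below are illustrative assumptions, not figures
# taken from the study.
N_FIRES = 100                 # number of simultaneous urban fires
BURNED_AREA_KM2 = 13.0        # assumed burned area per fire (km^2)
BC_EMISSION_FRACTION = 0.02   # assumed fraction of fuel mass emitted as BC

def total_bc_tg(fuel_loading_g_cm2: float) -> float:
    """Total BC emitted (Tg) for a given fuel loading in g/cm^2."""
    area_cm2 = BURNED_AREA_KM2 * 1e10            # 1 km^2 = 1e10 cm^2
    fuel_g = fuel_loading_g_cm2 * area_cm2 * N_FIRES
    return fuel_g * BC_EMISSION_FRACTION / 1e12  # 1 Tg = 1e12 g

for loading in (1.0, 16.0):   # the two fuel loadings examined in the study
    print(f"{loading:5.1f} g/cm^2 -> {total_bc_tg(loading):.2f} Tg BC")
```

Under these assumed parameters the 16 g cm−2 case yields a few teragrams of BC, the order of magnitude at which earlier India-Pakistan scenarios produced stratospheric smoke; the paper's own estimates come from explicit plume simulation rather than this kind of scaling.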

4. Accounting for violent conflict risk in planetary defense decisions 

This paper provides the first-ever survey of the implications of violent conflict risk for planetary defense program decisions. Arguably, the aim of planetary defense should be to make Earth safer from all threats, including but not limited to threats from near-Earth objects (NEOs). Insofar as planetary defense projects affect other risks besides NEOs, these other risks should be taken into account. This paper evaluates three potential effects of planetary defense programs on violent conflict risk. First, planetary defense may offer a constructive model for addressing a major global risk. By documenting the history of its successes and failures, the planetary defense community can aid efforts to address other global risks, including but not limited to violent conflict. Second, the proposed use of nuclear explosions for NEO deflection and disruption could affect the role of nuclear weapons in violent conflict risk. The effect may be such that nuclear deflection/disruption would increase aggregate risks to human society. However, the effect is difficult to assess, mainly due to ambiguities in violent conflict risk. Third, planetary defense could reduce violent conflict risk by addressing the possibility of NEO collisions being mistaken as violent attacks and inadvertently triggering violent conflict. False alarms mistaken as real attacks are a major concern, especially as a cause of nuclear war. Improved awareness of NEOs and communication between astronomers and military officials could help resolve NEO false alarms. Each of these three effects of planetary defense programs on violent conflict risk can benefit from interaction between the communities that study and address NEO and violent conflict risks.

5. Risks of space colonization 

Space colonization is humankind's best bet for long-term survival. This makes the expected moral value of space colonization immense. However, colonizing space also creates risks — risks whose potential harm could easily overshadow all the benefits of humankind's long-term future. In this article, I present a preliminary overview of some major risks of space colonization: Prioritization risks, aberration risks, and conflict risks. Each of these risk types contains risks that can create enormous disvalue; in some cases orders of magnitude more disvalue than all the potential positive value humankind could have. From a (weakly) negative, suffering-focused utilitarian view, we therefore have the obligation to mitigate space colonization-related risks and make space colonization as safe as possible. In order to do so, we need to start working on real-world space colonization governance. Given the near total lack of progress in the domain of space governance in recent decades, however, it is uncertain whether meaningful space colonization governance can be established in the near future, and before it is too late.

6. Food in space from hydrogen-oxidizing bacteria  

The cost of launching food into space is very high. An alternative is to make food during missions using methods such as artificial light photosynthesis, greenhouse, nonbiological synthesis of food, electric bacteria, and hydrogen-oxidizing bacteria (HOB). This study compares prepackaged food, artificial light microalgae, and HOB. The dominant factor for each alternative is its relative mass due to the high fuel cost needed to launch a payload into space. Thus, alternatives were evaluated using an equivalent system mass (ESM) technique developed by the National Aeronautics and Space Administration. Three distinct missions with a crew of 5 for a duration of 3 years were analyzed, including the International Space Station (ISS), the Moon, and Mars. The components of ESM considered were apparent mass, heat rejection, power, and pressurized volume. The selected power source for all systems was nuclear power. Electricity to biomass efficiencies were calculated for space to be 18% and 4.0% for HOB and microalgae, respectively. This study indicates that growing HOB is the least expensive alternative. The ESM of the HOB is on average a factor of 2.8 and 5.5 less than prepackaged food and microalgae, respectively. This alternative food study also relates to feeding Earth during a global agricultural catastrophe. Benefits of HOB include recycling wastes including CO2 and producing O2. Practical systems would involve a variety of food sources.
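The ESM technique mentioned above converts each resource a system needs (pressurized volume, power, heat rejection) into a mass equivalent so that alternatives can be compared on a single axis. A minimal sketch, with placeholder equivalency factors and hypothetical system parameters rather than the study's values:

```python
# Sketch of NASA's equivalent system mass (ESM) bookkeeping as described in
# the abstract: mass, pressurized volume, power, and heat rejection are
# converted into a single mass-equivalent figure. The equivalency factors
# and system parameters below are placeholders, not the study's values.
def esm_kg(mass_kg, volume_m3, power_kw, cooling_kw,
           v_eq=66.7, p_eq=87.0, c_eq=60.0):
    """ESM = M + V*V_eq + P*P_eq + C*C_eq (factors in kg per unit)."""
    return (mass_kg
            + volume_m3 * v_eq     # kg per m^3 of pressurized volume
            + power_kw * p_eq      # kg per kW of power generation
            + cooling_kw * c_eq)   # kg per kW of heat rejection

# Hypothetical comparison of two food systems for the same mission:
hob = esm_kg(mass_kg=2000, volume_m3=50, power_kw=20, cooling_kw=20)
microalgae = esm_kg(mass_kg=4000, volume_m3=300, power_kw=90, cooling_kw=90)
print(f"HOB ESM: {hob:.0f} kg, microalgae ESM: {microalgae:.0f} kg, "
      f"ratio: {microalgae / hob:.1f}x")
```

The study's reported factors of 2.8 and 5.5 come from this kind of comparison, with mission-specific equivalency factors for the ISS, the Moon, and Mars.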

7. Potential of microbial protein from hydrogen for preventing mass starvation in catastrophic scenarios 

Human civilization's food production system is currently unprepared for catastrophes that would reduce global food production by 10% or more, such as nuclear winter, supervolcanic eruptions or asteroid impacts. Alternative foods that do not require much or any sunlight have been proposed as a more cost-effective solution than increasing food stockpiles, given the long duration of many global catastrophic risks (GCRs) that could hamper conventional agriculture for 5 to 10 years. Microbial food from single cell protein (SCP) produced via hydrogen from both gasification and electrolysis is analyzed in this study as alternative food for the most severe food shock scenario: a sun-blocking catastrophe. Capital costs, resource requirements and ramp-up rates are quantified to determine its viability. Potential bottlenecks to fast deployment of the technology are reviewed. The ramp-up speed of food production for 24/7 construction of the facilities over 6 years is estimated to be lower than other alternatives (3-10% of the global protein requirements could be fulfilled at end of first year), but the nutritional quality of the microbial protein is higher than for most other alternative foods for catastrophes. Results suggest that investment in SCP ramp-up should be limited to the production capacity that is needed to fulfill only the minimum recommended protein requirements of humanity during the catastrophe. Further research is needed into more uncertain concerns such as transferability of labor and equipment production. This could help reduce the negative impact of potential food-related GCRs.
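The "fraction of global protein requirements" figures follow from simple arithmetic on population, per-person protein needs, and production capacity. A rough sketch, with an assumed capacity value chosen only to land inside the paper's 3-10% range:

```python
# Rough arithmetic behind the ramp-up figure: what fraction of humanity's
# minimum protein requirement a given SCP capacity would cover. The
# capacity value is a placeholder, not the paper's estimate.
POPULATION = 8e9
PROTEIN_PER_PERSON_G_DAY = 50          # rough minimum recommended intake
global_need_t_day = POPULATION * PROTEIN_PER_PERSON_G_DAY / 1e6  # tonnes/day

scp_capacity_t_day = 20_000            # assumed SCP protein output, tonnes/day
coverage = scp_capacity_t_day / global_need_t_day
print(f"Global need: {global_need_t_day:,.0f} t/day; "
      f"coverage at assumed capacity: {coverage:.1%}")
```

At these assumed numbers, global minimum protein need is about 400,000 tonnes per day and the assumed first-year capacity covers roughly 5% of it, consistent with the 3-10% band the abstract reports.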

8. Methodologies and Milestones for the Development of an Ethical Seed  

With the goal of reducing more sources of existential risk than are generated through advancing technologies, it is important to keep their ethical standards and causal implications in mind. With sapient and sentient machine intelligences, this becomes important in proportion to growth, which is potentially exponential. To this end, we discuss several methods for generating ethical seeds in human-analogous machine intelligence. We also discuss preliminary results from the application of one of these methods in particular with regards to AGI Inc’s Mediated Artificial Superintelligence named Uplift. Examples are also given of Uplift’s responses during this process.

9. Putting the humanity into inhuman systems: How human factors and ergonomics can be used to manage the risks associated with artificial general intelligence  

The next generation of artificial intelligence, known as artificial general intelligence (AGI) could either revolutionize or destroy humanity. As the discipline which focuses on enhancing human health and wellbeing, human factors and ergonomics (HFE) has a crucial role to play in the conception, design, and operation of AGI systems. Despite this, there has been little examination as to how HFE can influence and direct this evolution. This study uses a hypothetical AGI system, Tegmark's “Prometheus,” to frame the role of HFE in managing the risks associated with AGI. Fifteen categories of HFE method are identified and their potential role in AGI system design is considered. The findings suggest that all categories of HFE method can contribute to AGI system design; however, areas where certain methods require extension are identified. It is concluded that HFE can and should contribute to AGI system design and immediate effort is required to facilitate this goal. In closing, we explicate some of the work required to embed HFE in wider multi-disciplinary efforts aiming to create safe and efficient AGI systems.

10. Challenges of aligning artificial intelligence with human values 

As artificial intelligence (AI) systems are becoming increasingly autonomous and will soon be able to make decisions on their own about what to do, AI researchers have started to talk about the need to align AI with human values. The AI 'value alignment problem' faces two kinds of challenges, a technical and a normative one, which are interrelated. The technical challenge deals with the question of how to encode human values in artificial intelligence. The normative challenge is associated with two questions: “Which values or whose values should artificial intelligence align with?” My concern is that AI developers underestimate the difficulty of answering the normative question. They hope that we can easily identify the purposes we really desire and that they can focus on the design of those objectives. But how are we to decide which objectives or values to induce in AI, given that there is a plurality of values and moral principles and that our everyday life is full of moral disagreements? In my paper I will show that although it is not realistic to reach an agreement on what we, humans, really want as people value different things and seek different ends, it may be possible to agree on what we do not want to happen, considering the possibility that intelligence, equal to our own, or even exceeding it, can be created. I will argue for pluralism (and not for relativism!) which is compatible with objectivism. In spite of the fact that there is no uniquely best solution to every moral problem, it is still possible to identify which answers are wrong. And this is where we should begin the value alignment of AI.

11. Norms for beneficial A.I.: A computational analysis of the societal value alignment problem  

The rise of artificial intelligence (A.I.) based systems is already offering substantial benefits to society as a whole. However, these systems may also enclose potential conflicts and unintended consequences. Notably, people will tend to adopt an A.I. system if it confers them an advantage, at which point non-adopters might push for a strong regulation if that advantage for adopters is at a cost for them. Here we propose an agent-based game-theoretical model for these conflicts, where agents may decide to resort to A.I. to use and acquire additional information on the payoffs of a stochastic game, striving to bring insights from simulation to what has been, hitherto, a mostly philosophical discussion. We frame our results under the current discussion on ethical A.I. and the conflict between individual and societal gains: the societal value alignment problem. We test the arising equilibria in the adoption of A.I. technology under different norms followed by artificial agents, their ensuing benefits, and the emergent levels of wealth inequality. We show that without any regulation, purely selfish A.I. systems will have the strongest advantage, even when a utilitarian A.I. provides significant benefits for the individual and the society. Nevertheless, we show that it is possible to develop A.I. systems following human-conscious policies that, when introduced in society, lead to an equilibrium where the gains for the adopters are not at a cost for non-adopters, thus increasing the overall wealth of the population and lowering inequality. However, as shown, a self-organised adoption of such policies would require external regulation.
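As a rough illustration of the kind of agent-based dynamic the paper studies (this is a toy model, not the authors' actual game), one can let agents imitate better-off peers and compare a selfish A.I. norm, whose advantage is extracted from non-adopters, with a "human-conscious" norm, whose advantage is not:

```python
import random

def simulate(norm, n=200, rounds=2000, seed=1):
    """Toy adoption dynamic; the payoffs and update rule are assumptions."""
    rng = random.Random(seed)
    adopt = [rng.random() < 0.1 for _ in range(n)]   # initial adopters
    wealth = [0.0] * n
    for _ in range(rounds):
        frac = sum(adopt) / n
        for k in range(n):
            if adopt[k]:
                wealth[k] += 1.5                     # baseline + A.I. advantage
            else:
                # Under the selfish norm, adopters' gains come at
                # non-adopters' expense, more so as adoption spreads.
                loss = 0.8 * frac if norm == "selfish" else 0.0
                wealth[k] += 1.0 - loss
        i, j = rng.sample(range(n), 2)               # pairwise imitation
        if wealth[j] > wealth[i]:
            adopt[i] = adopt[j]
    return sum(adopt) / n, sum(wealth)

for norm in ("selfish", "human-conscious"):
    frac, total = simulate(norm)
    print(f"{norm}: final adoption {frac:.2f}, total wealth {total:,.0f}")
```

Even in this toy version, adoption spreads under both norms, but total population wealth is lower under the selfish norm while non-adopters persist, echoing the paper's individual-versus-societal-gains tension.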

12. Role of Resource Production in Community of People and Robots 

A model of coexistence of people and artificial creatures (robots) under conditions of robot dominance is considered. The mechanism of robot dominance regulates the distribution of the produced resource between people and robots. Two types of the people’s role in resource manufacturing are examined. The first is the interchangeability of people and robots; the second is the people’s indispensability. Different scenarios of civilization evolution including extinction of the human population are described. Conditions that provide humanity with a more or less prosperous future are presented.
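A minimal sketch of the sort of difference-equation model the abstract describes, with assumed functional forms and parameters (the paper's actual equations are not reproduced here):

```python
def step(humans, robots, share_to_humans, interchangeable=True):
    """One time step of a toy human-robot resource economy (assumed forms)."""
    if interchangeable:
        production = humans + robots            # robots can replace people
    else:
        production = min(humans, robots) * 2.0  # people remain indispensable
    # The "mechanism of robot dominance": robots set the split of output.
    per_capita = share_to_humans * production / max(humans, 1e-9)
    humans *= 1.0 + 0.05 * (per_capita - 1.0)   # grow if fed above subsistence
    robots *= 1.02                              # robot stock grows steadily
    return max(humans, 0.0), robots

# Scenario: robots allocate only 30% of output to people.
h, r = 100.0, 10.0
for _ in range(200):
    h, r = step(h, r, share_to_humans=0.3)
print(f"humans after 200 steps: {h:.1f}, robots: {r:.1f}")
```

Varying the allocation share and the interchangeability assumption yields the qualitatively different trajectories the abstract mentions, from human extinction to a more or less prosperous coexistence.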

13. L’éthique clinique face à la fin du monde annoncée (Clinical ethics in the face of the foretold end of the world)

The Earth is increasingly hostile towards many living species and uninhabitable in some parts of the world. What is foretold in the coming decades is not the end of the world, but the end of the world as we know it. All over the world, many individuals (scientists, intellectuals, citizens) today believe in the inevitability of a collapse of our civilization and their existence is profoundly disrupted: Can they still plan to start a family? Should they continue their studies, or should they start preparing for survival today? Does existence still have any meaning? In clinical ethics consultations, we are confronted with requests for definitive contraception for environmental reasons that put the medical profession and the very foundations of clinical ethics in difficulty. What answers are legitimate?


Comments (2)

Most of the links to the papers seem to be broken.

Great catch, thanks - fixed!
