
Each month, The Existential Risk Research Assessment (TERRA) uses a unique machine-learning model to predict the publications most relevant to existential risk or global catastrophic risk. The following is a selection of the papers identified this month.
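The post doesn't describe TERRA's model itself, but for readers curious about the general shape of such a system, here is a minimal sketch, assuming a standard supervised text-classification setup (TF-IDF features plus logistic regression) trained on abstracts that human reviewers have labelled as relevant or not. The pipeline choice and all example data below are illustrative assumptions, not TERRA's actual implementation.

```python
# Minimal sketch of a paper-relevance ranker -- an illustrative assumption,
# not TERRA's actual model or data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: abstracts labelled by human reviewers
# (1 = relevant to existential/global catastrophic risk, 0 = not).
train_abstracts = [
    "Anthropogenic existential threats to human civilization and survival ...",
    "A randomized trial of a topical treatment for mild dermatitis ...",
]
train_labels = [1, 0]

# Bag-of-words (TF-IDF) features feeding a logistic-regression classifier.
model = make_pipeline(TfidfVectorizer(stop_words="english"), LogisticRegression())
model.fit(train_abstracts, train_labels)

# Score a batch of new abstracts and list them most-relevant first.
new_abstracts = [
    "Global catastrophic biological risks from engineered pathogens ...",
    "Consumer preferences for smartphone screen sizes ...",
]
scores = model.predict_proba(new_abstracts)[:, 1]
for score, abstract in sorted(zip(scores, new_abstracts), reverse=True):
    print(f"{score:.2f}  {abstract[:60]}")
```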

Please note that we provide these citations and abstracts as a service to aid other researchers in paper discovery and that inclusion does not represent any kind of endorsement of this research by the Centre for the Study of Existential Risk or our researchers.

1. Existential Security: Towards a Security Framework for the Survival of Humanity

Humankind faces a growing spectrum of anthropogenic existential threats to human civilization and survival. This article therefore aims to develop a new framework for security policy – ‘existential security’ – that puts the survival of humanity at its core. It begins with a discussion of the definition and spectrum of ‘anthropogenic existential threats’, or those threats that have their origins in human agency and could cause, minimally, civilizational collapse, or maximally, human extinction. It argues that anthropogenic existential threats should be conceptualized as a matter of ‘security’, which follows a logic of protection from threats to the survival of some referent object. However, the existing frameworks for security policy – ‘human security’ and ‘national security’ – have serious limitations for addressing anthropogenic existential threats; application of the ‘national security’ frame could even exacerbate existential threats to humanity. Thus, the existential security frame is developed as an alternative for security policy, which takes ‘humankind’ as its referent object against anthropogenic existential threats to human civilization and survival.

2. The impact of intelligent cyber-physical systems on the decarbonization of energy

The decarbonisation of energy provision is key to managing global greenhouse gas emissions and hence mitigating climate change. Digital technologies such as big data, machine learning, and the Internet of Things are receiving increasing attention as they can aid the decarbonisation process while requiring limited investments. The orchestration of these novel technologies, so-called cyber-physical systems (CPS), provides further synergistic effects that increase the efficiency of energy provision and industrial production, thereby optimising economic feasibility and environmental impact. This comprehensive review article assesses the current as well as the potential impact of digital technologies within CPS on the decarbonisation of energy systems. Ad hoc calculations for selected applications of CPS and its subsystems estimate not only the economic impact but also the emission reduction potential. This assessment clearly shows that digitalisation of energy systems using CPS completely alters the marginal abatement cost curve (MACC) and creates novel pathways for the transition to a low-carbon energy system. Moreover, the assessment concludes that when CPS are combined with artificial intelligence (AI), decarbonisation could potentially progress at an unforeseeable pace while introducing unpredictable and potentially existential risks. Therefore, the impact of intelligent CPS on systemic resilience and energy security is discussed and policy recommendations are deduced. The assessment shows that the potential benefits clearly outweigh the latent risks as long as these are managed by policy makers.

3. Effective altruism as an ethical lens on research priorities

Effective altruism is an ethical framework for identifying the greatest potential benefits from investments. Here, we apply effective altruism concepts to maximize research benefits through identification of priority stakeholders, pathosystems, and research questions and technologies. Priority stakeholders for research benefits may include smallholder farmers who have not yet attained the minimal standards set out by the United Nations Sustainable Development Goals; these farmers would often have the most to gain from better crop disease management, if their management problems are tractable. In wildlands, prioritization has been based on the risk of extirpating keystone species, protecting ecosystem services, and preserving wild resources of importance to vulnerable people. Pathosystems may be prioritized based on yield and quality loss, and also factors such as whether other researchers would be unlikely to replace the research if it were withdrawn, as in the case of orphan crops and orphan pathosystems. Research products that help build sustainable and resilient systems can be particularly beneficial. The "value of information" from research can be evaluated in epidemic networks and landscapes, to identify priority locations, both to benefit individuals and to constrain regional epidemics. As decision-making becomes more consolidated and more networked in digital agricultural systems, the range of ethical considerations expands. Low-likelihood but high-damage scenarios such as generalist doomsday pathogens may be research priorities because of the extreme potential cost. Regional microbiomes constitute a commons, and avoiding the "tragedy of the microbiome commons" may depend on shifting research products from "common pool goods" to "public goods" or other categories. We provide suggestions for how individual researchers and funders may make altruism-driven research more effective.

4. Travel, Diarrhea, Antibiotics, Antimicrobial Resistance and Practice Guidelines—a Holistic Approach to a Health Conundrum

Given the recent interest in the interface between travelers’ diarrhea (TD) management guidelines and antimicrobial resistance from both the patient and population perspectives, we have undertaken a review of the evidence with an aim towards practical and pragmatic recommendations. Recent Findings: Antimicrobial resistance continues to be an existential threat. Therefore, an update is needed on the pathogens that cause TD, the role of antibiotics, the potential changes in the microbiome and acquisition of multi-drug resistant (MDR) bacteria, and lastly the impact of MDR acquisition by the traveler on individual, community, and global health through a holistic framework. Summary: Important research gaps and opportunities in this area are identified, and practical guidance is offered for the travel medicine community.

5. Beyond near- and long-term: Towards a clearer account of research priorities in AI ethics and society

One way of carving up the broad 'AI ethics and society' research space that has emerged in recent years is to distinguish between 'near-term' and 'long-term' research. While such ways of breaking down the research space can be useful, we put forward several concerns about the near/long-term distinction gaining too much prominence in how research questions and priorities are framed. We highlight some ambiguities and inconsistencies in how the distinction is used, and argue that while there are differing priorities within this broad research community, these differences are not well-captured by the near/long-term distinction. We unpack the near/long-term distinction into four different dimensions, and propose some ways that researchers can communicate more clearly about their work and priorities using these dimensions. We suggest that moving towards a more nuanced conversation about research priorities can help establish new opportunities for collaboration, aid the development of more consistent and coherent research agendas, and enable identification of previously neglected research areas.


Comments



Thanks for posting this.

Just thought I'd mention that I found the fifth paper listed quite interesting, and that summaries of and commentary on it can be found from Rohin Shah here and from me here.
