
Two main areas of catastrophic or existential risk have recently received significant attention: biorisk, whether from natural sources, biological accidents, or biological weapons; and artificial intelligence, whether from detrimental societal impacts of deployed systems, incautious or intentional misuse of highly capable systems, or direct risks from agentic AGI/ASI. The two have been compared extensively in research, and the comparison has even directly inspired policy. Comparisons are often useful, but in this case I think the disanalogies are much more compelling than the analogies. Below, I lay these out piecewise, attempting to keep the paired paragraphs, describing first biorisk and then AI risk, parallel to each other.

While I think the disanalogies are compelling, comparison can still be useful as an analytic tool, so long as we keep in mind that our ability to transfer lessons directly from biorisk to AI is limited by the disanalogies laid out below. (Note that this post does not discuss the interaction of the two risks, which is a critical but separate topic.)

Comparing the Risk: Attack Surface

Pathogens, whether natural or artificial, have a fairly well-defined attack surface: their hosts’ bodies. Human bodies are largely static targets, are the subject of massive research effort, and have undergone eons of adaptation that makes them more or less defensible, and our ability to fight pathogens is increasingly well understood.

Risks from artificial intelligence, on the other hand, have a near-unlimited attack surface against humanity, including not only our deeply insecure but increasingly vital computer systems, but also our bodies; our social, justice, political, and governance systems; and our highly complex and interconnected but poorly understood infrastructure and economic systems. Few of these are known to be robust, the classes of possible failure are manifold, and the systems themselves were neither adapted nor constructed for resilience to attack.

Comparing the Risk: Mitigation

Avenues to mitigate the impacts of pandemics are well explored, and many partially effective systems are already in place. Global health, in various ways, is funded on the order of tens of trillions of dollars yearly, much of which has at times been directly refocused on fighting infectious disease pandemics. Accident risk with pathogens is a major area of focus, and while current measures are manifestly insufficient to stop all accidents, decades of effort have greatly reduced the rate of accidents in laboratories working with both clinical and research pathogens. Biological weapons are banned internationally, and breaches of the treaty are both well understood to be unacceptable norm violations and limited to a few small, unsuccessful attempts in the past decades.

The risks and mitigation paths for AI, both societal and from misuse, are poorly understood and almost entirely theoretical. Recent efforts like the EU AI Act have unclear impact. The ecosystem for managing these risks is growing quickly, but at present likely includes no more than a few thousand people, with optimistically a few tens of millions of dollars of annual funding, and has no standards or clarity about how to respond to different challenges. Accidental negative impacts of current systems, both those that are poorly vetted or untested and those developed with safety in mind, are more common than not, and the scale of the risk is almost certainly increasing far faster than the response efforts. There are no international laws banning the risky development or intentional misuse of dangerous AI systems, much less norms for caution or against abuse.

Comparing the Risk: Standards

A wide variety of mandatory standards exist for disease reporting, data collection, tracking, and response. The bodies which receive the reports, at both the national and international level, are well known. There are also clear standards for safely working with pathogenic agents, which are largely effective when followed properly, and weak requirements to follow those standards not only where known dangerous agents are used but even where the danger is speculative, though these requirements are often ignored. While all of this could be more robust, improvements are on policymakers’ agendas, and researchers generally comply with risk-mitigation protocols because doing so is aligned with their personal safety.

In AI, it is unclear what should be reported, what data should be collected about incidents, and whether firms or users need to report even admittedly worrying incidents. There is no body in place to receive or handle such reports. There are no standards for developing novel, risky AI systems, and the potential safeguards that do exist are admitted to be insufficient for the types of systems developers say they are actively trying to create. There is no requirement to follow even these partial safeguards, and prevailing norms cut against doing so. Policymakers are conflicted about whether to put any safeguards in place, and many researchers actively oppose attempts to do so, dismissing the claimed dangers as absurd or merely theoretical.

Conclusion

Attempts to build safety systems are critical, and different domains require different types of systems, different degrees of caution, and different conceptual models appropriate to the risks being mitigated. At the same time, the disanalogies listed here are not in and of themselves reasons that similar strategies cannot sometimes be useful, once the limitations are understood. For that reason, the disanalogies should serve as a reminder and a caution against analogizing, not as a standalone reason to reject parallel approaches across the two domains.

Comments



This makes sense to me, good writeup!

Thanks for drawing this line between biorisk and AI risk.

Somewhat related: I often draw parallels between threat models in cyber security and certain biosecurity questions, such as DNA synthesis screening. After reading your write-up, these two seem much more closely related than biorisk and AI risk, and I'd say cyber security is often a helpful analogy for biosecurity in certain contexts. Sometimes biosecurity intersects directly with cyber security, that is, when critical information is stored digitally (like DNA sequences of concern). Would be interested in your opinion.

I think there are useful analogies between specific aspects of bio, cyber, and AI risks, and it's certainly the case that when the biorisk is based on information security, it's very similar to cybersecurity, not least in that it requires cybersecurity! And the same is true for AI risk; to the extent that there is a risk of model weights leaking, this is in part a cybersecurity issue.

So yes, I certainly agree that many of the dissimilarities with AI are not present if analogizing to cyber. However, more generally, I'm not sure cybersecurity is a good analogy for biorisk, and have heard that computer security people often dislike the comparison of computer viruses and biological viruses for that reason, though they certainly share some features.

Executive summary: Despite frequent comparisons between biorisk and AI risk, the disanalogies between these two areas of catastrophic or existential risk are much more compelling than the analogies.

Key points:

  1. Pathogens have a well-defined attack surface (human bodies), while AI risks have a nearly unlimited attack surface, including computer systems, infrastructure, and social and economic systems.
  2. Mitigation efforts for pandemics are well-funded and established, with international treaties and norms, while AI risk mitigation is poorly understood, underfunded, and lacks clear standards or laws.
  3. Disease reporting and data collection standards exist for biorisk, along with protocols for safely working with pathogens, while AI lacks reporting standards, a central body to handle reports, or requirements to follow safety standards.
  4. Despite the disanalogies, comparing biorisk and AI risk can still be a useful analytic tool, as long as the limitations of direct comparisons are understood.

This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
