Thanks to Michael Bryan, Mike McLaren, Simon Grimm, and many folks at the NAO for discussion that led to this tool and feedback on its implementation and UI.

The NAO works on identifying potential pandemics sooner, and most of our work so far has been on wastewater. In some ways wastewater is good—a single sample covers hundreds of thousands of people—but in other ways it’s not—sewage isn’t the ideal place to look for many kinds of viruses, especially respiratory ones.

We’ve been thinking a lot about other sample types, like nasal swabs, which have the opposite tradeoffs from wastewater: a single sample covers just one person, but it’s a great place to look for respiratory viruses. With swabs, though, the binding constraint may shift from whether you’re sequencing deeply enough to see the pathogen to whether you’re sampling enough people to include someone while they’re actively shedding the virus.

The interplay between these constraints was complicated enough that we decided to write a simulator, which then offered an opportunity to pull together a lot of other things we’ve been thinking about like the effect of delay, the short insert lengths you get with wastewater sequencing, cost estimates for different sequencing approaches, and the relative abundance of pathogens after controlling for incidence. We now have this in a form that seems worth sharing publicly: data.securebio.org/simulator.
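To make the depth-versus-sampling tradeoff concrete, here is a toy Monte Carlo sketch of a swab-based program. This is not the NAO simulator's code, and every parameter (doubling time, shedding duration, per-swab detection probability) is made up purely for illustration:

```python
import random

# Toy sketch (not the NAO simulator): cumulative incidence at the moment
# the first positive swab occurs, under simple exponential epidemic growth.
# All parameter values below are illustrative assumptions.

def swab_detection_incidence(pop=1_000_000, doubling_days=3.0,
                             swabs_per_day=100, p_detect_if_shedding=0.5,
                             shedding_days=5, seed=0):
    """Fraction of the population infected when the first swab comes back
    positive. Assumes people infected in the last `shedding_days` are
    shedding, and each swab of a shedding person detects with probability
    `p_detect_if_shedding`."""
    rng = random.Random(seed)
    growth = 2 ** (1 / doubling_days)   # daily growth factor
    cumulative = 1.0                    # people infected so far
    while cumulative < pop:
        # Approximate count of currently-shedding people: those infected
        # within the last `shedding_days` of exponential growth.
        shedding = cumulative * (1 - growth ** -shedding_days)
        frac_shedding = shedding / pop
        for _ in range(swabs_per_day):
            if rng.random() < frac_shedding * p_detect_if_shedding:
                return cumulative / pop
        cumulative *= growth
    return 1.0
```

Even this crude sketch shows the qualitative behavior discussed above: with more swabs per day, detection tends to come at a lower cumulative incidence, and because swabbing is a per-person lottery, the low percentiles of many runs can be very early.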

The simulation is only as good as its inputs, and some of the inputs are pretty rough, but here’s an example. Let’s say we’re willing to spend ~$1M/y on a detection system looking for blatantly genetically engineered variants of any known human-infecting RNA virus. We’re considering two approaches:

  • Very deep weekly short-read wastewater sequencing. The main cost is the sequencing.
  • Shallower daily long-read nasal swab sequencing. The main cost is collecting the nasal swabs.

We’d like to know how many people would be infected (“cumulative incidence”) before the system can raise the alarm.

If the virus happens to shed somewhat like SARS-CoV-2, here’s what the simulator gives us:

A higher cumulative incidence at detection means more people have been infected before the alarm goes off, so on this chart lower is better. Given the inputs we used, the simulator projects that Nanopore sequencing on nasal swabs would have about twice the sensitivity of Illumina sequencing on wastewater. It also projects that the difference is larger at the low percentiles: when sequencing swabs there’s a chance someone you swab early on is infected, while with wastewater’s far larger per-sample population early cases will likely be lost to dilution. You can explore this scenario and see the specific parameter settings here.

If instead it sheds like influenza, which we estimate is ~4x less abundant in wastewater for a given incidence, the simulator gives:

This makes sense: if the pathogen sheds less in wastewater the system will be less sensitive for a given amount of sequencing.
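A rough way to see this scaling (my own back-of-envelope with made-up numbers, not the simulator's actual model): if relative abundance is roughly linear in incidence and detection requires some fixed number of pathogen reads, then a pathogen that sheds 4x less reaches the read threshold at roughly 4x the incidence for the same sequencing depth.

```python
# Back-of-envelope sketch (not the simulator): incidence at detection
# scales inversely with per-incidence relative abundance, assuming
# abundance is linear in incidence and detection needs a fixed number
# of pathogen reads. All numbers are illustrative assumptions.

def incidence_at_detection(reads_needed, depth, abundance_per_incidence):
    """Incidence at which expected pathogen reads hit the threshold."""
    return reads_needed / (depth * abundance_per_incidence)

covid_like = incidence_at_detection(reads_needed=100, depth=1e9,
                                    abundance_per_incidence=1e-6)
flu_like = incidence_at_detection(reads_needed=100, depth=1e9,
                                  abundance_per_incidence=0.25e-6)
# ~4x lower abundance -> detection at ~4x the incidence
assert abs(flu_like / covid_like - 4) < 1e-9
```

The real simulator's output differs from this linear picture (delay, insert lengths, and stochastic early growth all matter), but the first-order direction is the same.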

On one hand, please don’t take the simulator results too seriously: there are a lot of inputs we only know to an order of magnitude, and there are major ways it could be wrong. On the other hand, Jeff adds that it does represent his current best guesses for all the parameters, and he relies heavily on its output in prioritizing his work.[1]

If you see any weird results when playing with it, let us know! Whether they’re bugs or just non-intuitive outputs, that’s interesting either way.

Comments



What’s your theory for why the status quo tends to be wastewater?

For qPCR or other targeted detection approaches wastewater has quickly become a very common sample type, mostly because (a) it was very successful for covid, (b) a single sample covers hundreds of thousands of people, and (c) it's an 'environmental' sample so it's easy to get started (no IRB etc). And targeted detection is generally sensitive enough that the low concentrations are surmountable.

There isn't really a status quo for metagenomic monitoring: everything is currently in its early stages. There are academics collecting a range of samples and metagenomically sequencing them, but these don't feed into public health tracking, partly because they're not running their sequencing or analysis in a way that would give the low sample-to-results times you'd need from a real-time monitoring system.

Nice, good idea and well implemented!

In terms of wastewater being good for getting samples from lots of people at once and not needing ethics clearance, but being worse for respiratory pathogens, how feasible is airborne environmental DNA sampling? I have never looked into it, I just remember hearing someone give a talk about their work on this, I think related to this paper: https://www.sciencedirect.com/science/article/pii/S096098222101650X

I assume it is just hard to get the quantity of nucleic acids we would want from the air.

Flagging this for @Conrad K. - this seems like a better version of what you were considering building last year? If you have time you might have useful thoughts/suggestions.

I played around with the simulator a bit but didn't find anything too counterintuitive. I noticed various minor suboptimal things, depending on what you want to do with the simulator some of these may not be worth changing:

  • I found having many values in the relative abundance box for nasal swabs a bit confusing and harder to manage as a user. Why not just specify a distribution with some parameters rather than listing lots of possible values drawn from that distribution?
  • The line is not monotonic as it should be here, seemingly because the simulation hits 30% of the population and then stops. Maybe rather than have the line go back to 0, just stop it when it hits 30%, or have it plateau at 30%?
  • There were some issues with the sizing of the graph for me. I am using Chrome on Windows 11. At 100% zoom part of the x-axis label and the y-axis numbers are cut off:

    And the problem becomes worse if for whatever reason you run lots of scenarios, where the whole bottom half of the graph disappears:

Thanks for the feedback!

Why not just specify a distribution with some parameters rather than list lots of possible values drawn from that distribution?

The values in the list aren't drawn from a parametrized distribution, they're the observed values in a small study.

Maybe rather than have the line go back to 0, just stop it when it hits 30%

Done!

the y-axis numbers are cut off

Fixed!

if for whatever reason you run lots of scenarios, where the whole bottom half of the graph disappears

This was due to me not testing on monitors that had that aspect ratio. Whoops! Fixed by allowing you to scroll that section.

We've done a fairly thorough investigation into air sampling as an alternative to wastewater at the NAO. We currently have a preprint on the topic here and a much more in-depth draft we hope to publish soon. 

Oh nice, I hadn't seen that one, thanks!

Thanks for the tag @OscarD, this is awesome! I'd basically hoped to build this but then additionally convert incidence at detection to some measure of expected value based on the detection architecture (e.g. as economic gains or QALYs). Something way too ambitious for me at the time haha, but I am still thinking about this.

I definitely want to play with this in way more detail and look into how it's coded, will try and get back with hopefully helpful feedback here.
