
The 7th edition of the Next Generation for Biosecurity Competition was recently opened for applications. This is an annual competition hosted by the Nuclear Threat Initiative in partnership with the Next Generation for Global Health Security Network (NextGen), the iGEM Foundation, 80,000 Hours, SynBio Africa, and the Global Health Security Network (GHSN).

We took part in this competition last year and were fortunate to have our work recognised as the winning entry. Wonderful work by another team was also recognised as an outstanding submission. We think this is a good opportunity for students and early-career professionals interested in biosecurity to (i) forge strong networks with each other and with experts, (ii) develop a focused piece of work that can be further expanded upon, and (iii) push the envelope on how we approach the growing bioeconomy. We agree with Joshua Monrad’s reasons for participating in the competition, listed in his forum post last year.

The full prompt, eligibility, and submission requirements for the competition can be found here. In summary,

  • The competition calls for applicants to design a policy proposal that promotes biosecurity-by-design as a way to bolster emerging bioeconomies. 
  • Teams must have three participants and include members from two or more countries and/or regions, with representation from different fields strongly encouraged. 
  • Applicants must be currently enrolled in an academic institution or have less than five years of professional experience.
  • The submission deadline is September 11, 2023, 11:59 PM ET.
  • Winners will have their paper published on the NTI website, and receive a sponsored opportunity to attend and present their work at a prestigious international biosecurity event.
  • There will be an informational webinar on July 12, 2023, 9:00 AM ET for interested applicants that includes a discussion on the topic and an opportunity to form teams with other applicants.

Our experience

  • How did we form our group?

The competition website lists several ways to link up with other applicants. We did not know each other beforehand but we connected over social media, and were fortunate that we had a balanced set of skills and perspectives. 

  • How much time did we spend on the competition? 

Last year’s competition had tighter time constraints, so we had roughly weekly hour-long meetings for 4-5 weeks, plus ~2-3 hours of preparation outside the meetings. There seems to be more time before this year’s deadline, so the work should not need to be as intense.

Advice

  • Be open to broad expertise - from biotech and beyond

We found our meetings especially meaningful because we had different technical and cultural backgrounds, which brought multiple perspectives into the discussion. This is also an opportunity to work with people outside the EA community, and the collaboration may even be their first interaction with EA!

  • Broaden your contextual awareness

When thinking about problems and solutions in biosecurity, there is a tendency to default to popular topics, which are often resource-intensive. It is important to be aware of how resource constraints can limit the feasibility and acceptability of such policy proposals. One way to build this awareness (as early-career individuals without a seasoned perspective) is to engage with experts and stakeholders who are familiar with low-resource settings. This is especially important for this year’s prompt.

  • Strike a balance between grounded and ambitious proposals

It’s easy to get carried away proposing ambitious policy ideas, but it is equally important to ensure that those ideas are realistic and achievable. Framing your recommendations to balance incremental progress against ambition will not only make the proposal(s) more palatable, but also serve as a useful test of whether you enjoy straddling these considerations. Interacting with experts helps a lot in keeping proposals grounded in reality, and we encourage using these considerations to guide your brainstorming.

  • Study and acknowledge existing work

Many ideas and arguments that initially seem novel have most likely been proposed before, perhaps under a different framing. Doing your due diligence and acknowledging past work will make your proposals more robust and ensure that you are adding to the conversation rather than rehashing old points.

Overall, the competition really did help us feel that our ideas were part of the next generation of biosecurity. Participating helped us refine our understanding of past issues, incentives, and constraints so that we could prioritise action that will work in the real world, in a way that does not come naturally from just engaging with articles, books, webinars, and posts. We hope you find similar value in the process. Good luck to all applicants!

Note: We thank Gabby Essix for feedback on this post.
