
Background

Oxford Biosecurity Group (OBG) conducts global research projects in collaboration with technical and policy organisations, with the aims of fostering talent and reducing global catastrophic biological risks (GCBRs) and risks at the intersection of AI and biology.

Each project is led by a project lead and staffed by 3-6 researchers, each spending 5-10 hours a week on research. Past project topics include AI-bio risks, synthetic DNA regulation policy, and lab safety governance, with collaborators such as 1Day Sooner, CEPI, and the Center for AI Safety. Researchers and project leads gain skills, experience, and networks in biosecurity, and we refer promising researchers for additional support and opportunities. Projects typically run in ‘project cycles’ of 5 or more projects, which creates economies of scale for operational processes such as recruitment and onboarding, and makes resource sharing and networking across projects easier. Since our founding in October 2023, we have run 3 project cycles, including our pilot cycle. Past projects have been 7-8 weeks long; we plan to extend projects in future cycles to 10-12 weeks.

Past Impact

In the two project cycles after our pilot, 51 researchers completed 14 projects run by 12 project leads with 8 collaborating organisations or initiatives. We identified 31 promising participants for further support and opportunities. Past participants' next steps include BlueDot Impact (as facilitator, participant, and teaching fellow), the Existential Risk Laboratory, Oxford Nanopore Technologies, relevant further study, founding and directing a non-profit, and continuing full-time or part-time work with the collaborating organisation. Project outputs have been presented at the International Pandemic Sciences Conference 2024 and have fed into collaborator strategies, and three projects are planned to feed into academic publications. For example, the results of the project ‘Eliciting Biological Knowledge of AI Models’ were uploaded as a preprint, which has received significant interest from relevant stakeholders and was used by OpenAI in the biosecurity threat evaluation for their o1 system card (see section 4.5.7 of the system card here).

For the full list of our past project outcomes, see here. Our full list of participant testimonials and next steps is not shareable for privacy reasons, but we can give some further details privately upon request. We will release a more detailed impact evaluation in the next few weeks.

We expect most of our impact to come from identifying promising people who might not otherwise be identified (e.g. people who are more interested in projects than courses, and mid-career professionals looking for object-level work), and helping them gain skills, experience, and connections. We also enable previously identified promising people to gain experience doing biosecurity work, which can help them when applying for future jobs and grants. Direct project impact is hits-based and is likely to come from increasing the capacity of relevant organisations, allowing additional valuable work to be done, or to be done sooner.

Early 2025 Plans and Use of Funding

We plan to run our next main project cycle starting in early 2025, beginning with a project lead recruitment round. The aim is to attract people with entrepreneurial, operational, and managerial skills, which have been identified as a gap within biosecurity. We are also scoping out additional projects, including some specifically targeting more experienced professionals, which we will announce soon.

To confirm the dates of the project cycle, we are fundraising for a minimum of £13,300/$16,700, which will cover the remaining costs of a bootstrapped version. This will primarily cover project lead stipends and project-specific costs (e.g. compute for AI-bio projects). If the full amount is received from our outstanding funding applications, donations will instead be used to run additional projects.

Donate here

Thank you very much for your support. We're happy to answer questions via comment or direct message.
