
Sebastian_Oehm

123 karma · Joined Jun 2017

Bio

Syn bio PhD candidate at the University of Cambridge / MRC Laboratory of Molecular Biology.

Comments (7)

FYI, in-ovo sexing is currently done at country scale in both Germany and France; both have completely banned chick culling since the start of this year. Germany is also set to ban discarding eggs after 6 days of incubation by 2024.

Hey David, thanks for the post; it's always healthy to hear ideas about what not to do. I have a much more positive view of the promise and importance of antivirals for future pandemics, broadly for the following reasons.

Biological diversity & over-updating from one disease

COVID-19 vaccines have been exceptionally successful, surprisingly so even to the expert community. It appears that COVID-19 is a disease that (1) is sufficiently immunogenic to elicit strong and lasting immunity, (2) was readily adaptable to the new vaccine platforms thanks to prior research on SARS-1, and (3) shows sufficiently low antigenic variation that vaccines have remained effective so far (thanks to its unusually low mutation rate for a respiratory RNA virus).

These properties do not hold for all viruses. For example, in the HIV pandemic vaccine development has been very unsuccessful, while antiviral development has been highly successful.

One under-appreciated theme in biology is that living systems can show unexpected behavior, and observations from one example often do not generalise. The success of COVID vaccines implies that vaccines (if we can speed up clinical testing and manufacturing) can be a powerful pandemic defense. It does not, in my view, imply that they will be a sufficient defense against most or all possible threats.

Future promise of antivirals vs current performance

The absence of success stories to date is not evidence that a promising technology under development will not be successful - this is the nature of tech development. (E.g. mRNA vaccines didn't have such a success story until COVID, and we shouldn't have stopped developing mRNA vaccines in, say, 2015 because of the absence of successes.)

From a 'first-principles' look at the challenges of antivirals (as a biologist but without drug dev expertise), I am pretty excited about foundational research to accelerate their development.

a) Scale: the 'big win' would be sets of antibiotic-like therapeutics that wipe out a majority of viral pandemic risk; a close second would be 'platform therapeutics' made in response to new pathogens and deployed like vaccines.

b) Tractability: several promising approaches are discussed in the literature; in brief, (i) host-directed drugs that target human cell pathways that viruses need to replicate, (ii) virus-specific drugs (e.g. drugs targeting viral polymerases, which are distinct from human polymerases; see HIV drugs) and (iii) drugs that tune the immune system in response to an infection (e.g. dexamethasone). No doubt it's much harder than antibiotics, but we have also made a lot of progress in biology and drug development since the first antibiotic was discovered in the 1920s.

c) Neglectedness: infectious disease is, with the notable exception of HIV, not a major problem in Western countries. For this reason, antiviral research (like antibiotic research) has received much less attention and funding than other disease areas (cancer, gene therapy, neurodegenerative diseases, ...). mRNA vaccines had the advantage that mRNA technology may also be used for cancer and other diseases, so I find it likely that antivirals are especially under-invested in within the portfolio of medical countermeasures.

Portfolio theory and scientific innovation

In foundational (bio)tech development, I am pessimistic about our ability to 'pick the winners' at a high level. The history of biomedical research is full of examples of promising technologies that never succeeded, and others that were unexpectedly successful. Given such success rates, a 'split-the-money' approach of diversification will in principle always be required, though I grant that working out the relative percentages of investment is very hard.

I'm not convinced that academia is generally a bad place to do useful technical work. In the simplest case, you have the choice between working in academia, industry or a non-profit research org. All three have specific incentives and constraints (academia - fit to mainstream academic research taste; industry - commercial viability; non-profit research - funder fit, funding stability and hiring). Among these, academia seems uniquely well-suited to work on big problems with a long (10-20 year) time horizon, while having access to extensive expertise and collaborators (from colleagues in related fields), EA and non-EA funding, and EA and non-EA hires.

For my field of interest (longtermist biorisk), it appears that many of the key past innovations now helping with e.g. COVID came from academic research (e.g. next-generation sequencing, nanopore sequencing, PCR and rapid tests, mRNA vaccines and other platform vaccine tech). My personal tentative guess is that our split should be something like 4 : 4 : 1 between academia, industry and non-profit research (academia to drive long-term fundamental advances, industry/entrepreneurship to translate past basic science advances into defensive products, and non-profit research to do work that can't be done elsewhere).

Crux 1 is indeed the time horizon: if you think the problem you want to work on will either be solved within 20 years or it will be too late by then, dropping 'long-term fundamental advances' from the portfolio would seem reasonable.

Crux 2 is how much academia constrains the type of work you can do (the 'bad academic incentives'). I resonate with Adam's comment here. I can also think of many examples of groundbreaking basic science that looks defensive and gets published very well (e.g., again, sequencing innovations and vaccine tech; or, for a recent example, several papers on biocontainment published in Nature and Science).

Hey Jan and Howie,

thanks very much for the clarifying discussion. The fact that this discussion is happening (and the high number of votes on the comments) illustrates that there is at least some confusion around rating EA org vs. non-EA org careers, which is a bit concerning in itself.

FWIW, my original claim was not that people (whether at 80k or in the community) get the rational analysis part wrong. And a career path where the actual impact is a few years off should absolutely get a reduced expected value and rating. (My claim in the initial post is that many of the other paths are still competitive with EA org roles.) There is little actual disagreement that quant trading is a great career.

My worry is that many soft factors may cause people to develop preferences that are not in line with the EV reasoning, and that this may reduce motivation and/or lead people to focus overly on jobs at explicit EA employers.

Also, when you pursue some of these careers you lack a 'stamp of approval' from 80k, which you don't really need when taking a 'standard' path like working at CEA/FHI/80k/OPP or doing a top ML PhD, even if all of them were rated 10. (In the coaching days this was better, because you could just tell your doubting student group leader that this is what 80k wants you to do :) )

Hey, I'm thinking of professional 'groups' or strong networks without respect to geography, though I would guess that some professions will cluster around certain geographies. E.g. in finance you'd expect EAs to be mainly in London, Frankfurt, New York etc. And it would be preferable for members to be in as few locations as possible.

I agree that local groups are very important, and plausibly more important than professional groups. However, local groups work largely by getting members more involved in the community and providing 'push' factors towards EA careers. I think the next frontier of community building will be to add these 'pull' factors. We have made a lot of progress on the local groups side; now it is time to think about the next challenge.

Re professional community builders: this is already happening and is good. But they are largely working on getting members more engaged, rather than building strong professional 'core' communities (some people do work in this direction, but it is not a main focus).

I suspect the driving force will be volunteers at the start, similar to how student groups initially got started. These would be people who are already well-connected and have some experience in their field. This would also get around the issue that EA orgs may currently not have resources for such projects. I doubt funding will be an issue if the volunteers have these qualities.

You could try to model this by estimating how (i) talent needs and (ii) talent availability will be distributed if we further scale the community.

(i) If you assume that the EA community grows, you may think that the mix of skillsets we need in the community will change. E.g. you might believe that if the community grows by a factor of 10, we don't need 10x as many people thinking about movement-building strategy (the size of that problem does not increase linearly with the number of people) or with entrepreneurial skills (as the average org will be larger and more established), and an increase by a factor of, say, 2-5 might be sufficient. On the other hand, you'd quite likely need ~10x as many ops people.

(ii) For the talent distribution, one could model this using one of the following assumptions:

1) Linearly scale the current talent distribution (i.e. assume that the distribution of skillsets in the future community would be the same as today).

2) Assume that the future talent distribution will become more similar to a relevant reference class (e.g. the talent distribution of graduates from top universities).

A few conclusions I would draw from this, for example:

  • a weak point against building skills in start-ups: if you're great at this, start things now

  • a weak point in favour of building management skills, especially under assumption 1), but less so under assumption 2)

  • a weak point against specialising in areas where EA would really benefit from having just 2-3 experts but is unlikely to need many more (e.g. history, psychology, institutional decision making, nanotech, geoengineering), if you're also a good fit for something else, as we might just find those experts along the way

  • especially under assumption 2), a weak point against working on biorisk (or investing substantially in building skills in bio) if you might be an equal fit for technical AI safety: the maths/computer science to biology graduate ratio at most universities is closer to 1 : 1 (see https://www.hesa.ac.uk/news/11-01-2018/sfr247-higher-education-student-statistics/subjects), but we probably want 5-10x as many people working on AI as on biorisk. [The naive view based on the current talent distribution might suggest that you should work on bio rather than AI if you're an equal fit, as the current AI : bio talent ratio in the community seems to be > 10 : 1.] A toy calculation of this point is sketched below.
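As a rough illustration of that last bullet, here is a minimal back-of-envelope sketch in Python. All numbers (the 5-10x need ratio, the > 10 : 1 current community ratio, the ~1 : 1 graduate ratio) are the illustrative assumptions from the bullet above, not data:

```python
# Toy back-of-envelope sketch (illustrative assumptions only, not data).
# Compares the AI : bio talent ratio we might *need* with the ratio
# *supplied* under assumptions 1) and 2) above.

needed_ai_per_bio = 7.5  # assumed need: midpoint of the 5-10x range

supply_scenarios = {
    "1) mirror current community": 10.0,  # assumed current AI : bio ratio of > 10 : 1
    "2) mirror graduate reference class": 1.0,  # assumed maths/CS : bio ratio of ~1 : 1
}

for name, supplied_ai_per_bio in supply_scenarios.items():
    # gap > 1: AI is undersupplied relative to need; gap < 1: bio is undersupplied.
    gap = needed_ai_per_bio / supplied_ai_per_bio
    verdict = "AI relatively undersupplied" if gap > 1 else "bio relatively undersupplied"
    print(f"Assumption {name}: need/supply (AI per bio) = {gap:.2f} -> {verdict}")
```

Under assumption 1) the marginal person tilts towards bio, under assumption 2) towards AI, which is the reversal described in the bullet.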

All of this is less relevant if you believe in high discount rates on work done now rather than in 5-10 years.

Thanks a lot for sharing this. The topics and reading lists strike me as well chosen and interesting. This could be a very useful resource for local groups running discussion groups.