Tessa

Working (6-15 years of experience)
220875018 · Paris, France · Joined Jan 2017
tessa.fyi

Bio

Let's make nice things with biology. Working on biosecurity at iGEM. Also into lab automation, event production, donating to global health. From Toronto, lived in the SF Bay, currently à Paris. Website: tessa.fyi

Comments: 165

Topic Contributions: 30

There is definitely a lot of further research on some of these specific ideas (I tried to link out to a few projects), but I don't know of much comparative research on them. It's possible there are internal ITN estimates at some grantmaking orgs? And this graph from Technologies to Address Global Catastrophic Risk points in the right direction (though it doesn't focus on neglectedness):


Additionally, I believe some organizations in the EA community (e.g. Open Philanthropy and Convergent Research) are working on deeper strategic / comparative investigations of possible biorisk mitigation efforts.

Inside-view understanding of policymaking in major / emerging bioeconomies outside the US/Europe. I'm thinking BRICS (Brazil, Russia, India, China and South Africa) but also countries that will have huge economies/populations this century, like Nigeria, Indonesia, Pakistan, and the DRC, countries with BSL-4 labs, and places with regulatory environments that allow broader biotechnology experimentation (e.g. Israel, Singapore).

I don't know how much of her time Jennifer Doudna spends thinking about bioweapons, but I do think she spends a lot of time thinking about the ethical implications of CRISPR. If you read things like this NYT interview with her from last week, she's saying things like:

Interviewer: It’s also easy to imagine two different countries, let alone two different people, having competing ideas about what would constitute ethical gene editing. In an optimal world, would there be some sort of global body or institution to help govern and adjudicate these decisions?

Doudna: In an optimal world? This is clearly a fantasy.

Interviewer: OK, how about a suboptimal one?

Doudna: The short answer is: I don’t know. I could imagine that given the complexities of using genome editing in different settings, it’s possible that you might decide to use it differently in different parts of the world. Let’s say an area where a mosquito-borne disease is endemic, and it’s dangerous and high risk for the population. You might say the risk of using genome editing and the gene drive to control the mosquito population is worth it. Whereas doing it somewhere else where you don’t face the same public-health issue, you might say the risk isn’t worth it. So I don’t know. The other thing is, as you indicated with the way you asked the question, having any global regulation and enforcing it — hard to imagine how that would be achieved. It’s probably more realistic to have, as we currently do, scientific entities that are global that study these complex issues and make formal recommendations, work with government agencies in different countries to evaluate risks and benefits of technologies.

This doesn't seem like a person who is just arguing "CRISPR should be everywhere, for everyone". I also don't think she claims to be an expert at making bioethical determinations about which technologies should be deployed. My sense from hearing her speak publicly is that she has reluctantly taken on the mantle of urging a sober and open discussion about where and how CRISPR should be used, while not feeling particularly qualified to make those determinations herself. The Innovative Genomics Institute, which she co-founded, has an entire research area dedicated to Public Impact, including initiatives like the Berkeley Ethics and Regulation Group for Innovative Technologies. You can argue that these actions are poorly targeted, but I don't think it's accurate to frame Doudna as a naively pro-technology actor.

You say "we can't control drugs, guns, or even reckless driving". I don't think that's entirely true. For example, the RAND meta-analysis What Science Tells Us About the Effects of Gun Policies shows moderate evidence that violent crime can be reduced by prohibitions associated with domestic violence, background checks, waiting periods, and stand-your-ground laws. Similarly, I believe that progress in car safety engineering has radically reduced the human suffering caused by reckless driving. I have heard biosecurity professionals use cars as an example of a technology that was deliberately and successfully engineered to be safer.

I also suspect the learning you describe ("anyone who has reached the level of expert in a field like genetic engineering has too large of a personal investment") is too strong a conclusion to draw from your experience. People infer a lot about what it might be like to engage with someone from how they attempt dialogue; I don't know what the content of your posts was, but posting similar content every day seems likely to cause observers to conclude that you have very strongly-held beliefs and are willing to violate social norms to attempt to spread those beliefs, which might lead them to decide that engaging in dialogue with you would be unpleasant or unproductive.

(I will note that I hesitated to write this reply because of the tone of your comment, but then didn't want the only comment on a post targeted towards people interested in the field of biosecurity to be so despairing about its prospects; I personally believe there is a lot of useful work that can be done to reduce risks from pandemics.)

I would love to see other more targeted and ambitious efforts to influence others where the KPI isn't the number of highly-engaged EAs created.

+1, EA is a philosophical movement as well as a professional and social community.

I agree with this post that it can be useful to spread the philosophical ideas to people who will never be a part of the professional and social community. My sense from talking to, for example, senior professionals who have been convinced to reallocate some of their work to EA-priority causes is that this can be extremely valuable. I've also heard some people say they value a highly-engaged EA far more than a semi-engaged person, but I think they are probably underweighting the value of mid-to-senior people who do not become full-blown community members but are nevertheless influenced to put some of their substantial network and career capital towards important problems.

On a separate note, I perceive an extremely high overlap between the "professional" and "social" for the highly-engaged EA crowd. For example, my sense is that it's fairly hard to get accepted to EA Global if your main EA activity is donating a large portion of your objectively-high-but-not-multimillionaire-level tech salary, i.e. you must be a part of the professional community to get access to the social community. I think it would be good to [create more social spaces for EA non-dedicates](https://forum.effectivealtruism.org/posts/aYifx3zd5R5N8bQ6o/ea-dedicates).

This feels very related to the recent post Most Ivy-smart students aren't at Ivy-tier schools, which notes near the beginning: 

I don't address/argue the normative claim that EA should focus less on college rank at the individual (e.g., hiring) and/or community (e.g., which schools' EA groups to invest more resources in developing) levels, but that would indeed be a non-crazy takeaway if the post makes you update in the direction I expect it to, on average!  

While referencing the 7 Generations principle, I would credit it to "the Iroquois Confederacy" or "the Haudenosaunee (Iroquois) Confederacy" rather than "the Iroquois tribe". There isn't one tribe associated with that name; it's an alliance formed by the Mohawk, Oneida, Onondaga, Cayuga and Seneca (and joined by the Tuscarora in 1722).

(Aside: In Ontario, where I'm from, we tend to use the word "nation" rather than "tribe" to refer to the members of the confederacy, but it's possible this is a US/Canada difference, and the part that bothered me was the inaccuracy of the singular more than the specific word choice.)

Thanks for putting together the summary, I enjoyed reading it!

I really liked this post, and resonate strongly with the sentiment of "Nothing can take donating away from me, not even a bad day". 

Although I do direct work on biosecurity, my donations (~15% of gross income) go almost entirely to global health and wellbeing, and some of this is because I want to be reassured that I had a positive impact, even if all my various speculative research ideas (and occasional unproductive depressive spirals) amount to nothing.

I would be curious how you feel that intersects with the wording of the GWWC pledge, which includes 

I shall give __ to whichever organisations can most effectively use it to improve the lives of others

As the sort of pedant who loves a solemn vow, I wonder if my global health and wellbeing donations are technically fulfilling this pledge, based on my judgements of how to improve the lives of others. That said, this only bothers me a little because, you know, this mess of incoherent commitments is out here giving what she can, and I recognize that might not meet a theoretical threshold of "most effective".

Relatedly, an area where I think arXiv could have a huge impact (in both biosecurity and AI) would be setting standards for easy-to-implement managed access to algorithms and datasets.

This is something called for in Biosecurity in an Age of Open Science:

Given the misuse potential of research objects like code, datasets, and protocols, approaches for risk mitigation are needed. Across digital research objects, there appears to be a trend towards increased modularisation, i.e., sharing information in dedicated, purpose built repositories, in contrast to supplementary materials. This modularisation may allow differential access to research products according to the risk that they represent. Curated repositories with greater access control could be used that allow reuse and verification when full public disclosure of a research object is inadvisable. Such repositories are already critical for life sciences that deal with personally identifiable information.

This sort of idea also appears in New ideas for mitigating biotechnology misuse under responsible access to genetic sequences and in Dual use of artificial-intelligence-powered drug discovery as a proposal for managing risks from algorithmically designed toxins.

The paper Existential Risk and Cost-Effective Biosecurity makes a distinction between a Global Catastrophic Risk and an Existential Risk in the context of biological threats:

Quoting the caption from the paper: A spectrum of differing impacts and likelihoods from biothreats. Below each category of risk is the number of human fatalities. We loosely define global catastrophic risk as being 100 million fatalities, and existential risk as being the total extinction of humanity. Alternative definitions can be found in previous reports, as well as within this journal issue.