Let's make nice things with biology. Working on biosecurity at iGEM. Organizing East Bay Biosecurity from Toronto. Website: tessa.fyi


Announcing riesgoscatastroficosglobales.com

Well done! A related initiative that comes to mind is La Iniciativa para la Seguridad Global, an organization "dedicated to the Non-Proliferation of Weapons of Mass Destruction" that was formed in 2020. Some of its biosecurity and nuclear weapons experts might be interested in joining this network.

How can economists best contribute to pandemic prevention and preparedness?

Thanks for this post! I love a good research agenda. Some other relevant bits of work:

Gifted $1 million. What to do? (Not hypothetical)
Answer by tessa, Aug 30, 2021

Congratulations! According to the Founder's Pledge FAQ, anyone who holds equity in a company can participate. They offer a bunch of High-Impact Giving Support. You might be able to book a call with them and get advice about how to efficiently donate equity.

Are we "trending toward" transformative AI? (How would we know?)

I think you have an acronym collision here between HLMI = "human-level machine intelligence" = "high-level machine intelligence". Your overall conclusion still seems right to me, but this collision made things confusing.


I got confused because the evidence provided in footnote 11 didn't seem (to me) like it implied "that the researchers simply weren't thinking very hard about the questions". Why would "human-level machine intelligence" imply the ability to automate the labour of all humans?

My confusion was resolved by looking up the definition of HLMI in part 4 of Bio Anchors. There, HLMI is referring to "high-level machine intelligence". If you go back to Grace et al. 2017, they defined this as:

“High-level machine intelligence” (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers.

This seems stronger to me than human-level! Even "AI systems that can essentially automate all of the human activities needed to speed up scientific and technological advancement" (the definition of PASTA above) could leave some labour out, but this definition does not.

I think your conclusion is still right. There shouldn't have been a discrepancy between the forecasts for HLMI and "full automation" (defined as "when for any occupation, machines could be built to carry out the task better and more cheaply than human workers"). Similarly, the expected date for the automation of AI research, a job done by human workers, should not be after the expected date for HLMI.

Still, I would change the acronym and maybe remove the section of the footnote about individual milestones; the milestones forecasting was a separate survey question from the forecasting of automation of specific human jobs, and it was more confusing to skim through Grace et al. 2017 expecting those data points to have come from the same question.

Examples of Successful Selective Disclosure in the Life Sciences

Aw, it's always really nice to hear that people are enjoying the words I fling out onto the internet!

Often both the benefits and risks of a given bit of research are pretty speculative, so evaluation of specific cases depends on one's underlying beliefs about potential gains from openness and potential harms from new life sciences insights. My hope is that there are opportunities to limit the risks of disclosure while still getting the benefits of openness, which is why I want to sketch out some of the selective-disclosure landscape between "full secrecy by default" (paranoid?) and "full openness by default" (reckless?).

If you'd like to read a strong argument against openness in one particularly contentious case, I recommend Gregory Koblentz's 2018 paper A Critical Analysis of the Scientific and Commercial Rationales for the De Novo Synthesis of Horsepox Virus. From the paper:

This article evaluates the scientific and commercial rationales for the synthesis of horsepox virus. I find that the claimed benefits of using horsepox virus as a smallpox vaccine rest on a weak scientific foundation and an even weaker business case that this project will lead to a licensed medical countermeasure. The combination of questionable benefits and known risks of this dual use research raises serious questions about the wisdom of undertaking research that could be used to recreate variola virus.


The putative benefit to synthesizing horsepox virus for use as a smallpox vaccine rests on four assumptions made by Tonix: that the modern-day smallpox vaccine based on vaccinia virus is directly descended from horsepox virus, that ancestral horsepox virus is a safer candidate for a human vaccine than derived vaccinia virus, that current smallpox vaccines are not safe enough, and that there is a significant demand for a new smallpox vaccine. All four of these scientific and commercial claims need to be true to fully realize the expected benefit of synthesizing horsepox virus. I argue that there are serious doubts that all of these assumptions are valid, raising important questions about the wisdom of synthesizing this virus given the risks posed by pioneering a technique that could be used to recreate variola virus.

Mental Health Resources tailored for EAs (WIP)

Lynette Bye's post on Resources On Mental Health And Finding A Therapist, which you link in the first footnote, also includes links out to several resources:

Among effective altruists, there’s a particular pattern of mental health problems related to feeling guilty about not doing enough to help the world: feeling guilty setting personal boundaries, or worrying that you’re not smart enough to make a difference, or thinking that what you’re doing is good but just not “good enough” to matter. Desperation Hamster Wheels is a great description of one EA’s experience with this, Helen Toner’s sustainable motivation talk is good, and the Replacing Guilt series on Minding Our Way seems to be particularly valuable in helping find a healthy balance to these thoughts so that you actually make progress toward your goals.

If you want more resources, Scott Alexander’s new psychiatry practice has a growing database of mental-health-related resources. Ewelina Tur, an EA therapist, has a list of mental health resources here, including workbooks, mental health apps, and audiobooks. The EA Mental Health Navigator website has a list of virtual mental health resources here.

Mental Health Resources tailored for EAs (WIP)

I've found many of Julia Wise's posts to be useful for pushing against "maximize ONLY EA THINGS" tendencies in my life, in particular:

  • Cheerfully - a post about how, even if you agree with the younger Julia's statement that "my happiness is not the point", you need to find a way to relate to your future as more than an obligation
  • You have more than one goal, and that's fine - trying to abandon the feeling that "the harsh light of cost-effectiveness" should be turned on everything you do (a somewhat similar mood is expressed in Scott Alexander's post Nobody is perfect, everything is commensurable but I found Julia's post to be more closely targeted at my emotional motivations)
  • Burnout and self-care - lessons on sustainable, bounded generosity drawing on Julia's time as a social worker

I would also recommend browsing the posts that have been upvoted under the effective altruism lifestyle and self-care tags.

Lessons from Running Stanford EA and SERI

This post is delightful! I really appreciate how much effort you put into offering honest context (including things like funding and per-week hourly commitments). Especially in combination with your discussion of mindset, friendship and motivation, and the detailed best practices (with links out to resource documents! nice!) the work that went into this post (and Stanford EA more broadly) makes sentences like the following:

I think that our growth is replicable, and that you do not need to be a superstar public speaker or highly experienced organizer to run a successful group (I sure wasn’t).

ring true! Congrats on writing such solid motivation fuel.

Questions for Howie on mental health for the 80k podcast

Are there cognitive distortions that you think members of the EA community should particularly watch out for? If you've managed to move past some cognitive distortions that previously had a negative impact on you, what helped?

How students, groups, and community members can use funding

Under technology investments, I think investing in A/V equipment can be valuable if your group ever does in-person talks!

  • Amplified voices (i.e. buying a microphone + speaker setup) can be a huge boon to group members who are hard-of-hearing or have audio processing issues
  • A projector or large flatscreen can make it much easier to read slides
  • USB lavalier/lapel microphones can allow you to record high-quality audio during talks
  • I have never considered buying a full-on camera for recording talks, but I have wished I had a better tripod to hold my phone