I’ve seen a few people in the LessWrong community congratulate the community on predicting or preparing for covid-19 earlier than others, but I haven’t actually seen the evidence that the LessWrong community was particularly early on covid or gave particularly wise advice on what to do about it. I looked into this, and as far as I can tell, this self-congratulatory narrative is a complete myth.
Many people were worried about and preparing for covid in early 2020 before everything finally snowballed in the second week of March 2020. I remember it personally.
In January 2020, some stores sold out of face masks in several different cities in North America. (One example of many.) The oldest post on LessWrong tagged with "covid-19" is from well after this started happening. (I also searched the forum for posts containing "covid" or "coronavirus" and sorted by oldest. I couldn't find an older post that was relevant.) The LessWrong post is written by a self-described "prepper" who strikes a cautious tone and, oddly, advises buying vitamins to boost the immune system. (This seems dubious, possibly pseudoscientific.) To me, that first post strikes the same ambivalent, cautious tone as many mainstream news articles published before it.
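As an aside, that "sorted by oldest" search is straightforward to reproduce programmatically. Here's a minimal sketch against the LessWrong GraphQL endpoint; the endpoint URL is real, but the "old" view, the field names, and the offset-based paging are my assumptions about the schema and may need adjusting against https://www.lesswrong.com/graphql.

```python
import requests

# Minimal sketch: fetch posts sorted oldest-first and report titles that
# mention covid. The view name and result fields are assumptions; in
# practice you would likely need to page (assumed "offset" field) through
# many batches to reach early 2020, or use the site's own search instead.
QUERY = """
query OldestPosts($limit: Int, $offset: Int) {
  posts(input: {terms: {view: "old", limit: $limit, offset: $offset}}) {
    results { title postedAt pageUrl }
  }
}
"""

resp = requests.post(
    "https://www.lesswrong.com/graphql",
    json={"query": QUERY, "variables": {"limit": 500, "offset": 0}},
)
resp.raise_for_status()

for post in resp.json()["data"]["posts"]["results"]:
    title = (post["title"] or "").lower()
    if "covid" in title or "coronavirus" in title:
        print(post["postedAt"], post["title"], post["pageUrl"])
```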
If you look at the covid-19 tag on LessWrong, the next post after that first prepper one is dated February 5, 2020. The posts don't start to get really worried about covid until mid-to-late February.
How was the rest of the world reacting at that time? Here's a New York Times article from February 2, 2020, entitled "Wuhan Coronavirus Looks Increasingly Like a Pandemic, Experts Say", well before any of the worried posts on LessWrong:
The tone of the article is fairly alarmed: it notes that streets in China are deserted due to the outbreak, compares the novel coronavirus to the 1918-1920 Spanish flu, and gives expert quotes like this one:
The worried posts on LessWrong don't start until weeks after this article was published.
Ajeya Cotra writes:
Like Ajeya, I haven't thought about this a ton. But I do feel quite confident in recommending that generalist EAs — especially the "get shit done" kind — at least strongly consider working on biosecurity if they're looking for their next thing.
EA Global is coming to New York City for the very first time, from October 10–12 at the Sheraton Times Square! And you can apply now! Why NYC, you might ask?
1. Close to policy
With the United Nations based in NYC and DC just a train ride away, NYC is well-placed to host policy professionals working on pressing global issues like AI governance, pandemic preparedness, foreign aid, and more.
2. Media capital
NYC is often called the media capital of the world, hosting major publishers and media outlets. We’re excited to welcome both writers and communications professionals to this event.
3. Philanthropic hub
NYC is home to some of the world’s most influential philanthropic organizations. It’s also a base for funders supporting projects across global health, biosecurity, AI safety, and more. We’re excited to welcome both grantmakers and those pursuing earning to give.
4. Animal welfare, biosecurity, and digital minds
Each EA Global event is partly shaped by its location, influenced by the nearby professional networks and communities. At NYC, we expect to see more experts in animal welfare, digital minds, and biosecurity, drawing from existing communities in NYC, Boston, and DC.
5. One of the largest EA communities
NYC hosts one of the biggest and most active local EA communities globally!
Apply here by September 28!
EU opportunities for early-career EAs: quick overview from someone who applied broadly
I applied to several EU entry programmes to test the waters, and I wanted to share what worked, what didn’t, and what I'm still uncertain about, hoping to get some insights.
Quick note: I'm a nurse, currently finishing a Master of Public Health, and trying to contribute as best I can to reducing biological risks. My specialisation is in Governance and Leadership in European Public Health, which explains my interest in EU career paths. I don’t necessarily think the EU is the best option for everyone. I just happen to be exploring it seriously at the moment and wanted to share what I’ve learned in case it’s useful to others.
⌨️ What I applied to & how it went
* Blue Book traineeship – got it (starting October at HERA.04, Emergency Office of DG HERA)
* European Committee of the Regions traineeship – rejected in pre-selection
* European Economic & Social Committee traineeship – same
* Eurofound traineeship – no response
* EMA traineeship (2 applications: Training Content and Vaccine Outreach) – no response
* Center for Democracy & Technology internship – no response
* Schuman traineeship (Parliament) – no response
* EFSA traineeship – interview but no feedback (I indicated HERA preference, so not surprised)
If anyone needed a reminder: rejection is normal and to be expected, not a sign of your inadequacy. It only takes one “yes.”
📄 Key EA Forum posts that informed and inspired me
* “EAs interested in EU policy: Consider applying for the European Commission’s Blue Book Traineeship”
* “What I learned from a week in the EU policy bubble” – excellent perspective on the EU policymaking environment
🔍 Where to find EU traineeships
All together here:
🔗 https://eu-careers.europa.eu/en/job-opportunities/traineeships?institution=All
Includes Blue Book, Schuman, and agency-specific roles (EMA, EFSA, ECDC...).
Traineeships are just traineeships: don’t underestimate what
AIxBio looks pretty bad and it would be great to see more people work on it
* We're pretty close to having a country of virologists in a data center with AI models that can give detailed and accurate instructions for all steps of a biological attack — with recent reasoning models, we might have this already
* These models have safeguards, but they're trivial to overcome: Pliny the Liberator manages to jailbreak every new model within 24 hours and open-sources the jailbreaks
* Open source will continue to be just a few months behind the frontier given distillation and amplification, and these can be fine-tuned to remove safeguards in minutes for less than $50
* People say it's hard to actually execute the biology work, but I don't see any step in bioweapon production that couldn't be carried out by a bio undergrad with limitless scientific knowledge; on my current understanding, the bottlenecks are knowledge bottlenecks, not manual-dexterity bottlenecks like playing a violin, which take years of practice
* Bio supply chain controls that make it harder to get ingredients aren't working and aren't on track to work
* So it seems like we're very close to democratizing (even bespoke) bioweapons. When I talk to bio experts about this they often reassure me that few people want to conduct a biological attack, but I haven't seen much analysis on this and it seems hard to be highly confident.
While we gear up for a bioweapon democracy, it seems that there are very few people working on worst-case bio, and most of those who are work on access controls and evaluations. But I don't expect access controls to succeed, and I expect evaluations to mostly be useful for scaring politicians, in part because the open-source issue means we just can't give frontier models robust safeguards. The thing most likely to actually work is biodefense.
I suspect that too many people working on GCR have moved into working on AI alignment and reliability issues and
[anonymous]
9mo
I've been doing some data crunching, and I know mortality records are flawed, but can anyone give feedback on this claim:
Nearly 5% of all deaths worldwide (1 in 20) are recorded with one of just two bacterial species, S. aureus and S. pneumoniae, as the direct primary cause.
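Since the poster asked for feedback, here is a quick sanity check of the arithmetic under stated assumptions. The pathogen figures below are placeholders roughly in line with published GBD-based estimates of deaths associated with each species (a weaker standard than direct primary causation); swap in the figures from your own sources.

```python
# Sanity check of the "nearly 5% of all deaths" claim. All inputs are
# illustrative placeholders, roughly in the ballpark of GBD-based estimates
# of deaths *associated with* each pathogen; replace them with the figures
# from your own sources before drawing conclusions.
total_global_deaths = 56_500_000  # approx. global deaths per year (2019)
s_aureus_deaths     = 1_100_000   # assumed S. aureus-associated deaths
s_pneumoniae_deaths =   830_000   # assumed S. pneumoniae-associated deaths

share = (s_aureus_deaths + s_pneumoniae_deaths) / total_global_deaths
print(f"Combined share: {share:.1%} (1 in {1 / share:.0f})")
# With these inputs: ~3.4%, about 1 in 29, noticeably short of 1 in 20.
# "Associated with" is also broader than "direct primary causation", so a
# primary-cause-only figure would likely be lower still.
```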
I'm doing a far-UVC write-up on whether it could have averted history's deadliest pandemics. Below is a snippet of my reasoning when characterising current trends in bio s-risk.
----------------------------------------
Analysis of pathogen differentials:
2021–2024 data. Sources: Our World in Data, Bill and Melinda Gates Foundation, CDC, FluStats, WHO, 80,000 Hours.
[Figure 8: Comparison of the number of identified and cultured strains by pathogen type]
[Figure 9: Comparison of the number of strains pathogenic to humans by pathogen type]
From the data, despite the considerable number of identified strains of fungi and protists, the percentage of those strains that pose a threat to humans is low (0.2% and 0.057% respectively), so the absolute number of human-pathogenic strains from these pathogen types remains similar to that of viruses and is outweighed by pathogenic bacteria.
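To make that arithmetic concrete, here is an illustrative-only sketch. The identified-strain counts are hypothetical placeholders (the real ones would come from the data behind Figures 8 and 9); only the percentages come from the paragraph above.

```python
# Illustrative-only: converting identified-strain counts plus a small
# pathogenic fraction into absolute counts of human-pathogenic strains.
# The identified counts are hypothetical placeholders; only the
# percentages are taken from the analysis above.
identified_strains = {"fungi": 150_000, "protists": 80_000}
pathogenic_fraction = {"fungi": 0.002, "protists": 0.00057}  # 0.2%, 0.057%

for taxon, count in identified_strains.items():
    pathogenic = count * pathogenic_fraction[taxon]
    print(f"{taxon}: ~{pathogenic:.0f} strains pathogenic to humans")
```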
No archaea have yet been identified as pathogenic to humans; however, a limitation is that identification efforts are sparse, and candidates from extremophile domains tend to be poorly suited to laboratory culture conditions.
The burden of human pathogenic disease appears to cluster in a small minority of strains of bacterial, viral, fungal, and Protoctista origin.
Furthermore, interventions can be asymmetric in efficacy. Viral particles tend to be much smaller than bacterial or droplet-based aerosols, so airborne viral infections such as measles spread much more quickly in indoor spaces and are not meaningfully prevented by typical surgical-mask filters, whereas transmission via heavy droplets or bodily fluids, as with colds or HIV, can be more effectively prevented.
[anonymous]
9mo
The recent pivot by 80,000 Hours to focus on AI seems (potentially) justified, but the lack of transparency and input makes me wary.
https://forum.effectivealtruism.org/posts/4ZE3pfwDKqRRNRggL/80-000-hours-is-shifting-its-strategic-approach-to-focus
TL;DR:
80,000 Hours, once a cause-agnostic, broad-scope introductory resource (career guides, career coaching, blogs, podcasts), has decided to focus on upskilling and producing content about AGI risk, AI alignment, and an AI-transformed world.
----------------------------------------
According to their post, they will still host their backlog of content on non-AGI causes but may not promote or feature it. They also say that roughly 80% of new podcasts and content will be AGI-focused, and that other cause areas, such as nuclear risk and biosecurity, may have to be covered by other organisations.
Whilst I cannot claim in-depth knowledge of the norms around such shifts, or of AI specifically, I want to set aside the actual claims behind the shift and focus instead on the potential friction in how the change was communicated.
To my knowledge (please correct me), there was no public information or consultation beforehand, and I had no forewarning of this change. Organisations such as 80,000 Hours may not owe this degree of openness, but since openness is a value heavily emphasised in EA, it feels slightly alienating.
Furthermore, the actual change may not be so dramatic, but it has left me grappling with the thought that other large organisations could pivot just as quickly. This isn't necessarily bad in itself, and it has the advantage of signalling being 'with the times' and 'putting your money where your mouth is' on cause-area risks. However, in an evidence-based framework, surely at least some heads-up would go a long way toward reducing short-term confusion or gaps.
Many introductory programmes and fellowships use 80k resources, sometimes as embeds rather than as standalone resources. Despite claimi