One of the coolest EA things I saw during the pandemic was the creation of the microCOVID risk tracker by an EA group house in San Francisco. To me, it was a really inspiring example of the principles of effective altruism in action — using rationality and curiosity to solve a concrete problem and make people’s lives better.
I had a dinner party with some friends last night, themed around how we could improve indoor air safety, starting with our local community in New York. (Some background here on how my colleagues at 1Day Sooner and I think about the air safety problem.) How can we get buildings to clean the air (by filtering it, mixing it with outdoor air, and sterilizing it with ultraviolet light) so that people don’t suffer from pollution and pathogens?
We were discussing what was feasible to accomplish politically and were struggling because a standard answer to “what air safety interventions are optimal for a space to adopt?” doesn’t yet exist. We agreed that it would be uniquely valuable to recruit early adopters (e.g. tech companies, private schools, universities) to try out solutions and test them for effectiveness in reducing disease. If well-designed, this could generate experimental evidence on effectiveness and create a template for later adopters and governments to implement.
An obvious place to start would be the EA community itself: getting EA spaces to implement air safety measures (like installing filters and upper-room UV light). There are a number of organizations that could fit the bill, and I’m aware of at least one that is exploring doing this in its own office.
One suggestion that uniquely resonated with me was the idea that the next EA Global (after EAG DC) should make its air safe. (That is, it should have a respiratory infection risk level it tries to achieve, some surrogate targets it aims to measure, and a set of indoor air interventions that are reasonably likely to achieve the intended risk level).
I don’t think this will be easy, and in fact I think it might be more likely than not that we fail. But part of what is valuable about EA is our commitment to learning from failure and improving over time. Trying to implement air safety interventions will teach us about the existing gaps that need to be filled, bringing us closer for the next EAG (and EAGx), until we reach a point where we’re proud of our community for becoming safer and a better model for achieving good outcomes elsewhere.
I recognize it already takes a tremendous amount of effort to run EA Global, and I appreciate the work CEA does putting these events on. So my intention is not to create additional burden. But biosecurity is a cause many EAs are passionate about, and air safety is one of the most promising interventions to achieve deterrence-by-denial of engineered respiratory biothreats. I feel like making our own spaces safe from pathogens is a challenge that our community can and should rise to and that doing so will have outsized benefits on our ability to accomplish future policy. If you're interested in helping with this, let me know.
At EAGx Berlin just now, I and a few others discussed 80/20 interventions.
My first suggestion was mandatory FFP2 or better masks indoors and at many outdoor activities, ideally with some sort of protection from rain – a roof or tent.
Another participant anticipated the objection that masks make it harder to read facial expressions, which could hamper communication for people who are good at using and reading them. A counter-suggestion was hence to mandate masks only for the audience during talks, since that is a time when listeners might fill a room with infectious aerosols but don’t need to talk.
Improving air quality is another good option – one I invest in a lot at home but haven’t modeled. It feels particularly suitable for EA offices and group houses.
The Less Wrong Community Weekend in Berlin succeeded with rigorous daily testing using the most sensitive test available.
All in all, I would just like to call for a lot more risk modeling to get a better idea of the magnitude of the risks to EA and EAs, and then proportionate solutions (technical or social) to mitigate the various sources of risk. Some solutions may be better suited for short events, some for offices and group houses.
This seems all easily important enough that someone should quantitatively model it.
I did the math for the last EAG London, though I underestimated the attendee count by 3–4x. (Does someone know the number?)
Without masks, the event cost 6 years of EA time (continuous, i.e. 24 hours a day, not 8). Maybe it was worth it, maybe not – hard to tell. But if everyone had worn N95 or better masks, that would have been down to about 17 days. They could have kept about 100% of the value of EAG while reducing the risk to under 1%.
If the event really had more like 900 attendees, then that’s almost 20 years of EA time that is lost in expectation through these events. I’m not trying to model this conservatively; I don’t know in which direction I’m erring.
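The calculation above can be sketched as a simple expected-value model. To be clear, the numbers below are illustrative placeholders of my own, not the parameters from my actual Guesstimate model – the point is only to show the structure of the estimate (attendees × infection probability × time lost per infection):

```python
# Illustrative back-of-the-envelope model of expected EA time lost to
# infections at a conference. All parameter values are assumptions for
# the sake of the example, NOT the Guesstimate model's actual inputs.

def expected_days_lost(attendees, p_infection, days_per_infection):
    """Expected person-days of (continuous, 24 h/day) EA time lost."""
    return attendees * p_infection * days_per_infection

attendees = 900      # rough EAG London attendee count discussed above
p_infection = 0.10   # chance a given attendee gets infected (assumed)
days_lost = 8        # expected days lost per infection, including a
                     # probability-weighted long-COVID tail (assumed)

baseline = expected_days_lost(attendees, p_infection, days_lost)
print(f"No masks:  {baseline:.0f} person-days ≈ {baseline / 365:.1f} years")

# If universal N95 wearing cut transmission by, say, 99% (assumed):
masked = expected_days_lost(attendees, p_infection * 0.01, days_lost)
print(f"With N95s: {masked:.1f} person-days")
```

With these placeholder inputs the maskless event costs about 2 person-years in expectation, dropping to roughly a week with universal N95s – the same shape of result as my estimate above, even though the exact inputs differ.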
One objection that I can see is that maybe this increases the time lost from EAGs by some low single-digit factor, and since the event is only 3 days long, that doesn’t seem so bad on an individual level. (Some people spend over a week on a single funding application, so if it’s rejected, maybe that comes with a similar time cost.)
Another, somewhat cynical, objection is the risk that someone never contributes to the effective altruism enterprise over two decades of their life because they were put off by having to wear a mask and so never talked to someone who could answer their objections to EA. Maybe losing a person like that is as bad as a few EAs losing a total of 20 years of their lives. This seems overly cynical to me, but I can’t easily argue against it either.
My Guesstimate model is here.