Boston-based, NAO Co-Lead, GWWC board member, parent, musician. Switched from earning to give to direct work in pandemic mitigation. Married to Julia Wise. Speaking for myself unless I say otherwise. Full list of EA posts: jefftk.com/news/ea
There is actually this study from the national labs that indicates filters hardly need replacement at all. I think it's even worse than razor blades that grow dull: filters last forever!
That study looks at nuclear facilities, and I'm not sure it generalizes to environments with more particulates in the air. In a dusty enough environment your filter will surely get clogged to the point where you can't move much air through it!
I do have a question about your design (and GPT o4-mini-high is pretty skeptical): does the fan really draw all the air through the filters?
When I tested the prototype it worked well: https://www.jefftk.com/p/ceiling-air-purifier
Now that I know more about how these fans move air, I think it would work even better if the filters extended slightly lower.
We all really appreciate the work you've done for the Forum over these seven years!
Let me show you what the Forum 1.0 looked like seven years ago. ... It didn't work on mobile.
And now time for quibbles! The mobile implementation was a bit of a hack, and there were occasional elements that were way the wrong size, but it did work! (#65, initial effort). You can play with the archive; here's the old Forum on 2017-02-12.
The new one is of course way better ;)
@Julia_Wise🔸 gets credit for this! My draft had two paragraphs for "Setting aside that some people don't have the economic breathing room to make this kind of tradeoff".
did you work on side projects, do some sort of fellowship, or jump straight to applying for jobs in biosecurity?
I did a lot of reading and a side project around air filtration (1, 2, 3, 4, 5, 6, 7) that I don't think was all that helpful, but mostly I talked to people in the field about what was missing. I think it helped a lot that I was a bit of a known quantity: I'd been writing publicly for a long time, which let people have a sense of what to expect from me.
why/how did you choose the Nucleic Acid Observatory?
Based on my reading and talking with people in biosecurity, I thought the NAO was aiming to solve a really important problem and had a lot of good people, but the group as a whole was too academic: not enough experience building things, learning by doing, or moving quickly. This seemed like a project where my skills were very complementary, and I think that did end up being the case.
interested in pivoting to either biosecurity or cybersecurity
Great to hear! I don't have a good sense of what would make sense. The NAO isn't currently hiring, but at some point it's possible we'll be looking for engineers, and for candidates who were sufficiently strong in other areas, not having a bio background wouldn't be a blocker.
I do think working on side projects is often pretty good, both for getting a sense of whether you like the work and for letting other people see what you can do. And with LLMs it's easier than ever to get spun up in new domains.
I don't really have good advice on how to get into the field, though; sorry!
The social norms of EA or at least the EA Forum are different today than they were ten years ago. Ten years ago, if you said you only care about people who are either alive today or who will be born in the next 100 years, and you don’t think much about AGI because global poverty seems a lot more important, then you would be fully qualified to be the president of a university EA group, get a job at a meta-EA organization, or represent the views of the EA movement to a public audience.
This isn't just a social thing; it's also a response to a lot of changes in AI timelines over the past ten years. Back then a lot of us had views like "most experts think powerful AI is far off, so I'm not going to sink a bunch of time into how it might affect my various options for doing good", but as expert views have shifted that makes less sense. While "don't think much about AGI because global poverty seems a lot more important" is still a reasonable position to hold (ex: people who think we can't productively influence how AI goes and so we should focus on doing as much good as we can in areas we can affect), I think it requires a good bit more reasoning and thought than it did ten years ago.
(On the other hand, I see "only care about people who are either alive today or who will be born in the next 100 years" as still within the range of common EA views (ex).)
Efficiency in the sense of the fraction of particles removed per pass wouldn't decrease, but because the CFM will decrease, efficiency in the sense of CADR will decrease too.
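As a rough sketch of why (treating CADR as approximately airflow times single-pass filtration efficiency; the specific CFM numbers below are made up for illustration):

```python
def cadr(airflow_cfm: float, single_pass_efficiency: float) -> float:
    """Clean Air Delivery Rate, approximated as airflow (CFM) times the
    fraction of particles removed in a single pass through the filter."""
    return airflow_cfm * single_pass_efficiency

# Hypothetical numbers, just to illustrate the relationship:
print(cadr(100, 0.99))  # fresh filter: ~99 CFM of clean air
print(cadr(60, 0.99))   # clogged filter: captures the same fraction of
                        # particles, but moves less air, so only ~59 CFM
```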