Boston-based, Director of Detection at SecureBio, GWWC board member, parent, musician. Switched from earning to give to direct work in pandemic mitigation. Married to Julia Wise. Speaking for myself unless I say otherwise. Full list of EA posts: jefftk.com/news/ea
@Ben Kuhn has a log at https://www.benkuhn.net/ea/ , though the last donation is 2019. I don't know whether that means they've put giving on hold or have just stopped updating the list?
I think the highest priority question is probably something about how quickly you expect AI to go. I'm mostly not planning anything 8 years out, because I think the world is likely to be massively different.
The next highest priority question is something about how you expect AI to impact your various potential career options. For example, it seems plausible to me that almost all dental care could be moderately skilled people wearing high resolution cameras and receiving real-time advice from AIs with current tech, let alone near-future tech, and 8 years is a long time for society to catch up. On the other hand, CS is becoming automated even more rapidly!
So this isn't really the question you're asking, but I'd prioritize learning how to get the most out of frontier AI systems: getting good at specifying what you want and recognizing whether you've received that. A lot of this is traditionally management. I'd try to do as much of this while in college as possible, but not stay any longer than necessary; 4y is already a very long time.
Yup! Linked from the bottom of the post.
Also (not linked; learned about it in comments) a nursing home
meltblown polypropylene may be more capable of surge than I previously assumed
Note that this factory was just producing polypropylene pellets, not melt-blown fabric or masks themselves.
The pellets also last ~indefinitely if well stored (no UV, no heat, minimal oxygen, low humidity), and so are well suited for stockpiling. But you'd probably want to move up the chain and stockpile the fabric instead, or perhaps N95s themselves, or perhaps reusable respirators, ...
Currently, when I see something that reads as AI-written, that's a pretty strong signal that the nominal author doesn't fully stand behind the post. I really hate it when I engage deeply with the arguments in a post and write a careful reply, only to learn that the author wasn't really trying to say that and didn't review the output of their AI carefully enough.
I think maybe that wasn't public until 2026-01 with Dario's "All of Anthropic’s co-founders have pledged to donate 80% of our wealth"?
It looks to me like you can't be confident that the matcher who steps in is someone other than the funder, and the funder being their own matcher-of-last-resort destroys the counterfactuality.
Let's say I intend to donate $2X to a charity. I use your system, with a pot of $X. If people donate $X, I send an additional $X to my charity some other way and it receives a total of $3X. If people donate $0 I anonymously use my second $X to meet the terms of the smart contract, and it receives a total of $2X (same as if I'd not set up this match). My $2X went to the charity regardless, and no one who contributed to the matching campaign affected the distribution of my funds.
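To make the arithmetic concrete, here's a minimal sketch of the scenario above (the variable names and X = 1000 are my own illustrative choices, not part of any real matching system). It models a funder with a $2X total budget who acts as their own anonymous matcher-of-last-resort, and shows that each public dollar moves the charity's total by exactly one dollar, i.e. the "match" is not counterfactual:

```python
X = 1000  # size of the matching pot; arbitrary units for illustration

def outcome(public: float) -> dict:
    """Model the self-matching scheme: the funder commits $2X in total
    and anonymously tops up the matching campaign as needed.
    `public` is what genuine outside donors give (0 <= public <= X)."""
    funder_anon = X - public                  # funder tops the campaign up to $X
    pot_payout = X                            # so the $X pot always pays out in full
    funder_direct = 2 * X - X - funder_anon   # remainder of the funder's $2X budget
    charity_total = public + funder_anon + pot_payout + funder_direct
    funder_total = X + funder_anon + funder_direct  # pot + anonymous + direct
    return {"charity": charity_total, "funder": funder_total}

# The funder always gives exactly $2X, whatever the public does;
# each public dollar adds exactly $1 to the charity's total.
assert outcome(0) == {"charity": 2 * X, "funder": 2 * X}
assert outcome(X) == {"charity": 3 * X, "funder": 2 * X}
```

Under these assumptions, `charity_total` works out to `2X + public` for any level of public participation, which is the same marginal impact donors would have had with no match at all.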