Is helpful/friendly :-) Loves to learn. Wants to solve neglected problems. See website for current progress.
I'm very interested in talking to biosecurity experts about neglected issues with: microneedle array patches, self-spreading animal/human vaccines, paper-based (or other cheap) microfluidic diagnostics, and/or massively-scalable medical countermeasure production via genetic engineering.
I'm also interested in talking to experts on early childhood education and/or positive education!
Reach out if you have questions about:
I'll respond fastest on LinkedIn :-)
"There are many other things that could have been done to prevent Russia’s unprovoked, illegal attack on Ukraine. Ukraine keeping nuclear weapons is not one of them."
Context: I'm hoping to learn lessons in nuclear security that are transferable to AI safety and biosecurity.
Question: Would you have any case studies or advice to share on how regulatory capture and lobbying were mitigated in US nuclear security regulations and enforcement?
Are there any misconceptions, stereotypes, or tropes that you commonly see in academic literature around nuclear security or biosecurity that you could correct given your perspective inside government?
Could you share the top 3 constraints and benefits you had in improving global nuclear security while you were working for the US DoD compared to now, when you're working as an academic?
Context: I'm hoping to find lessons from nuclear security that are transferable to the security of bioweapons and transformative AI.
Question: Are there specific reports you could recommend on prevening these nuclear security risks:
Any updates on how the event went? :-) Any cause priorities or research questions identified to mitigate existential cybersecurity risks?
A lot of people have gotten the message from EA: "Direct your career towards AI safety!" Yet there seem to be far too few opportunities to get mentorship or a paying job in AI safety. (I say this having seen others' comments on the forum and having personally applied to 5+ fellowships where there were 5-30x more applicants than spots.)
What advice would you give to those feeling disenchanted by their inability to make progress in AI safety? How is 80,000 Hours working to better (though perhaps not entirely) balance the supply and demand for AI safety mentorship/jobs?
For what it's worth, I run an EA university group outside of the U.S. (at the University of Waterloo in Canada). I haven't observed any of the points you mentioned in my experience with the EA group:
Which university EA groups specifically did you talk to before proclaiming "University EA Groups Need Fixing"? Based only on what I read in your article, a more accurate title would be "Columbia EA Needs Fixing".
Out of curiosity @LondonGal, have you received any follow-ups from HLI in response to your critique? I understand you might not be at liberty to share all the details, so feel free to respond as you see fit.
Here's a summary of the report from Claude-1, if anyone is looking for an 'abstract':