CEO of Fathom Computing, a startup building optics-based computing hardware aimed at beneficial AGI. I handle fundraising, recruiting, long-term strategy (especially related to AI safety), and culture (I set our company up as a public benefit corporation), and I lead some of the technical areas, especially multicore fiber. Current top cause-area interests: AI safety, longtermism, and cause prioritization; I'm also working on a utility equation, mostly from a computational perspective.


New infographic based on "The Precipice". Any feedback?

Thanks for the tip! I'll change it to "existential risks".