Hi! I'm Cullen. I've been a Research Scientist on the Policy team at OpenAI since August. I'm also a Research Affiliate at the Centre for the Governance of AI at the Future of Humanity Institute, where I interned in the summer of 2018.
I graduated from Harvard Law School cum laude in May 2019. While there, I led the Harvard Law School and Harvard University Graduate Schools Effective Altruism groups. Before that, I was an undergraduate at the University of Michigan, where I majored in Philosophy and Ecology & Evolutionary Biology. I'm a member of Giving What We Can, One For The World, and Founders Pledge.
Some things I've been thinking a lot about include:
- How to make sure AGI benefits everyone
- Law and AI development
- Law's relevance for AI policy
- Whether law school makes sense for EAs
- Social justice in relation to effective altruism
I'll be answering questions periodically this weekend! All answers come in my personal capacity, of course. As an enthusiastic member of the EA community, I'm excited to do this! :D
[Update: as the weekend ends, I'll be slower to reply, but I'll still try to respond to all new comments for a while!]
For the average law school grad, what specific knowledge is most important to develop for working in AI Policy?
How to implement ML? A conceptual understanding of the history of ML? Math, like linear algebra? Coding or computer science more generally? Considerations around AI forecasting & AI risk? Current work on AI policy or technical safety? Histories of revolutionary technologies?
It's hard to imagine it ever being too much, TBH. I and most of my colleagues continue to invest in AI upskilling. That said, lots of other skills are worth having too. Basically, I view it as a process of continual improvement: I will probably never have "enough" ML skill, because the field moves faster than I can keep up, and there are approximately linear returns to it (and to a bunch of other skills I've mentioned in these comments).