Hi! I'm Cullen. I've been a Research Scientist on the Policy team at OpenAI since August. I'm also a Research Affiliate at the Centre for the Governance of AI at the Future of Humanity Institute, where I interned in the summer of 2018.
I graduated from Harvard Law School cum laude in May 2019. There, I led the Harvard Law School and Harvard University Graduate Schools Effective Altruism groups. Prior to that, I was an undergraduate at the University of Michigan, where I majored in Philosophy and Ecology & Evolutionary Biology. I'm a member of Giving What We Can, One For The World, and Founders Pledge.
Some things I've been thinking a lot about include:
- How to make sure AGI benefits everyone
- Law and AI development
- Law's relevance for AI policy
- Whether law school makes sense for EAs
- Social justice in relation to effective altruism
I'll be answering questions periodically this weekend! All answers come in my personal capacity, of course. As an enthusiastic member of the EA community, I'm excited to do this! :D
[Update: as the weekend ends, I will be slower to respond, but I will still try to reply to all new comments for a while!]
What are your high-level goals for improving AI law and policy? And how do you think your work at OpenAI contributes to those goals?
My approach is generally to identify the bodies of law that will govern the relationships between AI developers and other relevant entities and actors.
Much of this is governed by well-developed areas of law, but AI development raises very unusual (and often still hypothetical) cases within them. At OpenAI, I look for edge cases in these areas. Specifically, I collaborate with technical experts working at the cutting edge of AI R&D to identify these issues more clearly. OpenAI empowers me and the Policy team to guide the organization in proactively addressing these issues.