Hi! I'm Cullen. I've been a Research Scientist on the Policy team at OpenAI since August. I'm also a Research Affiliate at the Centre for the Governance of AI at the Future of Humanity Institute, where I interned in the summer of 2018.
I graduated from Harvard Law School cum laude in May 2019. There, I led the Harvard Law School and Harvard University Graduate Schools Effective Altruism groups. Before that, I was an undergraduate at the University of Michigan, where I majored in Philosophy and Ecology & Evolutionary Biology. I'm a member of Giving What We Can, One For The World, and Founders Pledge.
Some things I've been thinking a lot about include:
- How to make sure AGI benefits everyone
- Law and AI development
- Law's relevance for AI policy
- Whether law school makes sense for EAs
- Social justice in relation to effective altruism
I'll be answering questions periodically this weekend! All answers come in my personal capacity, of course. As an enthusiastic member of the EA community, I'm excited to do this! :D
[Update: as the weekend ends, I will be slower to reply, but I will still try to respond to all new comments for a while!]
Limiting the discussion to the most impactful jobs from an EA perspective, I think it can be pretty hard, for reasons I lay out here. I got lucky in many, many ways, including that I was accepted to 80K coaching, turned out to be good at this line of work (which I easily could not have been), and was in law school just as FHI was spinning up its GovAI internship program.
My guess is that general credentials are probably insufficient without accompanying work that shows you can address the distinctive issues of AGI policy well. So opportunities to try your hand at that kind of work are pretty valuable if you can find them.
That said, opportunities to demonstrate general AI policy capabilities, even on "short-term" issues, are good signals and can lead to a strong career in this area!