I'm a third-year college student at a top US university studying math and computer science. I'm struggling to decide between pursuing a PhD in AI safety research and working at a quant firm to earn to give (E2G). A third wildcard career path would be a data-informed policy role where I could use my quantitative skills to help policymakers, but I've struggled to find roles like this that are both high-impact and technically interesting (I'd love some help with this!).

I'll be working at a quant trading firm (one of Citadel, Optiver, etc.) next summer as a software engineer, and I currently work in an AI research lab at school, so I'm well positioned to pursue either path. The question is which path is higher impact and will be the most rewarding for me. I'll try to list some pros and cons of the PhD and E2G routes (ignoring data-driven policy roles for now, since I haven't found any of those jobs).

Quant Firm E2G Pros:

  • Potential to donate $1M+ within my first 5-10 years
  • Great work-life balance (<50 hours/week at the firm I'll be joining), perks, location, job security (again, specific to my firm), and an all-around good work environment
  • A guaranteed offer: I've already passed the interviews and finished all the prep work

Quant Firm E2G Cons:

  • Writing code long-term (20+ years) sounds incredibly draining
  • I know E2G is high impact, but it often doesn't feel that way; I'd definitely feel like a sellout

AI PhD Pros:

  • Potentially super high impact, especially because I'm interested in reinforcement learning safety
  • Intellectually interesting; kind of a dream career

AI PhD Cons:

  • Success is not guaranteed: AI PhD admissions are SUPER competitive. I'd probably need to take a gap year and publish more papers before applying, and becoming a professor is even harder.
  • I love the idea of research and I have a lot of research experience, but the day-to-day work can be extremely frustrating; there's a very real risk of burning out during my PhD
  • If I don't end up working in AI safety specifically, there's a risk that my work could have as many negative effects as positive ones

I'd love to hear your thoughts on deciding between these paths, as well as any leads on the wildcard option worth exploring!