I'm a coterm (coterminal master's student) in Computer Science at Stanford; I did my undergrad here as well, in Symbolic Systems.
I recently started the 80,000 Hours career planning course.
I'm most compelled by suffering-focused ethics, though I still find negative utilitarianism unsatisfactory in a number of edge cases. This makes me less worried about existential risks and, by extension, less longtermist than seems to be the norm.
My shortlist of cause areas is the following, though I remain uncertain, especially about the following things:
I'm looking for employment after I graduate in December 2022.
I have experience teaching AI, so I'm happy to help answer questions about some of the fundamentals.