I'm a coterm (coterminal master's student) in Computer Science at Stanford. I did my undergrad here as well, in Symbolic Systems.
I recently started the 80,000 Hours career planning course.
I'm most compelled by suffering-based ethics, though I still find negative utilitarianism unsatisfactory in a number of edge cases. This makes me less worried about x-risks, and by extension less longtermist, than seems to be the norm.
My shortlist of cause areas is the following:
though I remain uncertain, especially about the following:
I'm looking for employment after I graduate in December 2022.
I have experience teaching AI, so I can help answer questions about some of the fundamentals.
What do you think most people's cutoff would be? My guess is that most people see insect and human suffering as qualitatively different, such that no amount of insect suffering is comparable to human suffering.
How many AI safety researchers would be enough? 80k emphasizes that only 300 people are working on this full-time, which makes the problem extremely neglected. How many people would have to be working on it before it would no longer be considered neglected?
Do you think there's some number of people treated with psychotherapy that would be "worth" the death of one child?