I'm a coterm (coterminal master's student) in Computer Science at Stanford. I did my undergrad here as well, in Symbolic Systems.
I recently started the 80,000 Hours career planning course.
I'm most compelled by suffering-based ethics, though I still find negative utilitarianism unsatisfactory in a number of edge cases. This makes me less worried about x-risks and, by extension, less longtermist than seems to be the norm.
My shortlist of cause areas is the following:
though I remain uncertain, especially about the following things:
I'm looking for employment once I graduate in December 2022.
I have experience teaching AI, so I can help answer questions about some of the fundamentals.
Does this work if you travel from out of state?
What do you think most people's cutoff would be? My guess is that most people see the two kinds of suffering as qualitatively different, such that no amount of insect suffering is comparable to human suffering.
How did you measure efficacy?
How many AI safety researchers would be enough? 80k emphasizes that only about 300 people work on this full-time, which makes the problem look extremely neglected. How many people would have to be working on it before it no longer counted as neglected?
Why stop at +400? Doesn't the EV of the free bet continue to rise as the odds get longer? Is it just that the longer odds run into the bet caps? Edit: Also, for the free bets, wouldn't you want to make many small ones to lower the variance? (A quick sketch of the EV intuition follows.)
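A minimal sketch of the intuition behind that question, assuming the book's line is fair (no vig) and that a free bet pays winnings only, never the stake back. The function and numbers here are illustrative, not from the original post:

```python
# EV of a free bet at positive American odds, assuming the implied
# probability (no vig) is the true win probability. A free bet pays
# winnings only -- the stake itself is never returned.

def free_bet_ev(american_odds: int, stake: float = 100.0) -> float:
    p_win = 100 / (american_odds + 100)       # implied win probability
    winnings = stake * american_odds / 100    # payout excludes the stake
    return p_win * winnings                   # = stake * odds / (odds + 100)

for odds in (100, 200, 400, 800, 1600):
    print(f"+{odds}: ${free_bet_ev(odds):.2f} EV per $100 free bet")
# +100: $50.00, +200: $66.67, +400: $80.00, +800: $88.89, +1600: $94.12
```

Under these assumptions the EV keeps climbing toward the full $100 face value as the odds lengthen, which is exactly why stopping at +400 seems arbitrary unless bet caps or liquidity get in the way.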