All of Samuel's Comments + Replies

AMA: Ajeya Cotra, researcher at Open Phil

Hello! I really enjoyed your 80,000 Hours interview, and thanks for answering questions!

1 - Do you have any thoughts about the prudential/personal/non-altruistic implications of transformative AI in our lifetimes? 

2 - I find fairness agreements between worldviews unintuitive but also intriguing. Are there any references you'd suggest on fairness agreements besides the Open Phil cause prioritization update?

Ajeya replied:

Thanks, I'm glad you enjoyed it!

1. I haven't put a lot of energy into thinking about personal implications, and don't have very worked-out views right now.

2. I don't have a citation off the top of my head for fairness agreements specifically, but they're closely related to "variance normalization" approaches to moral uncertainty, which are described here [https://www.lesswrong.com/posts/dX7vNKg4vex5vxWCW/making-decisions-under-moral-uncertainty-1#Normalised_MEC_and_Variance_Voting] (that blog post links to a few papers).