All of Samuel's Comments + Replies

Hello! I really enjoyed your 80,000 Hours interview, and thanks for answering questions!

1 - Do you have any thoughts about the prudential/personal/non-altruistic implications of transformative AI in our lifetimes? 

2 - I find fairness agreements between worldviews unintuitive but also intriguing. Are there any references you'd suggest on fairness agreements besides the OpenPhil cause prioritization update?

Ajeya (3y)
Thanks, I'm glad you enjoyed it!

1. I haven't put a lot of energy into thinking about personal implications, and don't have very worked-out views right now.

2. I don't have a citation off the top of my head for fairness agreements specifically, but they're closely related to "variance normalization" approaches to moral uncertainty, which are described here (that blog post links to a few papers).