AMA: Ajeya Cotra, researcher at Open Phil

Hello! I really enjoyed your 80,000 Hours interview, and thanks for answering questions!

1 - Do you have any thoughts about the prudential/personal/non-altruistic implications of transformative AI in our lifetimes? 

2 - I find fairness agreements between worldviews unintuitive but also intriguing. Are there any references you'd suggest on fairness agreements besides the Open Phil cause prioritization update?