Thanks for the great post!
Regarding a “MIRI2,” at the Foundational Research Institute our goal is to research strategies for avoiding dystopian futures containing large amounts of suffering. We think that paperclip maximizers would create a lot of suffering.
I think there are good arguments against value-spreading.
I think that suffering-focused altruists should not try to increase existential risk: doing so would be extremely uncooperative, the future may contain large amounts of preventable suffering, and moral uncertainty also counsels against it.
If you’re interested in reducing as much suffering as possible, you might like to get in touch with us at the Foundational Research Institute. Our mission is to reduce risks of astronomical suffering, or "s-risks."
I think that the recent issues surrounding DFS are significant. Luckily, not a huge amount of time has gone into networking and other work in DFS yet.
I'm hopeful that we can explore our personal connections in this area further and get a clearer sense of the next steps without investing a lot of time.
I'm more excited about our recently started work in trading and finance. This is an area with an enormous number of people who have substantial resources.
Hi everyone, I'm the Executive Director at REG.
We have a very busy weekend coming up: a presentation in Cambridge and one in London, followed by meetings with several poker players and finance professionals who are interested in REG.
I hope to reply to many of these comments on Monday. Thanks everyone for your interest, and Michael for writing this great post!
If you're based in Cambridge or London and would like to chat in person this weekend, email me at firstname.lastname@example.org
When I saw "let it go" as a link, there was only one place I was hoping it would take me. I was not disappointed.