Hello, Effective Altruism Forum! I am Nate Soares, and I will be here to answer your questions tomorrow, Thursday the 11th of June, 15:00-18:00 US Pacific time. You can post questions here in the interim.
Last Monday, I took the reins as executive director of the Machine Intelligence Research Institute (MIRI). MIRI focuses on studying the technical problems of long-term AI safety. I'm happy to chat about what that means, why it's important, why we think we can make a difference now, what the open technical problems are, how we approach them, and some of my plans for the future.
I'm also happy to answer questions about my personal history and how I got here, or about personal growth and mindhacking (a subject I touch upon frequently in my blog, Minding Our Way), or about whatever else piques your curiosity. This is an AMA, after all!
EDIT (15:00): All right, I'm here. Dang there are a lot of questions! Let's get this started :-)
EDIT (18:00): Ok, that's a wrap. Thanks, everyone! Those were great questions.
Let's assume that an AGI is, indeed, created sometime in the future. Let us also assume that MIRI achieves its goal of essentially protecting us from the existential dangers that stem from it. My question may well be quite naive, but how likely is it that a totalitarian "New World Order" would seize control of said AGI and use it for its own purposes, deciding who gets to benefit from it and to what degree?
This is something I get asked a lot myself, and while the question reflects the current state of society, which probably looks nothing like future societies will, I can't seem to properly reject it as a possibility.
I wouldn't reject it as a possibility. MIRI wants AGI to have good consequences for human freedom, happiness, etc., but any big increase in power raises the risk that the power will be abused. Ideally we'd want the AI to resist being misused, but there's a tradeoff between 'making the AI more resistant to misuse by its users (when the AI is right and the user is wrong)' and 'making the AI more amenable to correction by its users (when the AI is wrong and the user is right).'
I wouldn't say it's inevitable either, though. It doesn't appear to me that past technological growth has tended to increase how totalitarian the average state is.