Hello Effective Altruism Forum, I am Nate Soares, and I will be here to answer your questions tomorrow, Thursday the 11th of June, 15:00-18:00 US Pacific time. You can post questions here in the interim.
This past Monday, I took the reins as executive director of the Machine Intelligence Research Institute. MIRI focuses on studying technical problems of long-term AI safety. I'm happy to chat about what that means, why it's important, why we think we can make a difference now, what the open technical problems are, how we approach them, and some of my plans for the future.
I'm also happy to answer questions about my personal history and how I got here, or about personal growth and mindhacking (a subject I touch upon frequently in my blog, Minding Our Way), or about whatever else piques your curiosity. This is an AMA, after all!
EDIT (15:00): All right, I'm here. Dang there are a lot of questions! Let's get this started :-)
EDIT (18:00): Ok, that's a wrap. Thanks, everyone! Those were great questions.
I'll take a stab at this question too.
There are two different schools of thought about what the goal of AI as a field is. One is that the goal is to build a machine that can do everything humans can -- possibly including experiencing emotions and other conscious states. On this view, a "full AI" would plausibly be a person, deserving of moral rights like any other.
The more common view within contemporary AI research is that the goal is to build machines that can effectively achieve a variety of practical goals in a variety of environments. Think Nate's Deep Blue example, but generalized: instead of steering arrangements of chess pieces on a board toward some goal state, a "full" AI steers arbitrary arrangements of objects in space toward some goal state. Such an AI might not be conscious or have real preferences; it might have "goals" only in the limited sense that Deep Blue has "goals." This is the kind of AI MIRI has in mind, and the kind we're trying to plan for: a system that can draw inferences from sensor inputs and execute effective plans, but not necessarily one that has any more moral weight than Google's search engine algorithms do.
If it turns out that you do need to make AI algorithms conscious in order to make them effective at scientific and engineering tasks, that would make our task a lot harder, because, yes, we'd have to take into account the AI's moral status when designing it, and not just the impact its actions have on other beings. For now, though, consciousness and intelligent behavior look like different targets, and there are obvious economic reasons why mainstream AI is likely to prioritize "high-quality decision making" over "emulating human consciousness."
A better analogy to MIRI's goal than "we build Hitler and then put him in chains" is "we build a reasonably well-behaved child and teach the child non-Hitler-ish values." But both of those ways of thinking are still overly anthropomorphic. A real-world AI of the "high-quality decision-making" sort may not resemble a human child any more closely than the earliest airplanes resembled a baby bird.
For more information about this, I can recommend Stuart Russell's talk on the future of AI: https://www.youtube.com/watch?v=GYQrNfSmQ0M