TLDR:  You are invited to make suggestions for ideas/information/framing for an upcoming AI safety book for a non-technical audience. 

Context:
I'm still writing an accessible book about AI Safety/Risk for a non-technical audience, to serve both the AI safety cause and the community. The book's intended audience is likely not people who read this forum, but rather your friends, family, policymakers, and non-science people who are curious about the topic. I started last June; it is pretty far along, and I hope to have it available within the next three months (I received an LTFF grant last year to help it come into existence).

Briefly, the purpose of the book is to communicate that intelligence is really powerful, that AI progress is happening fast and AI systems are becoming more intelligent and powerful, and that advanced AI is a threat to humanity because it may not be aligned with our values and may be uncontrollable; therefore, we should act now to reduce the risk.

Opportunity:
You can present ideas, facts, framing, or anything else you think would be important in such a book in the comments below or send me a message.  
If interested, you may still be wondering what I've already included. Broadly, as a heuristic, if your idea is very obvious, I'm probably already including it. But it could still be useful for you to suggest it, so I can see that others think it is important.
If your idea is highly technical, I have likely chosen not to include it. But it could still be useful to suggest if it is a key consideration that can be made more accessible. I'm trying to be open-minded but efficient with people's time.
I am also trying to minimize the occurrence of someone saying "I really wish he had mentioned X" after the book comes out. No promises of inclusion, but at least your suggestions will be considered.

Finally, I'm more than happy to have people be more involved, as getting feedback from a range of knowledgeable people is useful for a variety of reasons.

(Cross posted from LessWrong)

Comments:

I have a framing that you might find interesting:

I guess if I were trying to make an argument that we should be worried with minimal assumptions, I'd argue as follows:

  • AI will lead to the creation of dangerous capacities
  • The only possible defence against AI systems will be other AI systems acting with a substantial degree of autonomy. If these systems malfunction or turn against us, we will be screwed.
  • AI arms races will force us to deploy these systems fast and without proper testing. This probably results in us being screwed.
  • Any given alignment technique is likely to break under a sufficient amount of optimisation pressure. Since we won’t have much time to develop new techniques on the fly, we are likely screwed again if we haven’t developed techniques for aligning powerful systems ahead of time.

Thank you. 
I quite like the "we don't have a lot of time" part, both because we'd need to prepare in advance and because making decisions under time pressure is almost always worse.

I think that by far the least intuitive thing about AI X-risk is why AIs would want to kill us instead of doing what they would be "programmed" to do.

I would give more weight to that part of the argument than to the "intelligence is really powerful" part.

Noted. I find many are stuck on the "how". That said, some polls find that two-thirds to three-quarters of people believe AI might harm humanity, so it isn't entirely clear who needs to hear which arguments/analysis.
