Darren McKee

188 karma · Joined June 2022


Great post. I can't help but agree with the broad idea, given that I'm just finishing up a book whose main goal is raising awareness of AI safety among a broader audience: non-technical readers, average citizens, policy makers, etc. Hopefully out in November.

I'm happy your post exists even if I have (minor?) differences on strategy. Currently, I believe the US government sees AI as a consumer item, so they link it to innovation, economic good, and other important things. (Of course, given recent activity, there is some concern about the risks.) As such, I'm advocating for safe innovation with firm rules/regulations that enable it. If those bars can't be met, then we obviously shouldn't have unsafe innovation. I sincerely want good things from advanced AI, but not if it will likely harm everyone.

Thank you. 
I quite like the "we don't have a lot of time" part, both because we'd need to prepare in advance and because making decisions under time pressure is almost always worse.

Noted. I find many are stuck on the 'how'. That said, some polls show two-thirds to three-quarters of people consider it possible that AI might harm humanity, so it isn't entirely clear who needs to hear which arguments/analysis.

Great post!

A and B about 30 years are useful ideas/talking points. Thanks for the reminder/articulation!

I'm definitely aware of that complication, but I don't think that is the best way to achieve broader impact. Uncertainty abounds. If I can get it out in 3 months, I will.

Thanks for sharing this and the others. I read that one, and it was a bit more about the rationality community than the risks. (It's in the list under a different title.)

FYI, I'm working on a book about the risks of AGI/ASI for a general audience, and I hope to get it out within 6 months. It likely won't be as alarmist as your post, but it will try to communicate the key messages, the importance, the risks, and the urgency. Happy to have more help.

Thank you for a great post and for the outreach you are doing. We need more posts and discussions about optimal framing.

I was referring to external credibility, if you are looking for a scientific paper with the key ideas. Secondarily, an online, modular guide is not quite the frame of the book either (although it could possibly be adapted into such a thing in the future).

Interesting points. I'm working on a book that is not quite a solution to your issue but hopefully goes in the same direction.
And I'm now curious to see that memo :)
