Jose Luis Ricon

Hi, I'm the author of Nintil.com (we met at Future Forum :)

An essential rule in argumentation is that the premises have to be more plausible than the conclusion. For many people, foom scenarios, nanotech, etc. make them switch off.


I have this quote:

Here I want to add that the lack of criticism is likely because really engaging with these arguments requires an amount of work that makes it irrational for someone who disagrees to engage. I make a similar analogy here with homeopathy: have you read all the relevant homeopathic literature and peer-reviewed journals before dismissing it as a field? Probably not. You would need some reason to believe that you are going to find evidence in that literature that will change your mind. In the case of AI risk, the materials required to get someone to engage with the nanotech/recursive self-improvement cases should include sci-fi-free cases for AI risk (like the ones I gesture at in this post) and perhaps tangible roadmaps from our current understanding to systems closer to Drexlerian nanotech (like Adam Marblestone's roadmap).

Basically, you can't tell people "Nanotech is super guaranteed to happen, check out this book from Drexler". If they don't already agree with that, they won't read it; the commitment required is too high. Instead, one should start from premises the average person agrees with (speed, memory, strategic planning) and get to "AI risk is worth taking seriously". That is a massive step up from them laughing at it. Then one can argue about timelines and how big the risk is, but first one has to bring them into the conversation, and my arguments accomplish (I hope) exactly that.