Here's a new two-part series on the future of artificial intelligence by Tim Urban of the science blog Wait But Why.

The first part has 22,000 shares on social media (and probably hundreds of thousands of views).

The second and final part was released today.

Although essays by Eliezer, Luke, and Stuart Armstrong have been extremely useful introductions to artificial intelligence, I think this is likely to be the go-to popular introduction to AI risk for the next few years (with Bostrom's Superintelligence an ideal book-length counterpart).



Thanks for sharing this, Ryan. I would have missed it otherwise (obviously not following the right people on social media). Looking forward to reading it! Hopefully see you at another CSER seminar soon.

Yes, see you!

These articles are indeed a great introduction to the issue. However, I would say the better book-length counterpart is James Barrat's Our Final Invention, because it is more accessible than Bostrom's Superintelligence.

Just read it. Previous belief (not rational, just background): SAI is unlikely to happen, but if it does it will probably wipe us all out, so we should try to stop it happening. New suspicion after reading the article: SAI is actually quite likely to happen. Sh*t. And even the positive scenarios look like an awful way to organise society.

How much do people buy these AI expert opinion polls saying that Artificial General Intelligence will happen in the next 30 years or so, and that Artificial Superintelligence will happen in the next 60 years or so?

Has anyone else thought that the people debating AI, especially those championing it as a technology, have a very particular and deviant view of the good life / ultimate moral value?

Interesting, thanks for the link!

What's been the general tone of the comments?

Mostly positive, and pretty shocked, but also a mixture of other responses.