
Hi folks. I was reviewing prior intros to AI risk/AI danger/AI catastrophe, and I believe they tend to overcomplicate things in at least one of three ways:

  1. They include too many extraneous details,
  2. They appeal to overly complex analogies, or
  3. They spend much of their time responding to insider debates and come across as shadow-boxing objections.

Additionally, three other weaknesses are common:

4. Often they feature "meta" content prominently in the text, e.g. "this is why I disagree with Yudkowsky" or "here's how my argument differs from other AI risk arguments." I think this makes for a worse reader experience.

5. Often they "sound like science fiction." This was plausibly unavoidable historically, but in the year 2026 it doesn't need to be.

6. Often they rely on too much insider jargon, which makes the articles inaccessible to people who aren't familiar with AI, aren't familiar with the nascent AI safety literature, aren't familiar with rationalist jargon, or all three.

To resolve these problems, I tried my best to write an article that lays out the simplest case for AI catastrophe without making those mistakes: https://linch.substack.com/p/simplest-case-ai-catastrophe 

I did not fully succeed in my goal. In particular, I ended up including some extraneous details anyway (#1), because that's my natural writing style and I couldn't figure out how to make the article entertaining and interesting without them. I also ended up shadow-boxing sometimes (#3). The ideal essay in this format preempts objections so subtly that readers shouldn't even notice, rather than addressing them explicitly. I managed this for a few common objections, but doing it for all of them was beyond my skill level. Finally, my language (#6) had less jargon than many prior offerings, but far from none.

(If I were to rewrite the article in a few months, I might try to have more messaging/word-count discipline and make the arguments more compact.)

Nonetheless I believe that this article is plausibly the best intro to AI risk for at least some people.[1] 

If I'm right, I'd love for the article to be shared! (And if I'm wrong I'd love to know why, and see if there are fixable issues!)

I'm very interested in what people think! Particularly people who are unfamiliar with the AI risk arguments and don't yet buy them, or people who regularly talk to newcomers (e.g. people teaching or TAing college intro AI safety courses).

  1. ^

    In an analogous way to how my Ted Chiang review is arguably the best review of Ted Chiang's writings, or how my intro to stealth is the best intro to stealth technologies for laymen. Fwiw, my personal subjective impression is that my Ted Chiang review is clearly the best Chiang review, while the stealth article and the current AI risk article are much more debatable (many EAs are competent writers).
