90% of this post is an endorsement, in the strongest terms, of Schelling's Lecture [transcript], and a note on its clear and direct implications for AI coöperation.

When you read arguments about our inability to stop AGI, they sound like the arguments about our inability to stop nuclear war.

Even von Neumann believed war was inevitable for game-theoretic reasons: the first to strike suffered less, and the probability of a strike approached 1 as time went on. How could war not happen, and soon?[1]
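To see why the argument felt airtight, here is a minimal sketch of the arithmetic behind "the probability of a strike approached 1." This is my illustration, not von Neumann's formalism: the constant-annual-risk framing and the example value of p are assumptions.

```latex
% Minimal sketch: a constant annual first-strike probability p > 0
% makes war asymptotically certain. The independent-per-year framing
% and the example value of p are illustrative assumptions.
\[
  \Pr[\text{no war in } n \text{ years}] \;=\; (1-p)^n \;\longrightarrow\; 0
  \quad \text{as } n \to \infty, \text{ for any fixed } p > 0.
\]
% Example: p = 0.02 gives (0.98)^{50} \approx 0.36, so war is more
% likely than not within 50 years even at a "small" annual risk.
```

As I read it, Schelling's lecture is an argument that p is not a constant of nature: norms and bright lines can push it down, year after year.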

AI is expensive

Nuclear weapons are like AI: both are expensive, and this limits the number of actors that can build them. The actors (states, large companies) usually need buy-in, implicit or explicit, and are beholden to shareholders, voters, and regulators. Even in autocratic countries, public opinion matters. And even autocratic countries can exhibit a large degree of coöperation with foreign countries (cf. Russian grain exports).

What is the Schelling Point?

Directly applying Schelling's logic, the way to avoid escalation is to have a bright line: something queryable about a model in the same way you can ask, "is this military nuclear technology?" What is that line for AI? I don't have the answer here.

There's an obvious candidate. Big. All modern models that have made astonishing progress are big. There are fundamental theorems about the computational hardness of even specific AI tasks; optimal play in generalized Go, for example, provably requires exponential time. A highly non-expert observer without direct access to a model can ask, "how big is it?" As long as the answer is truthful, it gives them an idea of how "dangerous" the system is in a way that is hard to game. It might just not be possible to build dangerous AI systems below a certain scale; dangerous capability may only come from large systems.
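To make "how big is it?" concrete, here is a minimal sketch of what a size-based bright line could look like in practice. The function names and example numbers are my assumptions; the 6 × N × D estimate is the standard scaling-law approximation of training compute, and the 1e26 threshold echoes the reporting line in the 2023 US Executive Order on AI.

```python
# Toy sketch of a size-based bright line: estimate training compute
# from two disclosed numbers, then compare against an agreed threshold.
# The ~6 FLOPs per parameter per token rule is the standard scaling-law
# approximation; names, example values, and the exact threshold are
# hypothetical, not an established standard.

def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute as ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens

def crosses_bright_line(n_params: float, n_tokens: float,
                        threshold_flops: float = 1e26) -> bool:
    """A non-expert check: does the disclosed scale exceed the agreed line?"""
    return training_flops(n_params, n_tokens) >= threshold_flops

if __name__ == "__main__":
    # Hypothetical disclosure: a 70B-parameter model trained on 15T tokens.
    flops = training_flops(70e9, 15e12)
    print(f"estimated training compute: {flops:.2e} FLOPs")          # ~6.3e24
    print("crosses bright line:", crosses_bright_line(70e9, 15e12))  # False
```

The appeal of such a line is exactly the Schelling property: it reduces to a couple of numbers that a non-expert inspector can check, with no judgment calls about intent or capability.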

If you feel the urge to comment, it would make me very happy if you proposed other Schelling Points.

Public Opinion

Public opinion was the second driver of Schelling's argument. Good news on that front. How does the average person feel about AI? Bad.

We've got a skewed view: a lot more positive than the average person's, and more skewed than we realize, for much the same reasons that we tend to have more positive views about science, medicine, and technology (well explained in Ivermectin: Much More than You Wanted to Know).

It's hard to get numbers, but Pew is a start. Would it surprise you to know that Americans think self-driving cars are bad for society rather than good, by a 2:1 margin?

The average person is more suspicious of AI than "we" are, by an even larger margin than most of "us" probably think.

TLDR

Maybe this doesn't update your probability, P(AGI happens | AGI is possible but people realize it's bad). It updated mine.

But if there's a chance we can coöperate, and it's worth trying, then Schelling's lecture, and nuclear war, are worth thinking hard about.

  1. ^ https://cs.stanford.edu/people/eroberts/courses/soco/projects/1998-99/game-theory/neumann.html
