Douglas Hofstadter is best known as the author of Gödel, Escher, Bach, a book about artificial intelligence (among other things) that has become something of a cult classic. In a recent interview, he says he's terrified of recent AI progress and expresses beliefs similar to those of many people who focus on AI x-risk.

Hofstadter: The accelerating progress has been so unexpected that it has caught me off guard... not only myself, but many many people. There's a sense of terror akin to an oncoming tsunami that could catch all of humanity off guard. It's not clear whether this could mean the end of humanity in the sense of the systems we've created destroying us, it's not clear if that's the case but it's certainly conceivable. If not, it's also that it just renders humanity a small, almost insignificant phenomenon, compared to something that is far more intelligent and will become as incomprehensible to us as we are to cockroaches.

Interviewer: That's an interesting thought.

Hofstadter: Well I don't think it's interesting. I think it's terrifying. I hate it.

I think this is the first time he's publicly expressed this, and his views seem to have changed recently. Previously he published this, which listed a bunch of silly questions GPT-3 gets wrong and concluded that

There are no concepts behind the GPT-3 scenes; rather, there’s just an unimaginably huge amount of absorbed text upon which it draws to produce answers

though it ended with a gesture toward the fast pace of change and our inability to predict the future. I tried some of his stumpers on GPT-4 at random, and it gets them right (I also remember being convinced when the piece came out that GPT-3 could get them right too with a bit of prompt engineering, though I don't remember the specifics).

I find this a bit emotional because of how much I loved Gödel, Escher, Bach in early college. It was my introduction to "real" math and STEM, which I'd previously disliked and been bad at; because of this book, I majored in computer science. It presented a lot of philosophical puzzles and problems for AI, and gave beautiful, eye-opening answers to them. I think Hofstadter expected us to understand AI much better before we reached this level of capability; he expected more of the kind of understanding his parables and thought experiments could sometimes create.

Now I work professionally on scenarios like the ones he describes in the interview (and feel similarly about them); it's a strange way to meet Hofstadter again.

See also Gwern's post on LessWrong.
