
I have been reading Eric Drexler’s writing on the future of AI for more than a decade at this point. I love it, but I also think it can be tricky or frustrating.

More than anyone else I know, Eric seems to tap into a deep vision for how the future of technology may work — and having once tuned into this, I find many other perspectives can feel hollow. (This reminds me of how, once I had enough of a feel for how economies work, I found a lot of science fiction felt hollow, if the world presented made too little sense in terms of what was implied for off-screen variables.)

One cornerstone of Eric’s perspective on AI, as I see it, is a deep rejection of anthropomorphism. People considering current AI systems mostly have no difficulty understanding them as technology rather than as people. But when discussion moves to superintelligence … well, as Eric puts it:

Our expectations rest on biological intuitions. Every intelligence we’ve known arose through evolution, where survival was a precondition for everything else—organisms that failed to compete and preserve themselves left no descendants. Self-preservation wasn’t optional—it was the precondition for everything else. We naturally expect intelligence bundled with intrinsic, foundational drives.

Anyhow, I think there's a lot to get from Eric’s writing — about the shape of automation at scale, the future of AI systems, and the strategic landscape. So I keep on recommending it to people. But I also feel like people keep on not quite knowing what to do with it, or how to integrate it with the rest of their thinking. So I wanted to provide my perspective on what it is and isn’t, and thoughts on how to productively spend time reading. If I can help more people to reinvent versions of Eric’s thinking for themselves, my hope is that they can build on those ideas, and draw out the implications for what the world needs to be doing.

If you’ve not yet had the pleasure of reading Eric’s stuff, his recent writing is available at AI Prospects. His most recent article explains how a lot of his thinking fits together, and may be good to give you a rough orientation (or see below for more of my notes) — but then I’d advise choosing some part that catches your interest, and diving into the linked material. 

Difficulties with Drexler’s writing

Let’s start with the health warnings:

  1. It’s abstract.
  2. It’s dense.
  3. It often implicitly challenges the concepts and frames we use to think about AI.
  4. It shies away from some questions.

These properties aren’t necessarily bad. Abstraction permits density, and density means it’s high value-per-word. Ontological challenge is a lot of the payload. But they do mean that it can be hard work to read and really get value from.

Correspondingly, there are a couple of failure modes to watch for:

  • Perhaps you’ll find your eyes glazing over — you might stop reading, or might finish skimming an article and then realise you don’t really know what it was saying.
  • Perhaps you’ll think it’s saying [claim], which is dumb because [obvious reason][1].

How to read Drexler

Some mathematical texts are dense, and the right way to read them is slowly and carefully — making sure that you have taken the time to understand each sentence and each paragraph before moving on.

I do not recommend the same approach with Eric’s material. A good deal of his content amounts to challenging the ontologies of popular narratives. But ontologies have a lot of supporting structure, and if you read just a part of the challenge, it may not make sense in isolation. Better to start by reading a whole article (or more!), in order to understand the lay of the land.

Once you’ve (approximately) got the whole picture, I think it’s often worth circling back and pondering more deeply. Individual paragraphs or even sentences in many cases are quite idea-dense, and can reward close consideration. I’ve benefited from coming back to some of his articles multiple times over an extended period.

Other moves that seem to me to be promising for deepening your understanding:

  1. Try to understand it more concretely. Consider relevant examples[2], and see how Eric’s ideas apply in those cases, and what you make of them overall.

  2. Try to reconcile apparent tensions. If you feel like Eric is presenting something with some insight, but there’s another model you have which on the face of it has some conflicting insight, see if you can figure out the right way to unify the perspectives — perhaps by limiting the scope of applicability of one of the models.

What Drexler covers

In my view, Eric’s recent writing is mostly doing three things:

1) Mapping the technological trajectory 

What will advanced AI look like in practice? Insights that I’ve got from Eric’s writing here include:

2) Pushing back on anthropomorphism

If you talk to Eric about AI risk, he can seem almost triggered when people discuss “the AI”, presupposing a single unitary agent. One important thread of his writing is trying to convey these intuitions — not that agentic systems are impossible, but that they need not be on the critical path to transformative impacts.

My impression is that Eric’s motivations for pushing on this topic include:

3) Advocating for strategic judo

Rather than advocate directly for “here’s how we handle the big challenges of AI” (which admittedly seems hard!), Eric pursues an argument saying roughly that:

So rather than pushing directly towards good outcomes, Eric wants us to shape the landscape so that the powers-that-be will inevitably push towards good outcomes for us.

The missing topics

There are a lot of important questions that Eric doesn’t say much about. That means that you may need to supply your own models to interface with them; and also that there might be low-hanging fruit in addressing some of these and bringing aspects of Eric’s worldview to bear.

These topics include[4]:

  • Even if there are lots of powerful non-agentic AI systems, what about the circumstances where people would want agents?
  • What should we make of the trend towards very big models so that only a few players can compete? How much should we expect economic concentration at various points in the future?
  • Which of the many different kinds of impact he’s discussing should we expect to happen first?
  • How might a hypercapable world of the type he points to go badly off the rails?
  • What are the branches in the path, and what kinds of action might have leverage over those branches?
  • What kind of technical or policy work would be especially valuable?

Translation and reinvention

I used to feel bullish on other people trying to write up Eric’s ideas for different audiences. Over time, I’ve soured on this — I think what’s needed isn’t so much translating simple insights as people internalizing those insights, and then sharing the fruits.

In practice, this blurs into reinvention. Just as mastering a mathematical proof means comprehending it to the point that you can easily rederive it (rather than just remembering the steps), I think mastering Eric’s ideas is likely to involve a degree of reinventing them for yourself and making them your own. At times, I’ve done this myself[5], and I would be excited for more people to attempt it.

In fact, this would be one of my top recommendations for people trying to add value in AI strategy work. The general playbook might look like:

  1. Take one of Eric’s posts, and read over it carefully
  2. Think through possible implications and/or tensions — potentially starting with one of the “missing topics” listed above, or places where it most seems to be conflicting with another model you have
  3. Write up some notes on what you think
  4. Seek critique from people and LLMs
  5. Iterate through steps 2–4 until you’re happy with where it’s got to

Pieces I’d be especially excited to see explored

Here’s a short (very non-exhaustive) list of questions I have that people might want to bear in mind if they read and think about Eric’s perspectives:

  • What kind of concrete actions would represent steps towards (or away from) a Paretotopian world?
  • What would the kind of “strategic transformation” that Eric discusses look like in practice? Can we outline realistic scenarios?
  • Given the perspectives in AI safety without trusting AI, in what conditions should we still be worried about misalignment? What would be the implications for appropriate policies of different actors?
  • If Eric is right about Large Knowledge Models and latent space, what will be the impacts on model transparency, compared to current chain-of-thought in natural language? What should we be doing now on account of that? (And also, to what extent is he right?)
  • What do our actual choices look like around what to automate first? What would make for good choices?
  1. ^

     When versions of this occur, I think it’s almost always that people are misreading what Eric is saying — perhaps rounding it off into some simpler claim that fits more neatly into their usual ontology. This isn’t to say that Eric is right about everything, just that I think dismissals usually miss the point. (Something similar to this dynamic has, I think, been repeatedly frustrating to Eric, and he wrote a whole article about it.) I am much more excited to hear critiques or dismissals of Drexler from people who appreciate that he is tracking some important dynamics that very few others are.

  2. ^

     Perhaps with LLMs helping you to identify those concrete examples? I’ve not tried this with Eric’s writing in particular, but I have found LLMs often helpful for moving from the abstract to the concrete.

  3. ^

     This isn’t a straight prediction of how he thinks AI systems will be built. Nor is it quite a prescription for how AI systems should be built. His writing is one stage upstream of that — he is trying to help readers to be alive to the option space of what could be built, in order that they can chart better courses.

  4. ^

     He does touch on several of these at times. But they are not his central focus, and I think it’s often hard for readers to take away much on these questions.

  5. ^

     Articles on AI takeoff and nuclear war and especially Decomposing Agency were the result of a bunch of thinking after engaging with Eric’s perspectives. (Although I had the advantage of also talking to him; I think this helped but wasn’t strictly necessary.)
