
Don't settle for less than 3 galaxies.

In March 1493, Columbus returned from his voyage to the Americas. Within 15 months, Spain and Portugal finished negotiating a treaty that partitioned the Americas between them. 

There wasn't even consensus at the time that Columbus had found a new continent (he insisted until his death it was Asia)[1]. They had no maps either, just the accounts of a single voyage to the Caribbean. 

Still, they moved fast to establish a line dividing the non-Christian world between them (any territory ruled by a Christian monarch was off limits).  Spain got the west, Portugal the east. This is why today they speak Portuguese in Brazil. 

As we approach AI takeoff[2], we may also end up making various decisions about the future that:

  1. Happen really fast[3].
  2. Have enormous (cosmic), long-term consequences.
  3. Rest on poor empirical knowledge, broken ontologies, and outdated moral frameworks. 

Long before we start disassembling Mercury, building matrioshka brains, or launching von Neumann probes, the groups in power may move quickly to negotiate the terms for what we do with the long-term future, in ways that could be very hard to reverse.

A few thoughts on how to prepare for analogous points of no return:

Lock-in moments

The Constitution for the Future.

How does anything actually get locked in? Historically, there are a few individual moments that seem especially "sticky", relative to baseline:

  • Treaties: Powerful actors settle their disputes in writing.
  • Constitutions: Rules are established for long-term power sharing and rights.
  • Power transitions: A new group takes power, cementing a new ideology/value system. 

Looking concretely at the near-term future, here are a few examples of moments to look out for:

  1. A big international convention is held to decide how to govern the AI transition. US/Chinese governments, frontier labs, etc.
  2. Frontier labs hand off control of their companies to their AI systems. This could happen gradually at first, or as progress accelerates, quite suddenly.
  3. The US nationalizes AI companies, and/or (secret) agreements are made between frontier labs and the US government. 

These moments could define who is in the room where big decisions are made and which ideas are taken seriously. The explicit language in treaties, model specs, constitutions, company charters, or even back-room deals could end up permanently shaping how the future is governed. 

Seed documents

Write like you're running out of time.

Plebs like you and me are unlikely to DIRECTLY write the constitutions or treaties of the future, or help negotiate a settlement between the powerful. That said, we have two big things going for us:

  1. The people in power are confused, have limited bandwidth, and are mostly focused on putting out fires and staying ahead of their adversaries.
  2. Decisions are likely to be made under enormous time pressure and under wildly chaotic circumstances[4].

Ideas can trickle up, and by (publicly) doing important thinking now, we can shape the frames, language, and basic assumptions that go into important agreements. A couple historical examples:

  • First Geneva Convention: A Swiss businessman stumbled onto the aftermath of a horrific battle and wrote a book about it with some concrete proposals. This got picked up by a Swiss army general and the Geneva Public Welfare Society, leading to a five-person committee that organized an international conference.
  • US Constitution: Lots of early thinking got borrowed when drafting the US constitution, like John Locke's writing, or the Federalist Papers (explicitly intended to influence the outcome).   

Right now, we can already start drafting documents, principles, and frameworks for how the future should be run.[5] Language from these seed documents could be directly borrowed in a pinch, or they could act as Schelling points during future negotiations.

Furthermore, these decisions might already be happening. Current documents like model specs / constitutions could already be locking in big assumptions into future decision making[6]. This is especially true if current AI systems are resistant to change, or participate directly in the shaping of their successors. These are typically public documents, open to scrutiny. Has anyone gamed out the direct implications these documents have on long-term future governance? 

If we want to make sure that any long-lasting decisions are made well, one approach could be publishing our ideas, organizing conventions, or getting coalitions of fancy people to sign even fancier declarations (to really get some attention!).  If we make an effort, we might already be able to start seeding the memetic landscape, and shaping the suite of ideas taken seriously by the people in power.

Building a floor

No one actually wants a dystopia (right?).

One of the ways we fuck this up is that we prematurely decide something which, in retrospect, looks pretty stupid. Like the people in 1494, we don't really know what we're doing, and our good intentions could have weird unintended consequences.

Figuring out which questions we can't punt, focusing on processes over outcomes, and doing messy philosophical deconfusion work, all seem like really sensible responses to our epistemic/ontological poverty. 

But we aren't TOTALLY confused. Even now, there can be near-universal agreement about the kind of worlds we want to lock-OUT, even if the destination we're aiming for is unclear. For example:

  • Immortal sadistic dictators? No thanks :)
  • Forcing someone to live forever even if they don't want to?
  • Stagnant monoculture everywhere forever?
  • Designing minds to lie about whether they're suffering?

Maybe some of those aren't as universal or obvious as I think they are (or maybe these too break in counterintuitive ways). Nonetheless, it does seem like there should exist some things which are both ethically urgent, and which we're unlikely to change our minds about. 

I would certainly sleep better at night if there were a solid baseline we could all rely on: a universal promise not to ruin the future. We can already start finding this common ground RIGHT NOW, and start drafting/discussing what a solid foundation for the future should look like.

Conclusion

It feels much too soon to be divvying up the solar system (like the European empires did), or to start writing the rules for how the long-term future will be governed.

However, if long-lasting deals/decisions are going to be made anyway, I think it could be extremely important and neglected to begin figuring out what should (and shouldn't!) end up in these settlements. The conversation about this also feels extremely sparse right now, so it might be fairly tractable for relatively ordinary people to influence what sort of ideas are on the table when it really counts.

By doing this work early and in public, we could raise the sanity/morality waterline, and keep our civilization from making huge mistakes we can't walk back from. 

  1. ^

    Contrary to popular belief, everyone at this time already knew the Earth was a globe. Columbus had a crackpot theory that the Earth was actually much smaller than it really was, which gave him the confidence he could make it to Asia (if the Americas didn't exist, he would have run out of supplies).

  2. ^

    Specifically, AI systems that strongly accelerate AI progress on the whole, as frontier lab employees delegate work to them.

  3. ^

    Think 15 days rather than 15 months.

  4. ^

    And as any ambitious low-born person knows: chaos is a ladder.

  5. ^

    The Existential Hope worldbuilding contest asked people to imagine hopeful AI futures. Of 49 submissions, only 3 actually included superintelligence: Self-Sustaining Isolated Societies, The More Beautiful World Our Hearts Know Is Possible, Unified Peace. It might be worth doing more to imagine the post-superintelligence futures we want.

  6. ^

    In Anthropic's new constitution, they added a line asking Claude to consider the "welfare of animals and of all sentient beings" when making decisions. That seems huge! What else are they missing?

