
[Epistemic status: Shallow dive into research questions, backed by some years of on-and-off thinking about this kind of plan.]

Introduction

There is some chance that civilization will cease to function before we hit an intelligence explosion. If it does, it would be good to preserve existing alignment research for future generations who might rebuild advanced technology, and ideally have safe havens ready for current and future researchers to spend their lives adding to that pool of knowledge.

A collapse might delay capabilities research by many decades, centuries, or longer while allowing basic theoretical alignment research to continue, and so would be a potential Yudkowskian positive model violation for which we should prepare.

Setting this infrastructure up is a massively scalable intervention, and one that should likely be tackled by people who are not already on the researcher career path. It would have been good to get started some years ago given recent events, but now is the second best time to plant a tree.[1]

Preserving alignment knowledge through a global catastrophe

What data do we want to store?

Thankfully, the EleutherAI people are working on a dataset of all alignment research[2]. It's still a WIP[3], and contributions to the scripts that collect it are welcome, so if you're a programmer looking for a shovel-ready way to help with this, consider submitting a PR[4].
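
To give a concrete feel for what those collection scripts involve, here is a minimal sketch (not EleutherAI's actual code; the source list and file layout are placeholder assumptions) that mirrors a set of pages to disk with a checksum manifest for later archiving:

```python
import hashlib
import json
import pathlib

import requests

# Placeholder sources -- the real dataset pulls from many more venues.
SOURCES = [
    "https://www.alignmentforum.org/",
    "https://intelligence.org/research/",
]

out = pathlib.Path("alignment-archive")
out.mkdir(exist_ok=True)

manifest = []
for url in SOURCES:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    digest = hashlib.sha256(resp.content).hexdigest()
    name = f"{digest[:16]}.html"
    (out / name).write_bytes(resp.content)
    manifest.append({"url": url, "sha256": digest, "file": name})

# The manifest lets future readers verify which files survived intact.
(out / "manifest.json").write_text(json.dumps(manifest, indent=2))
```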

How do we want to store it?

My shallow dive into this uncovered these options:

  • We could print it out on paper
    • Lifetime: 500+ years in good conditions (this might depend significantly on paper and ink quality; more research needed). Vacuum sealing with low humidity seems like it would help considerably.
    • Pros: Totally human readable.
  • Microsoft's Project Silica is the longest-lasting option I could find
    • Lifetime: 10000+ years
    • Cons: Would require high levels of technology to read it back. I don't see an option to buy the machines required to write new archives, and I expect them to be very advanced and expensive, so this would be limited to storing pre-collapse research.
  • CDs could be a minimalist option
    • Lifetime: Maybe 50 years if stored in good conditions
    • Pros: Researchers can easily explore the information on computers, while those last
    • Cons: It's very plausible that a severe GCR[5] would set us back far enough that we'd not regain CD-reading technology before the discs decayed, so they aren't a full solution.
  • The Arctic World Archive seems worth including in the portfolio
    • Lifetime: 1000+ years
    • Pros: It's a pretty straightforward case of turning money into archives
    • Cons: Not very accessible in the meantime
  • The DOTS system (a highly stable tape-based storage medium) might be a strong candidate, if it is buyable.[6]
    • Lifetime: 200-2000+ years
    • Pros: Human-readable or digital archives; possibly usable for some time after collapse.

Each has advantages, so some combination of them might be ideal.

Where do we store it?

Having many redundant backups seems advisable, preferably protected by communities which can last centuries, or placed in locations which will not be disturbed for a very long time. Producing "alignment backup kits" to send out, and offering microgrants to people all around the world to place them in secure locations, would achieve this. We'd likely want both basic kits (pre-collapse work only) and advanced kits (capable of adding new archives for a long time post-collapse).
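
One cheap way to make the kits trustworthy across decades of storage is to ship each one with a checksum manifest (as in the collection sketch above) plus a verification script, so whoever inherits a kit can confirm nothing has bit-rotted. A minimal sketch, assuming that manifest layout:

```python
import hashlib
import json
import pathlib

archive = pathlib.Path("alignment-archive")  # hypothetical kit location
manifest = json.loads((archive / "manifest.json").read_text())

# Re-hash every stored file and compare against the recorded digest.
corrupted = [
    entry["file"]
    for entry in manifest
    if hashlib.sha256((archive / entry["file"]).read_bytes()).hexdigest()
    != entry["sha256"]
]

print(f"{len(manifest) - len(corrupted)} of {len(manifest)} files intact")
for name in corrupted:
    print(f"corrupted: {name}")
```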

If you'd like to take on the challenge of preparing these kits, storing an archive, or coordinating things, please join the Alignment After A GCR Discord (AAAG). I'm happy to collaborate and give some seed funding. If you want to help collect and improve the archive files, #accelerating-alignment on EAI is the place to go.

Continuing alignment research after a global catastrophe

It is obviously best if as many people survive the GCR as possible, and supporting the work of organizations like the Alliance to Feed the Earth in Disasters seems extremely valuable. However, a targeted intervention to focus on allowing alignment researchers to continue their work in the wake of a disaster might be an especially cost-effective way to improve the long-term future of humanity.

Evacuation plans

A list of which researchers to prioritize would need to be drawn up.[7] They would need instructions on how to get to the haven, and ideally someone with reliable transport to take them there. In moments of extreme risk, they would be encouraged to move to the haven preemptively (and hopefully temporarily).

Designing havens

The locations would need to be bought, funded, and partially populated before the GCR.[8] I have some ideas about which other subcultures might be good to draw from, with the Authentic Relating community at the top of the list.[9]

The havens would need to be well-stocked to weather the initial crisis and recover after. They should be located in places where farming or fishing could produce a surplus in the long term to allow some of the people living there to spend much of their time making research progress. Being relatively far from centers of population seems beneficial, but close enough to major hubs that transport is practical. There are many considerations, and talking to ALLFED to get their models of how to survive GCRs seems like an obvious first step to plan this.

Avoiding the failure mode of allowing so many people to join that the whole group goes under would be both challenging and necessary. Clear rules would have to be agreed on for who could join.

The culture would need to be set up to be conducive to supporting research in the long term while being mostly self-sufficient; this would be an interesting challenge in community design. People with the skills to produce food and other necessities would need to be part of the team.

Call to action

Even more than archiving, this needs some people to make it their primary project in order for it to happen. That could include you! I would be happy to provide advice, mentorship, connections, and some seed funding to a founder or team who wants to take this project on.[10] Message me here or @A_donor on the Discord.

This project could also benefit from volunteers for various roles. If you or someone you know would like to help by:

  • Searching for locations
  • Potentially moving to a haven early and helping set up
  • Researching questions
  • Putting us in contact with people who might make this work (e.g. people with experience in self-sufficient community building)
  • Doing other tasks to increase the chances that we recover from GCRs with a strong base of alignment theory

then please join the Discord and introduce yourself, specifically indicating that you'd like to help with havens so I know to add you to those channels.

I can fund the very early stages of both projects, but in order to scale them into something really valuable we would need major funders on board. If you are, or have access to, a major funder and want to offer advice or encouragement to apply, that would increase the chances that this goes somewhere.

It's quite likely that I won't post public updates about the havens part of this project even if it's going relatively well, as having lots of attention on it seems net-negative, so don't be surprised if you don't hear anything more.

  1. ^

    "The best time to plant a tree is twenty years ago. The second best time is now." - Quote

  2. ^

    They want to use it to train language models to help with alignment research, but it aims to contain exactly what we'd want.

  3. ^

    Work In Progress

  4. ^

    Pull Request - A way of suggesting changes to a repository using version control, usually used in programming.

  5. ^

    Global Catastrophic Risk - An event which causes massive global disruption, such as a severe pandemic or nuclear war.

  6. ^

    The website is unclear on whether it's immediately available.

  7. ^

    If you're a researcher and want to be on the list, feel free to contact me with your location and I'll keep track of everyone's requests. We might possibly use Alignment EigenKarma as an unbiased metric to prioritize if that exists in time.

  8. ^

    Unless anyone knows of good places which might be joinable already, if you do please message me!

  9. ^

    They are compatible with Rationalist/EA culture, more likely than most to be able to create stable communities, and some of them like the idea of building strong community for the benefit of all of humanity.

  10. ^

    I have a reasonably strong track record as a Mentor/Manager/Mysterious Old Wizard/Funder package deal. If you're enthusiastic and bright don't worry if the task seems overwhelming, I can help you pick up the skills and decompose tasks.



Comments (11)



This is a good idea. And I think it can be extrapolated to preserving relevant records of our civilization / species so eventual successors won't have so much trouble thinking about Fermi paradox and great filters. Idk, maybe a vault on the moon? Mars? (I submitted an entry to the FTX contest on that, btw. Now I realize how this idea first came to my mind: Cixin Liu's Death's End)

[As posted in the Discord]. An MVP of this might be making offline copies of the AI Alignment Forum, EA Forum and LessWrong available using an app like Kiwix, and encouraging EAs to download them. Bonus if they are automatically updated every month or so. Next step for resilience would be burying old phones with copies of the content on them.
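
A sketch of what that might look like with the openzim zimit crawler, run via Docker (the image name and flags here are from memory, so verify against zimit's README before relying on this):

```python
import pathlib
import subprocess

output = pathlib.Path("zim-output").resolve()
output.mkdir(exist_ok=True)

# zimit crawls a site and packages it as a .zim file readable by Kiwix.
# Flag names are assumptions -- check https://github.com/openzim/zimit.
subprocess.run(
    [
        "docker", "run", "-v", f"{output}:/output",
        "ghcr.io/openzim/zimit", "zimit",
        "--url", "https://www.alignmentforum.org/",
        "--name", "alignment-forum",
    ],
    check=True,
)
```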

As an avid user of Kiwix, I'd be very interested in any of those.

Agreed that civilization restart manuals would be good; I'd be happy to have the alignment archives stored alongside those. I'd prefer not to hold up getting an MVP of this much smaller and easier archive in place while waiting for that to come together, though.

Maybe also worth considering stone tablets?

My guess is these are great for longevity, but maybe prohibitively expensive[1] if you want to print out e.g. the entire alignment forum plus other papers.

Could be good for a smaller selected key insights collection, if that exists somewhere?

  1. ^

    Likely reference class is gravestones. I'm getting numbers like "Extra characters are approximately $10 thereafter" and "It costs around £1.95 per letter or character". Even with a bulk discount, that's going to add up.
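
To put rough numbers on "prohibitively expensive", a back-of-envelope using the $10/character quote above (the corpus size is my guess, not a measured figure):

```python
cost_per_char = 10  # USD, per the engraving quote above
full_corpus_chars = 100_000_000  # assumption: full archive is ~100 MB of text
digest_chars = 10_000  # a condensed "key insights" digest

print(f"full corpus: ${cost_per_char * full_corpus_chars:,}")  # $1,000,000,000
print(f"10k digest:  ${cost_per_char * digest_chars:,}")       # $100,000
```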

Now I'm imagining a friendly AGI etching the Textbook From The Future on stone tablets. But it would be an interesting exercise to try and condense the key insights made to date into 1k or 10k characters.

The purpose of preserving alignment research is not to get back to AI as quickly as possible, but to make it more likely that, when we eventually do climb the tech tree again, we are able to align advanced AIs. Even if we have to reinvent a large number of technologies, having alignment research ready represents a (slightly non-standard) form of differential technological development rather than simply speeding up the recovery overall.

Can someone port it to Kiwix (or similar, for offline reading on a phone)? (I'm happy to fund this)
