
About the EA Archive

The EA Archive is a project to preserve resources related to effective altruism in case of a sub-existential catastrophe such as nuclear war. 

Its more specific, downstream motivating aim is to increase the likelihood that a movement akin to EA (i.e., one that may go by a different name and be essentially discontinuous with the current movement, but share the broad goal of using evidence and reason to do good) survives, reemerges, and/or flourishes without having to re-invent the wheel, so to speak.

It is a work in progress, and some of the subfolders at the referenced Google Drive are already slightly out of date.

Theory of Change

The theory of change is simple, if not very cheerful to describe: if copies of this information exist in many places around the world, on devices owned by many different people, it is more likely that at least one copy will remain accessible after, say, a war that kills most of the world's population.
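The redundancy argument can be made concrete with a toy model. If each stored copy independently survives a catastrophe with some small probability, the chance that at least one copy survives grows quickly with the number of copies. The probabilities below are invented purely for illustration:

```python
# Toy model of the Archive's theory of change: if each stored copy
# independently survives a catastrophe with probability p, the chance
# that at least one of n copies remains is 1 - (1 - p)**n.
# The numbers used here are made up for illustration only.

def survival_probability(n_copies: int, p_per_copy: float) -> float:
    """Probability that at least one of n independent copies survives."""
    return 1 - (1 - p_per_copy) ** n_copies

# Example: even if each copy only has a 5% chance of remaining
# accessible, 100 independently stored copies give better than
# a 99% chance that at least one survives.
```

The independence assumption is doing a lot of work here, which is exactly why geographic spread (see the regions listed below) matters more than raw copy count.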

Structure

As shown in the screenshot, there are three folders. The smallest one, "Main content," contains HTML, PDF, and other static, text-based files. It is by far the most important to download.

If for whatever reason space isn't an issue and you'd like to download the larger folders too, that would be great.

I will post a shortform quick take (at least) when there's been a major enough revision to warrant me asking for people to download a new version.

How you can help

1) Download!

Click here to download the core content (.zip, 2GB)
Browse and download the complete archive (up to 51GB uncompressed)

This project depends on people like you downloading and storing the Archive on a computer or flash drive that you personally have physical access to, especially if you live in any of the following:

  1. Southeast Asia + Pacific (esp. New Zealand)
  2. South and Central Africa
  3. Northern Europe (esp. Iceland)
  4. Latin America, Mexico City and south (esp. Ecuador, Colombia, and Argentina)
  5. Any very rural area, anywhere

If you live in one of the shaded areas (green or blue), I would love to buy you a flash drive to make this less annoying and/or enable you to store copies in multiple locations, so please get in touch via the Google Form, DM, or any other method.

2) Suggest/submit, and provide feedback

Currently, the limiting factor on the Archive's contents is my ability and willingness to identify relevant resources and then scrape or download them (i.e., not the cost or feasibility of storage). If you notice something that ought to be in there but isn't, please use this Google Form to do any of the following...

  1. Let me know what it is, broadly (good) 
  2. Send me a list of urls containing the info (better)
  3. Send me a Google Drive link with the files you'd like added (best)
  4. Provide any general feedback or suggestions

I may have to be somewhat judicious about large video and audio files, but virtually any relevant and appropriate pdf/text/web/spreadsheet content should be fine.[1]

3) Share

Send this post to friends, especially other EAs who do not regularly use the Forum! 

FAQ

How sure are you that this is at all necessary/helpful?

Not super sure! All things considered I think there's like a 70% chance I'd have done at least this much if I had done a lot more research.

Wasn't there a tweet?

Yeah, from an embarrassingly long time ago.

Wasn't there a website?

Yeah, but it proved more trouble than it was worth.

Contact info

  1. ^

Fun fact: the plain text of all EA Forum posts takes up about 100MB (as of a few weeks ago), equivalent to roughly 2 hours of decent-quality audio.
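That comparison checks out with some quick arithmetic (the 112 kbps bitrate is my assumption for "decent quality" audio, not the author's figure):

```python
# Sanity check for the footnote's comparison: at ~112 kbps (a common
# "decent quality" MP3 bitrate -- an assumption for illustration),
# two hours of audio comes to roughly 100 MB of data.

def audio_size_mb(bitrate_kbps: float, hours: float) -> float:
    """Approximate audio file size in megabytes at a given bitrate."""
    bytes_total = (bitrate_kbps * 1000 / 8) * hours * 3600
    return bytes_total / 1e6

# 112 kbps for 2 hours -> about 100.8 MB
```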


Comments


Gil

This might be a dumb question, but shouldn't we be preserving more elementary resources to rebuild a flourishing society? Current EA is kind of only meaningful in a society with sufficient abundant resources to go into nonprofit work. It feels like there are bigger priorities in the case of sub-x-risk.

I’ve definitely thought about this and short answer: depends on who “we” is.

A sort of made-up particular case I was imagining is "New Zealand is fine, everywhere else totally destroyed," because I think it targets the general class of situation most in need of action (I can justify this on its own terms, but I'll leave it for now).

In that world, there's a lot of information that doesn't get lost: everything stored in the laptops and servers/datacenters of New Zealand, everything in all its university libraries, etc. (One big caveat, and the reason I abandoned the website, is that I lost confidence that info physically encoded in, e.g., a cloud server in NZ would be de facto accessible without a lot of the internet's infrastructure physically located elsewhere.)

That is a gigantic amount of info, and seems to pretty clearly satisfy the "general info to rebuild society" thing. FWIW I think this holds if only a medium-sized city were to remain intact; I'm not certain if it's, say, a single town in Northern Canada, and probably not for a tiny fishing village, but in the latter case it's hard to know what a tractable intervention would be.

But what does get lost? Anything niche enough not to be downloaded on a random NZers computer or in a physical book in a library. Not everything I put in the archive, to be sure, but probably most of it.

Also, 21GB of the type of info I think you’re getting at is in the “non EA info for the post apocalypse folder” because why not! :)

[anonymous]

That was my first thought, but I expect many other individuals/institutions have already made large efforts to preserve such info, whereas this is probably the only effort to preserve core EA ideas (at least in one place)? And it looks like the third folder - "Non-EA stuff for the post-apocalypse" - contains at least some of the elementary resources you have in mind here.

But yeah, I'm much more keen to preserve arguments for radical empathy, scout mindset, moral uncertainty, etc. than, say, a write-up of the research behind HLI's charity recommendations. Maybe it would also be good to have an even smaller folder within "Main content (3GB)" with just the core ideas; the "EA Handbook" (39MB) sub-folder could perhaps serve such a purpose in the meantime.

Anyway, cool project! I've downloaded :)

Yeah i guess that makes sense. But uh.... have other institutions actually made large efforts to preserve such info? Which institutions? Which info?

[anonymous]

Huh, maybe not.

Might be worth buying a physical copy of The Knowledge too (I just have).

And if anyone's looking for a big project...

If we take catastrophic risks seriously and want humanity to recover from a devastating shock as far and fast as possible, producing such a guide before it’s too late might be one of the higher-impact projects someone could take on.

Another easy thing you can do, which I did several years ago, is download Kiwix onto your phone, which allows you to save offline versions of references such as Wikipedia, WikiHow, and way, way more. Then also buy a solar-powered or hand-crank USB charger (often built into disaster radios such as this one which I purchased).

For extra credit, store this data on an old phone you no longer use, and keep that and the disaster radio in a Faraday bag.

[anonymous]

All done :-) (already had a solar/crank charger+radio). Thank you!

Can we set up a torrent link for this?

I have only a vague idea what this means but yeah, whatever facilitates access/storage. Is there anything I should do?

I can look into how to set up a torrent link tomorrow and let you know how it goes!

Sorry, I never got around to this. If someone wants to take this up, feel free!

https://www.lesswrong.com/posts/bkfgTSHhm3mqxgTmw/loudly-give-up-don-t-quietly-fade
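For anyone who does take this up: a .torrent file is mostly metadata, chiefly a list of SHA-1 hashes of fixed-size pieces of the archive, which lets every downloader verify each chunk they receive from strangers. A minimal sketch of that piece-hashing step (the 256 KiB piece size is a typical default, not a requirement; real tools like mktorrent or a desktop BitTorrent client handle the full metainfo format):

```python
import hashlib

# Sketch of BitTorrent's piece hashing: the archive is split into
# fixed-size pieces, each piece is SHA-1 hashed, and the concatenated
# 20-byte digests go into the .torrent metadata so peers can verify
# every chunk independently of who sent it.
PIECE_SIZE = 256 * 1024  # a common default piece size (assumption)

def piece_hashes(data: bytes, piece_size: int = PIECE_SIZE) -> bytes:
    """Concatenated 20-byte SHA-1 digests, one per piece of `data`."""
    hashes = b""
    for start in range(0, len(data), piece_size):
        hashes += hashlib.sha1(data[start:start + piece_size]).digest()
    return hashes
```

This verification property is what makes torrents a good fit for an archive like this: copies can spread peer-to-peer without anyone having to trust the intermediate hosts.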

Interesting idea; what is the thought process behind the map?

It's actually been a little while since I made it, but: places most likely both (1) not to be direct targets of a nuclear attack and (2) to be uncorrelated with the fates of the major datacenters currently holding the information.

Note also that the Internet Archive (Wayback Machine) is working on an offline archive (which, if I understand correctly, is intended to be installable as a local server holding a copy of some parts of the web, which you could "load" into a browser and navigate ordinarily).

I think it'd be cool to have a collection of effective altruism-related resources, which might then be picked up by some of the people saving offline copies.

What's the likelihood that even in 'incredible' places there would be electricity? For some reason I always assumed there would basically be no electricity during a major global catastrophe, which is possibly incorrect. But does it make sense to have paper copies too? What's the trade-off here?

Even given no electricity, copies stored physically in e.g. a flash drive or hard drive would persist until electricity could be supplied, I'm almost certain
