Permanent Societal Improvements



Daniel Kokotajlo, Diego Caleiro, Ramana Kumar and I recently discussed the idea of Permanent Societal Improvements - non-Xrisk-related ways of affecting, and hopefully improving, the far future. These are actions we can take now that will have some multiplicative effect on the value of humanity's future. This post is intended as the beginning of a conversation, not the end of a research project, and we eagerly await feedback and more ideas. Please also bear in mind that not everyone agreed with all the ideas, and any mistakes remain my own.


A toy model:

Suppose there is a 5% chance humanity will be destroyed in 2100, and that if we survive this great filter we will go on to colonise the light cone. Assuming this is a ‘good’ colonisation, full of happy, enlightened, virtuous people, it seems that reducing Existential Risk from 5% to 0% would increase the Expected Value of the future by roughly 5.3%. We could compare this to an action that would make the colonised universe 10% better: this would increase the EV of the future by roughly 10%. So improving the future, in this toy example, could be dramatically better than reducing Xrisk.
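The comparison is just arithmetic once the assumptions are fixed; here is a minimal sketch in Python, using only the illustrative numbers from the text (the normalised value of a 'good' colonisation is an arbitrary placeholder):

```python
# Toy model: compare eliminating a 5% extinction risk with making
# the colonised future 10% better. All numbers are illustrative.
p_doom = 0.05   # chance of extinction in 2100
v_good = 1.0    # value of a 'good' colonisation (normalised)

ev_baseline = (1 - p_doom) * v_good          # 0.95
ev_no_xrisk = 1.0 * v_good                   # risk reduced to 0%
ev_improved = (1 - p_doom) * v_good * 1.10   # future made 10% better

gain_xrisk = ev_no_xrisk / ev_baseline - 1    # 1/0.95 - 1, about 5.3%
gain_improve = ev_improved / ev_baseline - 1  # exactly 10%
print(f"{gain_xrisk:.1%} vs {gain_improve:.1%}")  # prints "5.3% vs 10.0%"
```

Note the 5.3% figure is just 0.05/0.95: the value of removing the risk, relative to a baseline that already discounts for it.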


What types of things could be Permanent Societal Improvements?


A major restriction is that they have to be things that would not otherwise be done later. If I invent something that would otherwise have been invented 20 years later, I have only improved the world by (20 years × impact of invention), not (lifespan of humanity × impact of invention).* This is quite a strong restriction on what could count as such permanent improvements.
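The size of this restriction can be made concrete with a hedged sketch; every figure below is an arbitrary placeholder, not an estimate:

```python
# Counterfactual value of a 'mere acceleration': an invention that
# would otherwise have appeared 20 years later anyway.
annual_impact = 1.0            # value added per year (arbitrary units)
years_accelerated = 20         # how much sooner the invention arrives
humanity_lifespan = 1_000_000  # placeholder for years of future use

naive_credit = humanity_lifespan * annual_impact           # 1,000,000
counterfactual_credit = years_accelerated * annual_impact  # 20
print(counterfactual_credit / naive_credit)  # 2e-05: a tiny fraction
```

The point is the ratio: the longer humanity's future, the smaller the share of an invention's impact that acceleration alone can claim.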


Here are a few broad categories we came up with.


Influencing Lock-in

  • It is possible that humanity will end up ‘locked in’ to a certain political state. This could be a Singleton - an agent with complete control, whether an AI or a totalitarian state - or a stable multipolar society, perhaps arising from EM competition.

  • If this is the case, then affecting which political state humanity gets locked into would have a permanent effect on the future.

  • Alternatively, we might affect what the future cares about. Under some types of Singleton, virtually any ethical debate could become a very pressing issue: we want to make sure the right side is preserved and propagated. Maybe it is important to persuade people that animals are morally valuable now so that an AI will care about them (though perhaps CEV obviates the need for this). Or maybe we need to make sure the future Hegemon cares enough about art so the lightcone isn’t deprived of it.

  • Value Lock-in could happen even without a Singleton, given some new technologies or memes. For example, if we invent memetic or technological ways to reinforce existing values (like brainwashing, but more effective), then existing value systems could become significantly more entrenched.


Compounding resource constraints

  • If there is some resource which is going to be a constraint on the growth of moral value, we could affect the future by investing now to reduce the constraint. This is especially true if the resource exhibits compound growth.

  • For example, if total population were to grow at 1% a year indefinitely, then by having some extra children now (say 1% more) we could permanently increase the future population in a multiplicative way. If you subscribe to an aggregative theory of ethics, this could make the future 1% more valuable.

  • If the future will be bounded by the speed of light, launching ships now could help alleviate the volume constraint.

  • Or maybe we could invest in server capacity in readiness for an EM future.
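The population example above can be checked numerically. A minimal sketch, assuming growth really does compound indefinitely rather than hitting a resource cap (if it would hit a cap anyway, the boost is only an acceleration, which the earlier restriction discounts):

```python
# A one-off 1% boost to today's population acts as a permanent
# 1.01x multiplier on every future year, if growth compounds forever.
g = 0.01       # 1% annual growth, as in the text
boost = 0.01   # one-off 1% extra children now

pop, pop_boosted = 1.0, 1.0 + boost
for _ in range(500):        # any horizon gives the same ratio
    pop *= 1 + g
    pop_boosted *= 1 + g

print(pop_boosted / pop)    # stays at ~1.01 at every horizon
```

Because both trajectories are multiplied by the same growth factor each year, the initial 1% advantage never washes out; that is what distinguishes a compounding resource from a one-off gift.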


Moral Progress / Decay

  • If humanity makes moral progress, we might improve the future by accelerating this process.

  • Alternatively, if humanity suffers from value drift, we could improve the future by retarding this decay.

  • The benefit of accelerating moral progress is much smaller if there is an ideal ethics we are converging towards anyway: in that case acceleration only brings forward the date of convergence, so we gain just the interim years of near-perfect ethics rather than a permanent improvement.

  • Conversely, if our values are drifting in such a way that most of the future will be of no value, perhaps due to value fragility, then delaying the decay could dramatically improve the future. If we suffer 1% drift a year, then standing athwart history for a year would improve the total value of the future by roughly 1%.
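The 1% figure follows from treating drift as geometric decay; a sketch, where the drift rate comes from the text and the horizon is an illustrative assumption:

```python
# If values decay 1% a year, pausing the decay for one year leaves
# every later year more valuable by 1/0.99 - 1, roughly 1.01%.
drift = 0.01
horizon = 200  # illustrative number of years of drift

value = (1 - drift) ** horizon                # drift every year
value_paused = (1 - drift) ** (horizon - 1)   # one year of drift avoided
print(value_paused / value - 1)               # ~0.0101, independent of horizon
```

As with the population case, the improvement is multiplicative: one avoided year of decay scales up the entire remaining future, whatever the horizon.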


Original Sin

  • Some people think that the British Empire was permanently ‘tainted’, in some way, by its early endorsement of slavery, and that this moral taint persisted even after it had abolished slavery in most of the world. If this is true, it could be valuable to ensure the future isn’t founded in some way that permanently taints it.

  • Conversely, maybe having some people from the 20th and 21st centuries could be a long-lasting source of pride and joy for future generations, so some cryonics could be considered a permanent improvement. Similar things could be said about historical artifacts, beautiful natural landmarks, etc.


Coordination problems

  • If humanity colonises the stars we may end up fractured by distance, with different colonies unable to communicate. Perhaps when the first ships are sent, a lasting convention could be established that all colonies should send updates about their history back to Earth. This history could be an object of great value, but establishing this convention might be something that could only be done very early on in the colonisation process.

  • Alternatively, we could establish property-right norms to divide up the lightcone and prevent conflict. By establishing a norm now that colonists could claim whatever they wanted if they travelled directly away from Earth, but could not ‘cross-colonise’ into other sectors, we could prevent future wars. This norm would be much harder to establish once Earth was no longer the clear Schelling point for the origin, and once it was clear who the ex-post winners and losers from this policy would be.

  • We could also establish norms that protect biological humans and EMs from Hansonian competition - such as a right to retire.

  • If uploads are not conscious, it might be important to agree on this before EMs massively outnumber biological humans; after that point it would become much harder.


* Ignoring whatever else the future would-be inventor would otherwise do with their resources.