This article expresses a concern about how, despite its appeals, the Long Reflection could go quite badly, because the restrictions on physical progress could undermine our rationality and capability for moral progress. There has been a variety of pieces discussing the Long Reflection (see e.g. those linked here); it is possible that a steelman version of what the authors really meant (or should have meant) would not be subject to this critique. Please consider this a traditional red-team exercise: thinking about what could go wrong if we attempted to implement the plan, including plausible ways it might be mis-implemented, contrary to the intentions of the original author. 

Unless otherwise noted, all quotes are from Will MacAskill’s book What We Owe the Future, pp. 98-102.

What is the Long Reflection?

The Long Reflection is a plan for an extended period of time, after we have successfully pushed existential risk down to very low levels, during which mankind will avoid making any irreversible decisions and instead try to figure out what we should be doing, before then moving on to execute that plan.

“I call it the long reflection, which is you get to a state where existential risks or extinction risks have been reduced to basically zero. It’s also a position of far greater technological power than we have now, such that we have basically vast intelligence compared to what we have now, amazing empirical understanding of the world, and secondly tens of thousands of years to not really do anything with respect to moving to the stars or really trying to actually build civilization in one particular way, but instead just to engage in this research project of what actually is a value. What actually is the meaning of life? And have, maybe it’s 10 billion people, debating and working on these issues for 10,000 years because the importance is just so great. Humanity, or post-humanity, may be around for billions of years. In which case spending a mere 10,000 is actually absolutely nothing.” (source)

I think this ‘figure out’ falls into basically two categories. Partially it will involve thinking, arguing, debating and so on, in the traditional Enlightenment/Academic/Rational style, as we attempt to marshal more evidence and arguments, and evaluate them correctly. This seems like a prima facie reasonable approach to abstract philosophical issues like Wei Dai’s questions. As a philosophy major I appreciate the idea on an instinctive level!

“[A] stable state of the world in which we are safe from calamity and we can reflect on and debate the nature of the good life working out what the most flourishing society would be.”

There will also be less abstract elements, including social experimentation between different groups, and immigration to determine how people best like living according to their revealed preferences, rather than mere cheap talk.

“...increasing cultural and intellectual diversity if possible… we would want to structure things such that, globally, cultural evolution guides us towards morally better views and societies … Fairly free migration would also be helpful. If people emigrate from one society to another, that gives us at least some evidence that the latter society is better for those who migrated there.”

However, some forms of experimentation and contest will not be favoured. In particular, fecundity, economic growth and military prowess seem not to be valued. Space colonisation is vetoed; man must remain in his cradle until he has reached moral maturity:

“That one society has greater fertility than another or exhibits faster economic growth does not imply that that society is morally superior ...

“It would therefore be worth spending many centuries … before spreading across the stars”

It is not entirely clear that economic growth is strictly prohibited during the Long Reflection, though the prohibition on interstellar colonisation presumably imposes some ultimately binding resource constraints. However, I think fairly harsh anti-growth attitudes are a natural interpretation of the Long Reflection, and one that might be chosen by future generations absent pushback, so that scenario is what I am red-teaming.

Truth-seeking requires Grounding in Reality

My concern is that these constraints will significantly separate our deliberations from reality. Historically, progress has often been driven by necessity. Primitive tribes could only become so self-destructive before they lost the ability to hunt effectively. Later, the pressures of war rewarded groups that could understand the world, selecting against those who turned inwards. In peacetime, commerce rewards firms and individuals who understand how the world works, and how best to satisfy people’s desires with the resources available to us. Artists often produce better work when working within constraints, rather than being given a totally free remit and a blank canvas of arbitrary pixel manipulation. The world provides information, incentivises using that information rationally, and selects against those who do not.

Removing these constraints seems like it could have significantly negative effects on our ability to seek the truth. Sakoku-era Japan may have shut out the world, but to my knowledge it did not take advantage of its isolation to produce any great advance.

The Long Reflection’s primary concern is with the discovery of moral truths, so you could argue that these sorts of processes are not helpful. Competition and challenge result in striving for physical mastery, but not moral truths, because of the is-ought gap. Perhaps the orthogonality thesis means we can achieve arbitrary moral progress in an arbitrarily backward physical environment.

A response open to some metaethical views is that at least some moral truths are closely entangled with empirical facts - that things like the value of autonomy, or of love, are inextricably related to what we know about the consequences of giving people freedom to make choices or of partaking in loving relationships.

More compelling, I think, is the response that reasoning about the physical world trains and rewards beneficial habits of careful thought - open-mindedness to new arguments, the ability to follow chains of logic, and so on - that can then be usefully deployed in moral philosophy. Even if the engineers aren’t the ones doing philosophy, engineering raises the status of logical thinking and ensures the required background is readily available.

One way of thinking about ideas is Dawkins’s meme theory, according to which ideas, via their human hosts, undergo reproduction and mutation, and hence natural selection for memetically ‘fitter’ ideas. Memetic fitness can have many components; for example, memes that make their adherents more likely to survive and procreate will be fitter, all else equal, assuming they are at least somewhat hereditary. But this effect would be significantly reduced in the Long Reflection, with little (no?) war, population caps, and no space colonisation. Deliberate selection by humans attempting to rationally choose memes - the objective of the Long Reflection - would remain, and that is a positive. But irrational components of memetic spread would also remain: memes that were better at ‘hacking’ human psychology and sociology to spread themselves in a viral way. These methods include being fun to believe, signalling some desirable property, coordinating demands for resources for fellow believers, and stigmatising non-believers. My concern is that, absent the clear eye and sharpened power of war, or the animating contest of freedom and progress, these arational drivers of memetic spread will dominate over the rational.
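The dynamic I have in mind can be sketched as a toy replicator model (my own illustration, not anything from the book; all numbers are invented). Suppose each meme’s growth rate combines a survival-linked component and a virality component, and model the Long Reflection crudely by switching survival selection off:

```python
# Toy replicator model (invented numbers, purely illustrative): a meme's
# fitness combines a survival-linked component and a virality component.
# The Long Reflection is modelled crudely by setting survival selection
# (the `s` weight) to zero, leaving only virality to drive selection.

def meme_shares(memes, s, v, steps=100):
    """Replicator-style dynamics: each meme's population share grows in
    proportion to its fitness relative to the population average."""
    shares = {name: 1 / len(memes) for name in memes}
    for _ in range(steps):
        fitness = {name: s * surv + v * viral
                   for name, (surv, viral) in memes.items()}
        avg = sum(shares[n] * fitness[n] for n in memes)
        shares = {n: shares[n] * (1 + fitness[n] - avg) for n in memes}
        total = sum(shares.values())                 # renormalise to 1
        shares = {n: x / total for n, x in shares.items()}
    return shares

# (survival_benefit, virality) pairs - invented for illustration only
memes = {
    "accurate": (0.9, 0.2),  # helps its hosts thrive, spreads slowly
    "viral":    (0.1, 0.8),  # little survival value, psychologically catchy
}

normal = meme_shares(memes, s=1.0, v=1.0)      # survival selection active
reflection = meme_shares(memes, s=0.0, v=1.0)  # survival selection removed
```

With survival selection active the accurate meme comes to dominate; with it switched off, the catchy meme takes over instead, despite being no more true than before. Obviously real memetic dynamics are vastly messier; the sketch only shows that removing one selection pressure changes which memes win.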

A telling example of this, I think, is the rise of largely non-truth-orientated ‘woke’ memes in recent years. These seem to have become much more entrenched in universities than in private business. Even though the former is ostensibly dedicated to the impartial pursuit of truth, the difficulty in objectively determining success - and the lack of the inherent negative feedback of P&L - has left the academy much more vulnerable. In business, though not perfect, most investors and employers care dramatically more about the productivity of their employees than their beliefs, and few firms are willing to pay significantly higher prices to patronise right-thinking suppliers. In contrast, in academia the opinion of your peers is all important, and they will suffer few if any negative consequences for bias against you in hiring, publication, promotion or dismissal.

Additionally, I think it is a lot more difficult than many people realise to just hit ‘pause’ on economic growth. Growth allows the majority of people to experience progression and advancement over their lives; in its absence, society will become essentially zero-sum. In societies where collaboration for mutual profit is impossible, and output is divorced from reward, efforts are reallocated away from socially beneficial production into political manoeuvring for more resources. Moral appeals are often a valuable tool in such conflicts, but this form of competition does not promote honest truth-seeking moral reflection: it selects for moral propaganda, whose conclusion - that the author is morally deserving - was written in advance.

There are some protections against this in the Long Reflection. Some of the dark arts will be prohibited as unconducive to truth-seeking.

"It seems that techniques for duping people - lying, bullshitting, and brainwashing - should be discouraged, and should be especially off limits for people in positions of power, such as those in political office."

Here I think a great deal depends on how exhaustive this index artium prohibitorum was intended to be. When I think about the issues afflicting universities, explicit lying seems less of a problem than softer failures like p-hacking and social desirability bias. An insidious memeplex doesn’t need its adherents to explicitly lie if it can make people mentally flinch before pursuing some thought or research that they know might result in social sanction, or keep re-running their analysis until they get the results they know are correct. When we think about the epistemic standards we expect in EA or on LW, merely not actively lying is a very low bar. Beyond this, we expect people to exhibit a scout mindset, to use the principle of charity, to welcome dissent, and to undergo pre-mortems and red-teaming; if the entirety of humanity is going to be dedicated to philosophical inquiry for hundreds of years, I would expect at least as high a standard.

And if moral progress turns out to be impossible? I think I’d prefer we at least made physical progress, while the anchor of competition and survival prevents too much arbitrary value drift over time (though perhaps not!).

Does immigration address this?

Immigration, and especially emigration, could potentially provide such a check during the Long Reflection, albeit constrained by the high costs involved in moving. Historically, people’s ability to flee to the hills has been a constraint on the ability of empires to tyrannise their populations; it is no surprise that communist regimes have had to keep their people in at gunpoint. It is definitely correct that people’s decisions about where to move are a highly credible signal of which societies seem better to them.

However, I think this is unsatisfactory, for two reasons.

Firstly, it does not deal with parasitism. If one society is very effective at begging or extorting resources from others, it could appear to be a quite pleasant place to live. One example of this is anti-natalism. If you have two societies, one with a high birth rate whose people then emigrate to the other, very slightly happier society, which does not produce children, a net-immigration-flow metric will judge the latter to be better, even though it could not exist without the former.
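The failure of the metric here can be made concrete with a minimal toy calculation (my own illustration; every number is invented): society A has a high birth rate and net emigration, while society B has no births and absorbs A’s emigrants.

```python
# Toy calculation (invented numbers) of the anti-natalism example:
# society A produces children and loses people; society B produces no
# children and absorbs A's emigrants.

society_a = {"births": 100, "inflow": 0, "outflow": 100}
society_b = {"births": 0, "inflow": 100, "outflow": 0}

def net_migration_score(society):
    """Naive desirability metric: net immigration flow."""
    return society["inflow"] - society["outflow"]

score_a = net_migration_score(society_a)  # -100
score_b = net_migration_score(society_b)  # +100

# The metric ranks B above A, yet B's entire inflow consists of people
# born in A: without A's fecundity, B would have no population at all.
```

A better metric would have to account for where each migrant came from, not just the raw flow; the naive version systematically rewards the parasitic society.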

Secondly, it is not entirely clear how much immigration there will be during the Long Reflection, because we also have the constraint that no one country (or, presumably, closely allied group of countries) will be allowed to become too powerful. Since immigration is a method for increasing manpower and hence military power, under the Long Reflection we may apparently need to prevent any one country from getting too many people:

“At the same time, we would want to prevent any one culture from becoming so powerful that it could conquer all other cultures through economic or military domination. Potentially, this could require international norms or laws preventing any single country from becoming too populous, just as antitrust regulations prevent any single company from dominating a market and exerting monopoly power.”

The US, especially in concert with close allies like the UK and Canada, is already extremely powerful, and it seems not entirely implausible to me that they could conquer the entire world today, if there was the will. So if we were to enter the Long Reflection tomorrow, it seems quite possible that the US might have to impose an immediate moratorium on further immigration, and perhaps pursue deportations.

If this is the case, however, immigration ceases to act as a ‘reality check’ showing us people’s revealed preferences for where to live, because no matter how much more desirable the US (plus similar countries) is than elsewhere, no one will be immigrating.

There are many other perverse consequences of maximum population rules - e.g. the potential for a ‘bank run’ where everyone races to immigrate as fast as possible if they think the cap will be reached, and the odious question of how to enforce a population maximum without atrocities if the ‘problem’ is reproduction rather than immigration - but here we are primarily concerned with whether the Long Reflection will produce the promised omelette, not how many eggs get cracked along the way.

What’s especially strange about this is that the motivating example - that this is “just as” antitrust laws deal with monopolies - is incorrect. Antitrust law does not make it illegal to be a monopoly (US, EU). What it does do is prohibit some ‘unfair’ methods of attempting to become a monopoly, and possibly some other types of conduct if you happen to become one. However, if you become a monopolist through legitimate means, like having much better technology or simply operating much more efficiently (or, sadly, by getting the government to grant you a monopoly), this is perfectly legal. If we applied this analogy to the Long Reflection, it would suggest we should prohibit societies from gaining population through illegitimate means, like slavery, but accept it if it occurred through legitimate methods, like immigration or natural procreation.


Investing a lot of effort into making sure we are not making serious moral mistakes makes sense. However, while slowing growth and imposing stasis might give us more time to think, they could also make us worse at thinking. It seems likely to me that the Long Reflection should be contemporaneous with economic advance and intergalactic colonisation, not sequentially prior.




Great post! The section "Truth-seeking requires grounding in reality" describes some points I've previously wanted to make but didn't have good examples for.

I discuss a few similar issues in my post The Moral Uncertainty Rabbit Hole, Fully Excavated. Instead of discussing "the Long Reflection" as MacAskill described it, my post there discusses the more general class of "reflection procedures" (could be society-wide or just for a given individual) where we hit pause and think about values for a long time. The post points out how reflection procedures change the way we reflect and how this requires us to make judgment calls about which of these changes are intended or okay. I also discuss "pitfalls" of reflection procedures (things that are unwanted and avoidable at least in theory, but might make reflection somewhat risky in practice). 

One consideration I discovered seems particularly underappreciated among EAs in the sense that I haven't seen it discussed anywhere. I've called it "lack of morally urgent causes." In short, I think high levels of altruistic dedication and people forming self-identities as altruists dedicated to a particular cause often come from a kind of desperation about the state of the world (see Nate Soares' "On Caring"). During the Long Reflection (or other "reflection procedures" more generally), the state of the world is assumed to be okay/good/taken care of. So, any serious problems are assumed to be mostly taken care of or put on hold. What results is a "lack of morally urgent causes" – which will likely affect the values and self-identities that people who are reflecting might form. That is, compared to someone who forms their values prior to the moral reflection, people in the moral reflection may be less likely to adopt identities that were strongly shaped by ongoing "morally urgent causes." For better or worse. This is neither good nor bad per se – it just seems like something to be aware of. 

Here's a longer excerpt from the post where I provide a non-exhaustive list of factors to consider for setting up reflection environments and choosing reflection strategies: 

Reflection strategies require judgment calls

In this section, I’ll elaborate on how specifying reflection strategies requires many judgment calls. The following are some dimensions alongside which judgment calls are required (many of these categories are interrelated/overlapping):

  • Social distortions: Spending years alone in the reflection environment could induce loneliness and boredom, which may have undesired effects on the reflection outcome. You could add other people to the reflection environment, but who you add is likely to influence your reflection (e.g., because of social signaling or via the added sympathy you may experience for the values of loved ones).
  • Transformative changes: Faced with questions like whether to augment your reasoning or capacity to experience things, there’s always the question “Would I still trust the judgment of this newly created version of myself?”
  • Distortions from (lack of) competition: As Wei Dai points out in this Lesswrong comment: “Current human deliberation and discourse are strongly tied up with a kind of resource gathering and competition.” By competition, he means things like “the need to signal intelligence, loyalty, wealth, or other ‘positive’ attributes.” Within some reflection procedures (and possibly depending on your reflection strategy), you may not have much of an incentive to compete. On the one hand, a lack of competition or status considerations could lead to “purer” or more careful reflection. On the other hand, perhaps competition functions as a safeguard, preventing people from adopting values where they cannot summon sufficient motivation under everyday circumstances. Without competition, people’s values could become decoupled from what ordinarily motivates them and more susceptible to idiosyncratic influences, perhaps becoming more extreme.
  • Lack of morally urgent causes: In the blogpost On Caring, Nate Soares writes: “It's not enough to think you should change the world — you also need the sort of desperation that comes from realizing that you would dedicate your entire life to solving the world's 100th biggest problem if you could, but you can't, because there are 99 bigger problems you have to address first.”
    In that passage, Soares points out that desperation can strongly motivate why some people develop an identity around effective altruism. Interestingly enough, in some reflection environments (including “My favorite thinking environment”), the outside world is on pause. As a result, the phenomenology of “desperation” that Soares described would be out of place. If you suffered from poverty, illnesses, or abuse, these hardships are no longer an issue. Also, there are no other people to lift out of poverty and no factory farms to shut down. You’re no longer in a race against time to prevent bad things from happening, seeking friends and allies while trying to defend your cause against corrosion from influence seekers. This constitutes a massive change in your “situation in the world.” Without morally urgent causes, you arguably become less likely to go all-out by adopting an identity around solving a class of problems you’d deem urgent in the real world but which don’t appear pressing inside the reflection procedure. Reflection inside the reflection procedure may feel more like writing that novel you’ve always wanted to write – it has less the feel of a “mission” and more of “doing justice to your long-term dream.”[11]
  • Ordering effects: The order in which you learn new considerations can influence your reflection outcome. (See page 7 in this paper. Consider a model of internal deliberation where your attachment to moral principles strengthens whenever you reach reflective equilibrium given everything you already know/endorse.)
  • Persuasion and framing effects: Even with an AI assistant designed to give you “value-neutral” advice, there will be free parameters in the AI’s reasoning that affect its guidance and how it words things. Framing effects may also play a role when interacting with other humans (e.g., epistemic peers, expert philosophers, friends, and loved ones).

Pitfalls of reflection procedures

There are also pitfalls to avoid when picking a reflection strategy. The failure modes I list below are avoidable in theory,[12] but they could be difficult to avoid in practice:

  • Going off the rails: Moral reflection environments could be unintentionally alienating (enormous option space; time spent reflecting could be unusually long). Failure modes related to the strangeness of the moral reflection environment include existential breakdown and impulsively deciding to lock in specific values to be done with it.
  • Issues with motivation and compliance: When you set up experiments in virtual reality, the people in them (including copies of you) may not always want to play along.
  • Value attacks: Attackers could simulate people’s reflection environments in the hope of influencing their reflection outcomes.
  • Addiction traps: Superstimuli in the reflection environment could cause you to lose track of your goals. For instance, imagine you started asking your AI assistant for an experiment in virtual reality to learn about pleasure-pain tradeoffs or different types of pleasures. Then, next thing you know, you’ve spent centuries in pleasure simulations and have forgotten many of your lofty ideals.
  • Unfairly persuasive arguments: Some arguments may appeal to people because they exploit design features of our minds rather than because they tell us what humans truly want. Reflection procedures with argument search (e.g., asking the AI assistant for arguments that are persuasive to lots of people) could run into these unfairly compelling arguments. For illustration, imagine a story like “Atlas Shrugged” but highly persuasive to most people. We can also think of “arguments” as sequences of experiences: Inspired by the Narnia story, perhaps there exists a sensation of eating a piece of candy so delicious that many people become willing to sell out all their other values for eating more of it. Internally, this may feel like becoming convinced of some candy-focused morality, but looking at it from the outside, we’ll feel like there’s something problematic about how the moral update came about.
  • Subtle pressures exerted by AI assistants: AI assistants trained to be “maximally helpful in a value-neutral fashion” may not be fully neutral, after all. (Complete) value-neutrality may be an illusory notion, and if the AI assistants mistakenly think they know our values better than we do, their advice could lead us astray. (See Wei Dai’s comments in this thread for more discussion and analysis.)