Robin Hanson criticises the notion of the Long Reflection in this article.
Some excerpts:
In our world today, many small local choices are often correlated, across both people and time, and across actions, expectations, and desires. Within a few decades, such correlated changes often add up to changes which are so broad and deep that they could only be reversed at an enormous cost, even if they are in principle reversible. Such irreversible change is quite common, and not at all unusual. To instead prevent this sort of change over timescales of centuries or longer would require a global coordination that is vastly stronger and more intrusive than that required to merely prevent a few exceptional and localized existential risks, such as nuclear war, asteroids, or pandemics. Such a global coordination really would deserve the name “world government”.
Furthermore, the effect of preventing all such changes over a long period, allowing only the changes required to support philosophical discussions, would be to have changed society enormously, including changing common attitudes and values regarding change. People would get very used to a static world of value discussion, and many would come to see such a world as proper and even ideal. If any small group could then veto proposals to end this regime, because a strong consensus was required to end it, then there’s a very real possibility that this regime could continue forever.
While it might be possible to slow change in a few limited areas for limited times in order to allow a bit more time to consider especially important future actions, wholesale prevention of practically irreversible change over many centuries seems simply inconsistent with anything like our familiar world.
Hanson also refers to another article, by Felix Stocker, which is critical of the Long Reflection.
One argument for the long reflection that I think has been missed in a lot of this discussion is that it's a proposal for taking Nick Bostrom's Astronomical Waste argument (AWA) seriously. Bostrom argues that it's worth spending millennia to reduce existential risk by a couple of percent. But launching, for example, a superintelligence with the values of humanity in 2025 could itself constitute an existential risk in light of future human values. So the AWA implies that a sufficiently wise and capable society would be prepared to wait millennia before jumping into such an action.
In practice we may never be capable of coordinating well enough to do so, but the theory makes sense.