Robin Hanson criticises the notion of the Long Reflection in this article.

Some excerpts:
 

In our world today, many small local choices are often correlated, across both people and time, and across actions, expectations, and desires. Within a few decades, such correlated changes often add up to changes which are so broad and deep that they could only be reversed at an enormous cost, even if they are in principle reversible. Such irreversible change is quite common, and not at all unusual. To instead prevent this sort of change over timescales of centuries or longer would require a global coordination that is vastly stronger and more intrusive than that required to merely prevent a few exceptional and localized existential risks, such as nuclear war, asteroids, or pandemics. Such a global coordination really would deserve the name “world government”.

Furthermore, the effect of preventing all such changes over a long period, allowing only the changes required to support philosophical discussions, would be to have changed society enormously, including changing common attitudes and values regarding change. People would get very used to a static world of value discussion, and many would come to see such a world as proper and even ideal. If any small group could then veto proposals to end this regime, because a strong consensus was required to end it, then there’s a very real possibility that this regime could continue forever.

While it might be possible to slow change in a few limited areas for limited times in order to allow a bit more time to consider especially important future actions, wholesale prevention of practically irreversible change over many centuries seems simply inconsistent with anything like our familiar world.


Hanson also refers to another article critical of the Long Reflection, by Felix Stocker.

Comments

While Hanson is correct that the Long Reflection is rather dystopian, his alternatives are worse, and his Age of Em gives plenty of examples of a hypothetical society that is more dystopian than the Long Reflection.

Hanson's scenario of "a very real possibility that this regime could continue forever" is certainly worrying, but I view it as an improvement over certain alternatives, namely, AGI destroying humanity, severe values-drift from unconstrained whole brain emulation + editing + economic pressures, and resultant S-risk type scenarios.

So I don't agree that a Long Reflection is worse than those alternatives.

More speculatively, I think he understates how prevalent status-quo bias is, here: 

"the effect of preventing all such changes over a long period, allowing only the changes required to support philosophical discussions, would be to have changed society enormously, including changing common attitudes and values regarding change. "

Most of human history was extremely static relative to history since the Industrial Revolution, so humanity seems well adapted to a static society. There are substantial political movements dedicated to slowing or reversing change. The average human attitude towards change is not uniformly positive!

I don't think Hanson would disagree with this claim (that the future is more likely to be better by current values, given the long reflection, compared to e.g. Age of Em). I think it's a fundamental values difference.

Robin Hanson is an interesting and original thinker, but not only is he not an effective altruist, he explicitly doesn't want to make the future go well according to anything like present human values.

The Age of Em, which Hanson clearly doesn't think is an undesirable future, would contain very little of what we value. Hanson says as much, but to him it's a feature, not a bug. As Scott Alexander put it:

Hanson deserves credit for positing a future whose values are likely to upset even the sort of people who say they don’t get upset over future value drift. I’m not sure whether or not he deserves credit for not being upset by it. Yes, it’s got low-crime, ample food for everybody, and full employment. But so does Brave New World. The whole point of dystopian fiction is pointing out that we have complicated values beyond material security. Hanson is absolutely right that our traditionalist ancestors would view our own era with as much horror as some of us would view an em era. He’s even right that on utilitarian grounds, it’s hard to argue with an em era where everyone is really happy working eighteen hours a day for their entire lives because we selected for people who feel that way. But at some point, can we make the Lovecraftian argument of “I know my values are provincial and arbitrary, but they’re my provincial arbitrary values and I will make any sacrifice of blood or tears necessary to defend them, even unto the gates of Hell?”

Since Hanson doesn't have a strong interest in steering the long-term future to be good by current values, it's obvious why he wouldn't be a fan of an idea like the long reflection, which has that as its main goal but produces bad side effects in the course of giving us a chance of achieving that goal. It's just a values difference.

I have values, and The Age of Em overall contains a great deal that I value, and in fact probably more of what I value than does our world today. 

Afaict there is a difference between the Long Reflection and Hanson's discussion about brain emulations, in that Hanson focuses more on prediction, whereas the debate on the Long Reflection is more normative (ought it to happen?).

If Hanson thinks WBE and his resultant predictions are likely barring some external event or radical change, and also doesn't favor a Long Reflection, isn't that equivalent to saying his scenario is more desirable than the Long Reflection?

I see an unconstrained Age of Em as better than an eternal long reflection. 

As you laid out in the post, your biggest concern about the long reflection is the likely outcome of a pause - is that roughly correct?

In other words, I understand your preferences to be roughly:
Extinction < Eternal Long Reflection < Unconstrained Age of Em < Century-long reflection followed by Constrained Age of Em < No reflection + Constrained Age of Em

(As an aside, I would assume that without changing the preference order, we could replace unconstrained versus constrained Age of Em with, say, indefinite robust totalitarianism versus "traditional" transhumanist future.)

I don't have great confidence that the kinds of constraints that would be imposed on an age of em after a long reflection would actually improve that and further ages. 

Yes, you've mentioned your skepticism of the efficacy of a long reflection, but conditional on it successfully reducing bad outcomes, do you agree with the ordering?

You'll also need to add increasing good outcomes, along with decreasing bad outcomes.

The long reflection as I remember it doesn't have much to do with AGI destroying humanity, since AGI is something that on most timelines we expect to have resolved within the next century or two, whereas the long reflection was something Toby envisaged taking multiple centuries. The same probably applies to whole brain emulation.

This seems like quite an important problem for the long reflection case - it may be so slow a scenario that none of its conclusions will matter.

I feel on board with essentially everything in this article. I'm pretty confused by the popularity of the Long Reflection idea - it seems utterly impractical and unrealistic without major changes to how humans act, e.g. for the reasons outlined here. I feel like I must be misunderstanding or missing something?

One argument for the long reflection that I think has been missed in a lot of this discussion is that it's a proposal for taking Nick Bostrom's Astronomical Waste argument (AWA) seriously. Nick argues that it's worth spending millennia to reduce existential risk by a couple percent. But launching, for example, a superintelligence with the values of humanity in 2025 could itself constitute an existential risk, in light of future human values. So AWA implies that a sufficiently wise and capable society would be prepared to wait millennia before jumping into such an action.

Now we may practically never be capable enough to coordinate to do so, but the theory makes sense.
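To make that comparison concrete, here is a toy expected-value sketch of the AWA logic (the symbols and numbers are illustrative placeholders of mine, not Bostrom's, and they assume value accrues roughly uniformly over the future's duration). Let $V$ be the expected value of humanity's long-run future, $T$ its expected duration, $d$ the length of the reflection, and $\Delta p$ the reduction in existential risk it buys. Then, very roughly,

\[
\text{cost of delay} \approx V \cdot \frac{d}{T}, \qquad \text{benefit of risk reduction} \approx \Delta p \cdot V .
\]

With, say, $d \approx 10^{3}$ years, $T \approx 10^{9}$ years, and $\Delta p \approx 0.02$, the benefit exceeds the cost by several orders of magnitude, since $\Delta p \gg d/T \approx 10^{-6}$. That is the sense in which millennia of waiting can be worth a couple of percentage points of risk reduction.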

Just spitballing, but I spontaneously don't find it completely unrealistic. Would a decades-long moratorium on transformative AI, prolonged until some consensus forms among world governments, count as a long reflection? That currently seems unthinkable, but if the right people in the US and Chinese governments were convinced, and we had lived through a global catastrophe due to misaligned AI systems, I wouldn't be surprised to see global coordination to control advances in AI and to scale up alignment and ethical reflection research, research which would have significant sway over the future of humanity over decades.

I'm spontaneously also not convinced that this moratorium would impose such significant costs on many relevant actors. Progress would continue, just slower. I.e. we would still eradicate poverty in the coming decades, make significant progress on fighting most major diseases, maybe even have just-not-quite-transformative personal AI assistants, etc. Right?

What's your preferred alternative?

I largely agree with Neel, and fwiw to me there isn't really 'an alternative'. Humanity, if it continues to exist as something resembling autonomous individuals, is going to bumble along as it always has, and grand proposals to socially engineer it seem unlikely to work and incredibly dangerous if they do.

On the other hand, developing a social movement of concerned individuals with realistic goals that people new to it can empathise with, so that over time their concerns start to become mainstream, seems like a good marginal way of nudging long-term experiences to be more positive.

I think "Humanity is going to bumble along as it always has" is not a realistic alternative; the Long Reflection is motivated by the worry that that won't happen by default. Instead, we'll all die, or end up in one of the various dystopian scenarios people talk about, e.g. the hardscrapple frontier, the disneyland with no children, some of the darker Age of Em stuff... (I could elaborate if you like). If we want humanity to continue bumbling on, we need to do something to make that happen, and the Long Reflection is a proposal for how to do that.

Well hence the caveat. It may not continue to exist, and encouraging it to do so seems valuable.

But the timescale Toby gave for the long reflection seems to take us well over to the far side of most foreseeable x-risks, meaning a) it won't have helped us solve them, but rather will only be possible as a consequence of having done so, and b) it might well exacerbate them if it turns out that the majority of risks are local, and we've forced ourselves to sit in one spot contemplating them rather than spreading out to other star systems.

Hanson's concern seems to be an extension of b), where it ends up causing them directly, which also seems plausible.


Btw, I object to using flowery jargon like 'the hardscrapple frontier' and 'the disneyland with no children', which maps to easily expressible concepts like 'subsistence living' and 'the extinction of consciousness'. It seems like virtue signalling at the expense of communication.

I strongly agree with the general sentiment about jargon and flowery language, though I think "disneyland with no children" is not equivalent to "extinction of consciousness" because (1) Bostrom wants to remain non-committal about the question of which things constitute a person's welfare and how these things relate to consciousness and (2) he is focused on cases in which, from the outside, it appears that people are enjoying very high welfare levels, when in fact they do not experience any welfare at all.

Ok, but if you were optimising for communicating that concept, is 'Disneyland with no children' really the phrase you'd use? You could spell it out in full or come up with a more literal pithy phrase.

Sorry, what virtue do you think is being signaled here? 

Mainly EA ingroupiness.

Hmmm, point taken. I do think this particular case was intended to serve, and does serve, a communicative purpose: If I just said "subsistence living or the extinction of consciousness" then you wouldn't have keywords to search for, whereas instead by giving these scenarios the names their authors chose, you can easily go read about them. I guess I didn't think things through enough; after all, it's annoying to have to go look things up, and by name-dropping the scenarios I force you to do that. My apologies!

 

I appreciated the link to the hardscrapple frontier, which I had not heard of, FWIW.

It seems unlikely but not impossible, given how strong status quo bias is among humans. The NIMBY movement, and reactionary and conservative politics in general, are among many examples of politics that call for less or no change.

Humans have had periods of tens or hundreds of thousands of years in which we stagnated and technology didn't seem to change much, as far as we can tell from the archaeological record, so this isn't unprecedented.

Just reading the excerpts, I feel like a lot of work is being done by the clause "even if they are in principle reversible". It seems to me like the long reflection should be compatible with making very many extremely hard-to-reverse (but in principle reversible) changes, so long as it maintains a core of truth-seeking governing its long-term direction, and maintains a hard line against changes that aren't even reversible in principle.

Of course, if the idea is attracting such critiques, that's a sign that it's not consistently being presented in a light that makes that clear.

It is a matter of the cost and coordination that would be required to reverse them. If you allow these to be large enough, it isn't clear that choices are ever irreversible, besides total extinction.

I think that it is not possible to delay technological progress if there are strong near-term and/or egoistic reasons to accelerate the development of new technologies.

As an example, let us assume that it is possible to stop biological aging within a timeframe of 100 years. Of course, you can argue that this is an irreversible change, which may or may not be good for humankind's long-term future. But I do not think that it is realistic to say "Let's fund Alzheimer's research and senolytics, but everything that prolongs life expectancy beyond 120 years will be forbidden for the next millennia until we have figured out if we want to have a society of ageless people."

On the other hand, my argument does not rule out that it is possible to delay technologies which are very expensive to develop and which have no clear value from an egoistic point of view.
