Arepo

Comments

Robin Hanson on the Long Reflection

Ok, but if you were optimising for communicating that concept, is 'Disneyland with no children' really the phrase you'd use? You could spell it out in full or come up with a more literal pithy phrase.

Robin Hanson on the Long Reflection

Btw, I object to using flowery jargon like 'the hardscrapple frontier' and 'the Disneyland with no children' that maps to easily expressible concepts like 'subsistence living' and 'the extinction of consciousness'. It seems like virtue signalling at the expense of communication.

Robin Hanson on the Long Reflection

Well, hence the caveat. It may not continue to exist, and encouraging it to do so seems valuable.

But the timescale Toby gave for the long reflection seems to take us well over to the far side of most foreseeable x-risks, meaning a) it won't have helped us solve them, but rather will only be possible as a consequence of having done so, and b) it might well exacerbate them if it turns out that the majority of risks are local and we've forced ourselves to sit in one spot contemplating them rather than spreading out to other star systems.

Hanson's concern seems to be an extension of b), where the long reflection ends up causing such risks directly, which also seems plausible.

Robin Hanson on the Long Reflection

I largely agree with Neel, and fwiw to me there isn't really 'an alternative'. Humanity, if it continues to exist as something resembling autonomous individuals, is going to bumble along as it always has, and grand proposals to socially engineer it seem unlikely to work, and incredibly dangerous if they do.

On the other hand, developing a social movement of concerned individuals with realistic goals that people new to it can empathise with, so that over time their concerns start to become mainstream, seems like a good marginal way of nudging long-term experiences to be more positive.

Robin Hanson on the Long Reflection

The long reflection as I remember it doesn't have much to do with AGI destroying humanity, since AGI is something that on most timelines we expect to have resolved within the next century or two, whereas the long reflection was something Toby envisaged taking multiple centuries. The same probably applies to whole brain emulation.

This seems like quite an important problem for the long reflection case - it may be so slow a scenario that none of its conclusions will matter.

Honoring Petrov Day on the EA Forum: 2021

Yeah, if precommitment is to be distinguished from regular 'intending to do a thing' or 'stating such an intention', it must be something like ripping out your steering wheel in a game of chicken.

Making a promise not to do something I didn't intend to do anyway - and where doing it would already harm me socially - doesn't seem to add much beyond the value of stating my intentions (and the statement could still be a lie).
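
To make the steering-wheel point concrete, here's a minimal sketch in Python, with payoff numbers invented purely for illustration: a stated intention to drive isn't credible, because if the other player drives anyway my best response is still to swerve; physically deleting the swerve option is what changes their best response.

```python
# Illustrative payoff matrix for chicken; the numbers are invented for this sketch.
# Keys are (my_move, their_move); values are (my_payoff, their_payoff).
PAYOFFS = {
    ("swerve", "swerve"): (0, 0),
    ("swerve", "drive"):  (-1, 1),
    ("drive",  "swerve"): (1, -1),
    ("drive",  "drive"):  (-10, -10),  # crash: the worst outcome for both
}

def my_best_response(their_move, my_options=("swerve", "drive")):
    """My payoff-maximising move, given their move and my available options."""
    return max(my_options, key=lambda m: PAYOFFS[(m, their_move)][0])

def their_best_response(my_move, their_options=("swerve", "drive")):
    """The opponent's payoff-maximising reply to my move."""
    return max(their_options, key=lambda t: PAYOFFS[(my_move, t)][1])

# Merely *saying* I'll drive isn't credible: if they call the bluff and
# drive anyway, my best response is still to swerve.
assert my_best_response("drive") == "swerve"

# Ripping out the steering wheel deletes 'swerve' from my option set,
# so 'drive' is the only move I can play...
assert my_best_response("drive", my_options=("drive",)) == "drive"

# ...and a rational opponent, knowing that, swerves.
assert their_best_response("drive") == "swerve"
```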

Honoring Petrov Day on the EA Forum: 2021

All of this seems consistent with Peter's pledge to second-strike being +EV, as long as he's lying.

Honoring Petrov Day on the EA Forum: 2021

I can't parse the concept of 'precommitment'. I don't intend to launch a first strike, but maybe something will happen in the next few hours to change my intention, and I don't have any way to restructure my brain to reduce that possibility to 0. The same applies in reverse to second-striking.
