(No worries about skimming, it's long!) Yeah, I guess there are two primary points I'm trying to make in this post:

  1. That EA wants to become powerful enough to reshape the world in a way significant enough to feel dystopian to most humans (e.g. the contraception/eugenics/nuclear war examples), and that's why people are intuitively averse to it.
  2. That there is a significant risk that EA will become powerful enough to reshape the world in a way significant enough to feel dystopian to most humans.

The second point is what you're pushing back on, though note that I reframed it away from "significantly more advanced than those of the rest of the world combined" to "powerful enough to reshape the world in a way significant enough to feel dystopian to most humans."

I think this is a more accurate framing because it isn't at all likely that EA (or any other maximizer) will end up in a "me against the rest of the world combined" scenario before bringing about dystopian outcomes. If the world ever does coordinate well enough to rally all of its resources against a maximizer, that would almost certainly happen only long after the maximizer has caused untold damage to humanity. Even now, climate change is a very clear existential risk causing tons of damage, and yet we still haven't really managed to rally the world's resources against it. (And if you're willing to entertain the idea that capitalism is an emergent AI, it's easy to argue that climate change is just the capitalism-AI's x-risk made manifest, and yet humanity is still incredibly resistant to efforts to fight it.)

Anyway, my point is that EA doesn't need to be more powerful than everything else in the world combined, just powerful enough to create dystopian outcomes, and powerful enough to overcome the resistance it meets when it tries. I used the contraception/eugenics/nuclear-war examples because they demonstrate that it's relatively easy to start creating dystopian outcomes, and although EA doesn't have the kind of power needed to accomplish those things today, it's not hard to see it getting there, even in the near-ish future.

You're right that the line in the post was misleading, so I've changed it to clarify this! It now reads "...unless their ability to accomplish their goals is advanced enough to overcome the inevitable resistance they'll meet on their way to creating that dystopia."

Also, to your point about extremely unusual self-limits: I think EA is an extremely unusual movement in that it is:

  1. Capable of amassing – and seeking to amass – huge amounts of capital/power. (People are literally throwing billions of dollars at EA in the hopes that it will improve the world, and that funding is currently growing exponentially.)
  2. Self-optimizing (i.e. not constrained by the intelligence of some human leader, but truly superintelligent through survival-of-the-fittest idea-generation)
  3. A maximizer (many other powerful organizations, like governments, aren't seeking to maximize any one thing, so they don't pose the same sort of existential threat)

For these reasons, I think it's worth considering that EA might be an extremely unusual social movement that needs to be treated in extremely unusual ways.