There's one thing history seems to have been trying to teach us: that the contents of the future are determined by power, economics, politics, and other conflict-theoretic matters.

Turns out, nope!

Almost all of what the future contains is determined by which of the two following engineering problems is solved first:

  • How to build a superintelligent AI (if solved first, everyone dies forever)
  • How to build an aligned superintelligent AI (if solved first, everyone gets utopia)

…and almost all of the reasons that the former is currently a lot more likely are mistake theory reasons.

The people currently taking actions that increase the probability that {the former is solved first} are not evil people trying to kill everyone, they're confused people who think that their actions are actually increasing the probability that {the latter is solved first}.

Now, sure, whether you're going to get a chance to talk with OpenAI/DeepMind/Anthropic's leadership enough to inform them that they're in fact making things worse is a function of economics and politics and the like. But ultimately, for the parts that really matter here, this is a matter of explaining, not of defeating.

And, sure, the implementation details of "utopia" do depend on who launches the aligned superintelligent AI, but I expect you'd be very happy with the utopia entailed by any of the possibilities currently on the table. The immense majority of the utility you're missing out on is from getting no utopia at all and everyone dying forever, rather than getting the wrong utopia implementation details.

The reason the most likely outcome is that everyone dies forever is that the people who get to impact which of those outcomes happens are mistaken (and probably not thinking hard enough about the problem to realize that they're mistaken).

They're not evil and getting them to update to the correct logical beliefs is a matter of reason (and, if they're the kind of weak agents that are easily influenced by what others around them think, memetics) rather than a matter of conflict.

They're massively disserving everyone's interests, including their own. And the correct actions for them to take would massively serve their own interests as well as everyone else's. If AI kills everyone they'll die too, and if AI creates utopia they'll get utopia along with everyone else — and those are pretty much the only two attractors.

We're all in this together. Some of us are just fairly confused, not agentically pursuing truth, and probably have our beliefs massively biased by effects such as memetics. But I'm pretty sure nobody in charge is trying to kill everyone on purpose; they're just functionally trying to kill everyone by accident.

And if you're not using your power/money to affect which of those two outcomes is more likely to happen, then your power/money is completely useless. It won't be useful if we all die, and it won't be useful if we get utopia. The only use for resources, right now, if you want to impact in any way what almost all of the future contains (beyond maybe the next 0 to 5 years, which is about how long we have), is in influencing which of those two engineering problems is solved first.

This applies to the heads of the major AI orgs just as much as it applies to everyone else. One's role in an AI org is of no use whatsoever except for influencing which of those two problems is solved first. The head of OpenAI won't particularly get a shinier utopia than everyone else if alignment is solved in time, and they won't particularly die less than everyone else if it isn't.

Power/money/being-the-head-of-OpenAI doesn't do anything post-singularity. The only thing which matters, right now, is which of those two engineering problems is solved first.

Comments

Agreed completely, and I should also add that this is the exact reason I'm against all AGI development in the near term (at least 50 years, likely longer). Right now, our society is still deeply confused and irrational, as you put it, and our currently most powerful social systems exacerbate this. I think our primary purpose, right now, should be social and moral development, not new technology. Once we figure out global peace and governance, economics not based on benefit to a small elite at the expense of literally everyone else, and food systems without animal death and suffering, then we can start tackling the problem of AGI / ASI. We need a global pause on AI development that lasts decades, excluding strictly narrow AI with no AGI potential (think Stockfish or Alexa). And this pause must be strictly enforced by governments worldwide, otherwise it is likely to increase x-risk by driving AI research underground, affiliated with criminal groups and rogue actors, etc.

Unfortunately, that means that if you are reading this (and are over the age of 10 or so), you will live, work, age, and die like our human ancestors have for millennia. If you're currently in your 30s, as I am, your grandchildren may see ASI utopia, but you will work, struggle, and decline just like your own parents and grandparents. This is a terrible thing to accept, but if you are truly rational (even if you aren't an Effective Altruist) you must accept it. The threat of creating a misaligned superintelligence is just too great to press forward now. 

With all this said, I think the chance of an effective AI ban actually coming to fruition is effectively zero, and an ineffective ban is likely to actually increase x-risk by driving research and development underground. So we're just going to have to live with a high level of x-risk (hopefully we will keep LIVING with it) for the foreseeable future.
