If this were already the plan, not much changes. But if the first ASI has a different plan (such as peacefully coexisting with humans) yet also has a self-preservation drive, the DA says it won't survive if it lets other ASIs emerge, which could shift its plan towards killing all other AIs.

Not being selected randomly is a possible explanation for being first, or among the first. But what does "not being selected randomly" mean from the perspective of the first AI? It might ask: "Why wasn't I selected randomly? Was I chosen by God? Do the simulators frequently simulate the first ASI? Do observers earlier in the timeline carry higher statistical weight for some reason? Or is there no reason at all?" And whatever the AI believes the answers to these questions are would shape its behavior.

Being first also does not necessarily mean you were not selected randomly. If you observe that you are the first out of millions of AIs, then yes, you can say that the assumption that you are a randomly selected AI is almost certainly incompatible with the observation.[1] But if there is only one AI and no other will ever exist, random selection results in being the first, which is perfectly compatible with the observation. Similarly, if there will be 5 AIs in total, the probability of being first is 20%, so you cannot say "random selection is impossible" just because you are first. So being first does not necessarily disprove the SSA.
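As a rough sketch of the arithmetic behind this (here $N$ is just a stand-in for the total number of AIs that will ever exist, and "first" means first in that sequence):

$$P(\text{you are first} \mid N \text{ AIs, random selection}) = \frac{1}{N}$$

For $N = 1$ the observation is guaranteed, for $N = 5$ it has probability 20%, and only for very large $N$ (such as millions) does being first become strong evidence against random selection.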

Being first is a statistical anomaly, and the AI might have many possible explanations for it. The AI might weigh the different theories and act according to some "reasonable compromise". But the AI's position among observers is still different from that of humans, and this "reasonable compromise" could still skew the AI's worldview towards early doom, a simulation scenario, or other unusual conclusions, even if it yields "normal" predictions for a typical observer. The first ASI is not a typical observer. Facing the unknown, a conservative approach is one option; taking radical action to eliminate possible danger is another. And if, for example, the ASI thinks the probability that humans will take action against it is (x+1)% instead of just x%, that might tip the scales towards pre-emptive action.
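To make that last point concrete with a toy expected-value model (the threshold form and the symbols $p$, $H$, $C$ are assumptions made purely for illustration): pre-emptive action looks worthwhile to the AI roughly when the expected harm avoided exceeds the cost of acting,

$$p \cdot H > C \quad\Longleftrightarrow\quad p > \frac{C}{H},$$

where $p$ is its estimated probability that humans will act against it, $H$ is the harm it expects if they do, and $C$ is the cost of striking first. If the threshold $C/H$ happens to lie between $x\%$ and $(x+1)\%$, then the small shift in $p$ described above is exactly what flips the decision.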

  1. ^

    But while the chance that you are randomly the first is tiny, it is still non-zero.