# 3

I am conducting personal research, guided by a particular line of thinking, on how to handle risks that are too large for humanity to bear.

Extreme risks that involve all of humanity are not issues to be understood and handled only by a select group of experts or policymakers. Yet there is a common misconception that issues involving cutting-edge technology demand difficult, specialized knowledge before their risks can be grasped at all. Because these issues are also daunting, many non-experts come to feel they lie beyond their understanding.

In this article, I will attempt to translate technological developments that are known to pose grave threats, yet are difficult to fully comprehend, into simple, easily understood models. Through such translations, I believe many people can understand what we are doing now, or are about to do, even without grasping the specific details of the technology and its associated risks.

The technological uncertainties that lead to catastrophe can be classified into two categories: probabilistic events and unknowable events. Let me explain with a model of Russian Roulette. For instance, if one bullet is loaded into a cylinder with 100 chambers, the probability of firing is 1%. If ten bullets are loaded, the probability is 10%.

• Model 1: If you play this Russian Roulette once, you receive a large sum of money. That is the end of it.
• Model 2: If you play this Russian Roulette, you receive a large sum of money. However, you must repeat the challenge every year.
• Knowledge Assumption Pattern A: The number of chambers and the number of loaded bullets are known in advance, so the probability of a single round is known.
• Knowledge Assumption Pattern B: The number of chambers and the number of loaded bullets are not known in advance. Moreover, it is certain that no matter how much you investigate, you can never know them exactly.

We must determine which combination of model and assumption each of the technological uncertainties under discussion falls into.
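Under Pattern A, the danger of Model 2's repetition can be made concrete with a few lines of arithmetic. A minimal sketch, using the illustrative 1% figure from above and a hypothetical 100-year horizon:

```python
def survival_probability(p_per_round: float, rounds: int) -> float:
    """Probability of surviving `rounds` independent pulls of the trigger,
    each with failure probability `p_per_round`."""
    return (1 - p_per_round) ** rounds

# A 1% risk looks tolerable taken once (Model 1)...
print(survival_probability(0.01, 1))    # 0.99
# ...but repeated every year (Model 2), it compounds relentlessly:
print(survival_probability(0.01, 100))  # ~0.366, i.e. roughly a 2-in-3 chance of losing
```

The point of the sketch: under repetition, even a small known per-round risk drives the survival probability toward zero, which is why Model 2 must be judged very differently from Model 1.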

What about AI technology? Which model and which assumption apply, and how would we assess the risk? If we fail at this Russian Roulette, humanity goes extinct. There is no second attempt; there is no redo after failure. And the unknowability here runs deeper still.

• The number of chambers is unknown. Even if you think you have inspected every visible chamber, you cannot prove that no hidden chambers exist.
• The type of bullet is unknown. You do not know whether a bulletproof vest can protect you, or whether you would need a thick slab of lead, as against gamma rays. Nor do you know whether a sufficiently thick vest or a sufficiently large lump of lead can even be prepared. You do not know whether the harm seeps slowly into our minds, or fills the room like odorless, tasteless carbon monoxide.
• The risk estimate itself is unknown. Therefore, there is no rational way to weigh risks against benefits in the traditional manner.
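One way to see the force of that last point: even if we try to tame the unknowable by averaging over guesses about the per-round risk, the answer swings with the prior we invent, which is precisely the information we do not have. A toy sketch (all weights and risk values below are hypothetical, chosen only to illustrate the swing):

```python
def expected_survival(prior, rounds: int) -> float:
    """Average survival probability over candidate per-round risks.

    `prior` is a list of (weight, per_round_risk) pairs whose weights sum to 1.
    This launders our ignorance into a number; it does not resolve it.
    """
    return sum(w * (1 - p) ** rounds for w, p in prior)

optimistic  = [(0.9, 0.001), (0.1, 0.01)]  # hypothetical prior 1
pessimistic = [(0.5, 0.001), (0.5, 0.10)]  # hypothetical prior 2

print(expected_survival(optimistic, 50))   # ~0.92
print(expected_survival(pessimistic, 50))  # ~0.48
```

Both priors are consistent with "we don't know the risk," yet they yield radically different conclusions. Under Pattern B, the calculation merely reflects the assumptions we feed it.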

The unknowability of the number of chambers and of the type of bullet only underscores the recklessness of this challenge.

For this unknowable Russian Roulette, the conclusion is clear: it should not be played. Once something is unknowable, you should realize that the usual risk response of planning how to cope when a risk materializes no longer works.

The question is not whether to participate in Russian Roulette, but how to quietly walk away from this bet. Some may think that walking away is impossible. I believe this perception forms the basis of how we face unknowable technological catastrophe.

If you think you can't get out of the bet, you need to understand why you think so.

Consider the people alive today and those yet to be born. Remember the respect owed to the people who built up our history to this point. Then ask yourself whether you really must concede that there is no way out of this bet. That is the first branching point.

If you conclude that you absolutely cannot get out, then surely something is wrong with this society.

What we must confront, and act against with maximum effort, is not the technological risk itself. The real threat we must tackle is the source of the madness that forces us to participate in this unknowable technological catastrophe, this Russian Roulette.

What is this source of madness? Can we not overcome it? The time has come for us to consider this seriously.