
We must begin our serious discussion of fatal risks from the basic premise that we may need to change our strategies.

The Misstep Named Balance

Some people cannot escape the concept of balance when thinking about risks.

Balance is a rational strategy when there is a loss on one side and a benefit on the other.

It weighs the two sides while giving a slight advantage to the benefits. If that balance can be maintained over time, then even if losses occur occasionally, an overall gain is achieved.

This model cannot handle fatal risks. Balance should only be considered a rational strategy if even a partial failure allows for an overall benefit. Walking a dangerous path with a cliff on the right side, in the name of maintaining balance, is not a rational strategy. It's reckless and by no means rational.
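A minimal sketch makes the contrast concrete (the payoff numbers here are arbitrary, and treating the fatal outcome as an absorbing state with no recovery is an illustrative assumption, not the essay's own model): a gamble with positive expected value compounds into gain while losses are recoverable, but ends in near-certain ruin once a tiny fatal probability is added.

```python
import random

def repeated_gamble(rounds, gain=1.0, loss=0.8, p_loss=0.4, p_fatal=0.0):
    """Play a favorable gamble repeatedly.

    Each round wins `gain` with probability 1 - p_loss - p_fatal,
    loses `loss` with probability p_loss, and with probability
    p_fatal hits a fatal outcome from which there is no recovery.
    """
    wealth = 0.0
    for _ in range(rounds):
        r = random.random()
        if r < p_fatal:
            return None  # fatal: no accumulated gain survives this
        elif r < p_fatal + p_loss:
            wealth -= loss
        else:
            wealth += gain
    return wealth

# Recoverable losses only: expected value +0.28/round compounds into gain.
print(repeated_gamble(10_000))                 # typically around +2,800

# Add a tiny fatal probability: over enough rounds, ruin is near-certain.
print(repeated_gamble(10_000, p_fatal=0.001))  # None in almost every run
```

No tilt toward the benefits rescues the second case; the longer the balance is maintained, the more certain the fall from the cliff becomes.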

The term balance becomes a misstep when it comes to fatal risks. The wise do not approach fatal risks.

The Misstep Named Risk Management

Some people cannot escape the concept of risk management when thinking about risks.

Risk management is a strategy of controlling the likelihood and impact of risks so as to minimize the risks while maximizing the benefits.

If you manage to minimize risks and maximize benefits, you can eventually gain significant benefits. This way of thinking assumes that even if the minimized risk materializes, it's acceptable as long as the expected value favors benefits.

This model also cannot handle fatal risks. When a fatal risk materializes, no matter how much profit has been made, the outcome is a loss. And even if the probability is low, the impact of a fatal risk is hard to quantify, so the expected value cannot be calculated. Dealing with fatal risks through risk management is therefore not a rational strategy. It's arrogant and by no means rational.
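One hedged way to see why the calculation collapses (a minimal sketch; treating the unquantifiable fatal impact as unbounded is an illustrative assumption):

```python
def expected_value(p_fatal, benefit, fatal_impact):
    """Naive expected value of an activity with a fatal failure mode."""
    return (1 - p_fatal) * benefit - p_fatal * fatal_impact

# With a finite, known impact, the usual calculation works:
print(expected_value(0.001, benefit=100.0, fatal_impact=10_000.0))  # 89.9

# But if the fatal impact cannot be bounded by any finite number,
# any nonzero probability dominates and the result stops being a guide:
print(expected_value(0.001, benefit=100.0, fatal_impact=float("inf")))  # -inf
```

Under that assumption, no probability is small enough to make the trade favorable.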

The term risk management becomes a misstep when it comes to fatal risks. The wise do not approach fatal risks.

The Misstep Named Risk Minimization

Some people cannot escape the concept of risk minimization when thinking about risks.

The idea is that it's good if risks can be minimized. What this way of thinking lacks, however, is the concept of reducing risk to zero.

Fatal risks demand that zero risk be the baseline. If a fatal risk cannot be reduced to zero, the reason must not be that you desire benefits; that would lead straight back to the misstep named risk management. The only legitimate reason a fatal risk cannot be reduced to zero is that reducing it would increase other fatal risks. Only when suppressing other fatal risks does taking on a fatal risk become rational.

By excluding benefits, which cannot justify fatal risks, and weighing only fatal risks against one another, minimizing overall fatal risk becomes a rational strategy.
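A sketch of what that comparison looks like (the options, the probabilities, and the independence assumption below are all hypothetical):

```python
# Hypothetical options, each with per-period probabilities for two
# fatal risks. Benefits are deliberately not represented at all.
options = {
    "continue as-is":   {"risk_a": 0.010, "risk_b": 0.000},
    "mitigate risk A":  {"risk_a": 0.002, "risk_b": 0.004},
    "eliminate risk A": {"risk_a": 0.000, "risk_b": 0.020},
}

def total_fatal_risk(risks):
    """P(at least one fatal risk materializes), assuming independence."""
    p_survive = 1.0
    for p in risks.values():
        p_survive *= 1 - p
    return 1 - p_survive

for name, risks in options.items():
    print(f"{name}: {total_fatal_risk(risks):.4f}")

best = min(options, key=lambda name: total_fatal_risk(options[name]))
print("rational choice:", best)  # 'mitigate risk A' (~0.006 total)
```

Note that no benefit term appears anywhere in the comparison.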

The term risk minimization becomes a misstep when faced with a single fatal risk. The wise only consider risk minimization when facing multiple fatal risks.

The Gap Between Impossible and Difficult

Sometimes people say that achieving zero risk is difficult. Indeed, it is difficult. But for the fatal risk in question, taken alone, it's not impossible: if we don't undertake the adventure, the risk of the adventure is zero.

On the other hand, some people suggest that we should strive to keep fatal risks from materializing by controlling them. However, complete control is almost never guaranteed. Overlooked failure modes, human error, and the emergence of unexpected new risks make it nearly impossible to handle everything flawlessly.

We need to focus on the gap between the words difficult and impossible. They are entirely different. Difficulties can be overcome. But what is impossible cannot be made possible. This difference is vast. Exceedingly so.

Therefore, while the difficulty of avoiding risk and the impossibility of full risk control may seem similar in wording, it's crucial to understand that there's a decisive break between them in reality.

The Dichotomy between Logical Impossibility and Empirical Impossibility

There are two types of 'impossible': logically impossible and empirically impossible.

In the case of logical impossibility, it is absolutely never possible.

What is empirically impossible, however, may still be logically possible. Its probability is not zero; in other words, it can be rephrased as 'difficult'.

The gap between these two kinds of impossibility is far larger than the single word 'impossible' suggests. Therefore, when faced with that word, it's essential to distinguish clearly between the two.

The Dual Difficulty in Dealing with Existential Risks

When it's difficult both to cease the adventure, which would make the existential risk zero, and to take measures while adventuring that keep the existential risk at zero, the best strategy is to exert effort on both fronts.

In particular, the effort to stop the adventure is important. Stopping the adventure is essentially a one-time difficulty. Taking measures while continuing the adventure, on the other hand, is an endless effort, in which the risk must be held at zero at every moment. Clearly, the former is the more rational risk measure.
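The asymmetry can be put in numbers (the per-period probability below is hypothetical): continued adventuring requires flawless control in every period, and the chance of that decays as (1 - p)^n, while stopping is a single event.

```python
# Chance of keeping a fatal risk suppressed over n consecutive periods,
# given a small residual per-period failure probability p (hypothetical).
p = 0.001

for n in (100, 1_000, 10_000):
    print(f"P(flawless control over {n:>6} periods) = {(1 - p) ** n:.6f}")
# -> 0.904792, 0.367695, 0.000045

# Stopping the adventure, by contrast, is a one-time difficulty: once done,
# no further periods of flawless control are needed.
```

Any nonzero residual risk, maintained indefinitely, accumulates toward certainty.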

Judgment by Values, Decision by Resolve, or a Strategy of Ignoring

On the other hand, we are frequently exposed to fatal risks with very low probabilities in our daily lives. We don't always act based solely on risk rationality.

When we cross the street; ride in a car, train, airplane, or boat; enjoy sports or outdoor activities; indulge in alcohol and delicious food; travel to unknown places; or deliberately ignore the doctor's advice: in all of these, we expose ourselves to fatal risks with minute probabilities.

This is different from risk balance or risk management. It's likely one of the following three:

Judgment by Values

Even if our lifespan might be shortened a bit, enjoying alcohol and delicious food is part of life. Sports, outdoor activities, and visiting unknown places enrich our lives. What's the point of living long by giving up enjoyment? We take risks with such a value perspective on life and enjoyment.

Decision by Resolve

Getting on an airplane can be a little nerve-wracking, even scary. But if something happens, so be it. The risk may be small, but we take it on the strength of such resolve.

A Strategy of Ignoring

Living while attending to every risk is difficult; humans are not built to keep every worry in view. Risks with minute probabilities, attached to things everyone does as a matter of course or has repeated many times in the past, become invisible. We expose ourselves to fatal risks under such a strategy.

Ethics in Fatal Risks

The above is the basic literacy we should understand beforehand when thinking about fatal risks.

With this literacy as a prerequisite, we can finally have a sincere discussion about risks. Especially in the case of fatal risks, if we don't base our discussions on this literacy, we will not be able to communicate effectively.

Bringing up balance, risk management, and risk minimization in discussions as if they were rational strategies, while having this literacy, is neither wise nor conscientious. Also, discussing difficulties and impossibilities as if they were on the same level is neither rational nor responsible.

If an adventure is absolutely necessary, we need to ask those affected by the fatal risks for judgments based on their values and decisions based on their resolve. Of course, based on the concept of informed consent, it's a prerequisite to carefully and clearly explain what we know and what we don't know about the risks.

Neglecting this and venturing on our own judgment, or pretending to handle risks responsibly with expressions like balance, risk management, and risk minimization, is undoubtedly an act against ethics.

The former violates respect for human rights, while the latter violates the duty of care of a good administrator.
 
