Max Reddel

Program Lead | Advanced AI @ International Center for Future Generations
154 karma · Joined Jan 2022
icfg.eu

Sequence: Decision-Making under Deep Uncertainty
Thank you for your thoughtful comment! You've highlighted some of the key aspects that I believe make this kind of AI governance model potentially valuable. I too am eager to see how these concepts would play out in practice.

Firstly, on specifying real exogenous uncertainties, I believe this is indeed a crucial part of this approach. As you rightly pointed out, uncertainties around AI development timelines, takeoff speeds, and other factors are quite significant. A robust AI governance framework should indeed have the ability to effectively incorporate these uncertainties.

Regarding policy levers, I agree that an in-depth understanding of different AI safety research agendas is essential. In my preliminary work, I have started exploring a variety of such agendas. The goal is not only to understand these different research directions but also to identify which might offer the most promise across different (and especially bleak) future scenarios.

In terms of relations between different entities like AI labs, governments, etc., this is another important area I'm planning to look into. The nature of these relations can significantly impact the deployment and governance of AI, and we would need to develop models that help us better understand these dynamics.

Regarding performance metrics like p(doom), I'm very much in the early stages of defining and quantifying these. It's quite challenging because it requires balancing a number of competing factors. Still, I'm confident that our approach will eventually enable us to develop robust metrics for assessing different policy options. An interesting point here is that p(doom) is quite an aggregated variable. The DMDU approach would give us the opportunity to work with a set of (independent) metrics that we can attempt to optimize all at the same time (think Pareto-optimality here).
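To make the Pareto idea a bit more concrete, here is a minimal sketch of a dominance check over several independent metrics. The policy names, metric names, and numbers are purely illustrative assumptions, not outputs of any actual model:

```python
import numpy as np

# Hypothetical policies scored on several *independent* metrics (all to be
# minimized). Names and numbers are made up purely for illustration.
# columns: misuse_risk, accident_risk, capability_overhang, governance_cost
policies = {
    "compute_caps":     [0.20, 0.30, 0.10, 0.50],
    "licensing_regime": [0.25, 0.20, 0.15, 0.40],
    "do_nothing":       [0.60, 0.55, 0.05, 0.05],
}

def dominates(a, b):
    """True if a is at least as good as b on every metric and strictly better on one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

# Keep only the policies that no other policy dominates (the Pareto front).
pareto_front = [
    name for name, scores in policies.items()
    if not any(dominates(other, scores)
               for other_name, other in policies.items() if other_name != name)
]
print(pareto_front)
```

The point is simply that instead of collapsing everything into one aggregate like p(doom), we can keep the metrics separate and reason about trade-offs on the front.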

As to the conclusions and the implications for prioritization decisions, it's hard to say before running optimizations or simulations, let alone before formally modeling. Given the nature of ABMs, for example, we would expect emergent macro-behaviors and phenomena that arise from the defined micro-behaviors. Feedback and system dynamics are something to discover when running the model, which makes it very hard to predict what we would likely see. However, given that Epoch (and maybe others) are working on finding good policies themselves, we could include these policies in our models as well and check whether different modeling paradigms (ABMs in this case) yield similar results. Keep in mind that this would entail simply running the model with some policy inputs; there is no (multi-objective) optimization involved at this stage yet. The optimization, in combination with subsets of vulnerable scenarios, would add even more value to using models.
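As a rough illustration of what "simply running the model with some policy inputs" could look like, here is a toy agent-based sketch. The micro-rule, the `policy_strength` input, and the macro metric are all invented for illustration; a real model would of course look very different:

```python
import numpy as np

rng = np.random.default_rng(0)

def run_abm(policy_strength, n_agents=100, steps=50):
    """Toy ABM: each agent holds a 'caution' level in [0, 1] and partially
    imitates a random peer each step, nudged by a policy input.
    Returns the trajectory of the emergent macro metric (average caution)."""
    caution = rng.uniform(0, 1, n_agents)
    trajectory = []
    for _ in range(steps):
        peers = rng.integers(0, n_agents, n_agents)
        # micro-rule: move towards a random peer, plus a policy nudge and noise
        caution += (0.1 * (caution[peers] - caution)
                    + 0.05 * policy_strength
                    + rng.normal(0, 0.01, n_agents))
        caution = np.clip(caution, 0, 1)
        trajectory.append(caution.mean())
    return trajectory

# "Simply running the model with some policy inputs" - no optimization involved.
for policy in [0.0, 0.5, 1.0]:
    print(policy, round(run_abm(policy)[-1], 3))
```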

In terms of attempts to implement this approach, there is nothing out there. Previous modeling on AI governance has mostly relied on traditional approaches, e.g. (evolutionary) game theory and neoclassical macro-economic models (e.g. Nordhaus' DICE). At the moment, there are simple first attempts to use complexity modeling for AI governance. More could be done. Beyond the pure modeling, decision-making under deep uncertainty as a framework has not been used for such purposes yet.

What I would love to see is more awareness of such methodologies, their power, and their potential usefulness. Ideally, some (existing or new) organization would get some funding and pick up the challenge of creating such models and conducting the corresponding analyses. I strongly believe this could be very useful. For example, the Odyssean Institute (which I'm a part of as well) intends to apply this methodology (+ more) to a wide range of problems. If funding were available specifically for AI governance applications, I'm sure they would go for it.

Fair point! The article is very long. Given that this post discusses less theory and rather teases an abstract application, it originally seemed hard to summarize. I have updated the article and added a summary.

Great question! There are indeed plenty of papers using systems modeling and there are various papers using decision-making under deep uncertainty. It's not always a combination of both though. 

With respect to systems modeling, it might be interesting to look into economics in particular. Traditional economic modeling techniques, such as equilibrium models, have certain limitations when it comes to addressing complex economic problems. These limitations arise from the assumptions and simplifications that these models make, which can be inadequate for representing the intricacies of real-world economic systems. Some are mentioned in the post above (rationality and perfect information, homogeneity, static equilibrium, linearity, and additivity). Agent-based modeling offers a better fit for analyzing complex economic problems because it addresses many of the limitations of traditional models. In simple terms, in traditional economic modeling (equilibrium models and standard game theory), it is inherently impossible to account for actual market phenomena (e.g. the emergence of market psychology, price bubbles, let alone market crashes). The Santa Fe Institute has produced some very valuable work on this. I would recommend reading Foundations of Complexity Economics by W. Brian Arthur. His other books and papers are excellent as well.

Some papers using the methodology of decision-making under deep uncertainty (DMDU):

I hope that these pointers help a bit!

Yes, I think that in this sense, it fits rather well! :)

Good point! I think this is also a matter of risk aversion. How severe is it to get to a state of -500 utils? If you are very risk-averse, it might be better to do nothing. But I cannot make such a blanket statement.

I'd like to emphasize at this point that the DMDU approach tries to avoid the following procedure:

  • test the performance of a set of policies for a set number of scenarios,
  • decide how likely each scenario is (this is the crux), and
  • calculate some weighted average for each policy.

Instead, we use DMDU to explore the full range of plausible scenarios and identify particularly vulnerable ones. We want to pay special attention to these scenarios and find optimal and robust solutions for them. This way, we cover tail risks, which is, IMO, quite in line with efforts to mitigate GCRs, x-risks, and s-risks.
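For illustration, here is a minimal sketch of that workflow with two stand-in uncertainties (timeline and takeoff speed); the ranges, the `loss` function, and the regret-based robustness criterion are assumptions chosen purely to show the mechanics, not a real AI-risk model:

```python
import numpy as np

rng = np.random.default_rng(42)

# Uniformly sample the full plausible ranges of two illustrative uncertainties.
n_scenarios = 10_000
timeline = rng.uniform(2, 40, n_scenarios)           # years until transformative AI
takeoff_speed = rng.uniform(0.1, 10.0, n_scenarios)  # arbitrary "speed" units

def loss(policy_effort, timeline, takeoff_speed):
    """Toy outcome to be minimized: fast takeoffs and short timelines hurt,
    policy effort mitigates but has a cost."""
    return takeoff_speed / timeline * (1 - 0.7 * policy_effort) + 0.2 * policy_effort

policies = {"low_effort": 0.1, "medium_effort": 0.5, "high_effort": 0.9}
outcomes = {name: loss(e, timeline, takeoff_speed) for name, e in policies.items()}

# Flag the "vulnerable" scenarios: here, the worst 5% even under the best
# available response in each scenario.
best_case = np.min(np.column_stack(list(outcomes.values())), axis=1)
vulnerable = best_case > np.quantile(best_case, 0.95)
print("vulnerable scenarios:", vulnerable.sum())

# Robustness via minimax regret instead of a probability-weighted average.
regret = {name: np.max(vals - best_case) for name, vals in outcomes.items()}
print(min(regret, key=regret.get), regret)
```

Note that nothing here requires assigning probabilities to scenarios; the robustness criterion and the vulnerable subset do the work that the weighted average would otherwise have to do.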

A friend of mine just mentioned to me that the following points could be useful in the context of this discussion.

What DMDU researchers usually do is use uniform probability distributions for all parameters when exploring future scenarios. This allows for a more even exploration of the plausible space, rather than being overly concerned with subjective probabilities, which may lead to sampling some regions of the input-output space less densely and potentially missing decision-relevant outcomes. The benefit of using uniform probability distributions is that they help avoid compounding uncertainties in a way that can lead to biased results. When you use a uniform distribution, you assume that all values within the range of possible outcomes are equally likely. This helps ensure that your exploration of the future is more comprehensive and that you are not overlooking important possibilities. Of course, there may be cases where subjective probabilities are essential, such as when there is prior knowledge or data that strongly suggests certain outcomes are more likely than others. In such cases, I'd say it may be appropriate to incorporate those probabilities into the model.
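A tiny numerical illustration of the coverage point (the range and the concentrated prior are arbitrary choices for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# One uncertain input with plausible range [0, 1].
uniform_samples = rng.uniform(0, 1, n)
# A subjective prior concentrated around 0.3 (clipped Normal, just for illustration).
prior_samples = np.clip(rng.normal(0.3, 0.05, n), 0, 1)

# How much of the plausible range does each approach actually visit?
bins = np.linspace(0, 1, 21)
covered_uniform = np.count_nonzero(np.histogram(uniform_samples, bins)[0] > 0)
covered_prior = np.count_nonzero(np.histogram(prior_samples, bins)[0] > 0)
print(f"bins visited: uniform {covered_uniform}/20, concentrated prior {covered_prior}/20")
```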

Also, this paper by James Derbyshire on probability-based versus plausibility-based scenarios might be very relevant. The underlying idea of plausibility-based scenarios is that any technically possible outcome of a model is plausible in the real world, regardless of its likelihood (given that the model has been well validated). This approach recognizes that complex systems, especially those with deep uncertainties, can produce unexpected outcomes that may not have been considered in a traditional probability-based approach. When making decisions under deep uncertainty, it's important to take seriously the range of technically possible but seemingly unlikely outcomes. This is where the precautionary principle comes in (which advocates for taking action to prevent harm even when there is uncertainty about the likelihood of that harm). By including these "fat tail" outcomes in our analysis, we are able to identify and prepare for potentially severe outcomes that may have significant consequences. Additionally, nonlinearities can further complicate the relationship between probability and plausibility. In some cases, even a small change in initial conditions or inputs can lead to drastic differences in the final outcome. By exploring the range of plausible outcomes rather than just the most likely outcomes, we can better understand the potential consequences of our decisions and be more prepared to mitigate risks and respond to unexpected events.
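As a standard toy illustration of that sensitivity to initial conditions (the classic logistic map, not anything taken from Derbyshire's paper):

```python
# Logistic map in its chaotic regime: two nearly identical starting points
# end up far apart after a few dozen steps.
def logistic_trajectory(x, r=3.9, steps=50):
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

# The two results differ substantially despite a 0.0001 difference at the start.
print(logistic_trajectory(0.2000))
print(logistic_trajectory(0.2001))
```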

I hope that helps!

> If you truly know absolutely and purely nothing about a probability distribution—which almost never happens

I would disagree with this particular statement. I'm not saying the opposite either. I think it's reasonable in a lot of cases to assume some probability distribution. However, there are a lot of cases where we just do not know at all. E.g., take the space of possible minds. What's our probability distribution of our first AGI over this space? I personally don't know. Even looking at binary events: what's our probability of AI x-risk this century? 10%? I find this widely used number implausible.

But I agree that we can try gathering more information to get more clarity on that. What is often done in DMDU analysis is that we figure out that some uncertainty variables don't have much of an impact on our system anyway (so we fix those variables to some value), or we constrain their value ranges to focus on more relevant subspaces. The DMDU framework does not necessitate or advocate for total ignorance. I think there is room for an in-between.
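As a crude sketch of that screening step, one can rank uncertain inputs by how strongly they drive the output and fix the ones that barely matter. In practice one would use proper global sensitivity analysis or feature scoring rather than this simple correlation-based screen, and the toy input-output relationship below is invented:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 5_000

# Three illustrative uncertain inputs; x1 and x2 drive the output, x3 barely matters.
x1 = rng.uniform(0, 1, n)
x2 = rng.uniform(0, 1, n)
x3 = rng.uniform(0, 1, n)
output = 3 * x1 + 0.5 * x2 ** 2 + 0.01 * x3 + rng.normal(0, 0.1, n)

# Crude screening: rank inputs by absolute correlation with the output.
for name, x in [("x1", x1), ("x2", x2), ("x3", x3)]:
    print(name, round(abs(np.corrcoef(x, output)[0, 1]), 3))
# x3 scores near zero here, so we could fix it to a nominal value and
# concentrate further exploration on x1 and x2.
```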

Great to see people digging into the crucial assumptions! 

In my view, @MichaelStJules makes great counterpoints to @Harrison Durland's objection. I would like to add two further points.

  1. The notion of 1/n probability somewhat breaks down if you look at an infinite number of scenarios or uncertainty values (if you talk about one particular uncertain variable). For example, let's take population growth in economic models. Depending on your model and potential sensitivities to initial conditions, the resolution of this variable matters. For some context, current population growth is about 1.1% per annum. But we might be uncertain about how this will develop in the future. Maybe 1.0%? Maybe 1.2%? Maybe a resolution of 0.1% is enough. In that case, what range would we feel comfortable putting a probability distribution over? [0.6, 1.5] maybe? Then n=10, and with a uniform distribution, 1.4% population growth would be 10% likely. But what if minor changes are important? You end up with an infinite number of potential values, even if you restrict the range of possible values. How do we square this situation with the 1/n approach? I'm uncertain.
  2. My other point is more of a disclaimer. I'm not advocating for throwing out expected-utility thinking completely. And I'm still a Bayesian at heart (which sometimes means that I pull numbers out of my behind^^). My point is that it is sometimes problematic to use a model, run it in a few configurations (i.e. for a few scenarios), calculate a weighted average of the outcomes, and call it a day. This is especially problematic if we look at complex systems and models in which non-linearities compound quickly. If you have 10 uncertainty variables, each of them a float with huge ranges of plausible values, how do you decide which scenarios (points in uncertainty space) to run? A posteriori weighted averaging likely fails to capture the complex interactions and the outcome distributions (see the toy sketch below). What I'm trying to say is that I'm still going to assume probabilities and probability distributions in daily life. And I will still conduct expected-utility calculations. However, when things get more complex (e.g. in model land), I might advocate for more caution.
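Here is the promised toy sketch for point 2: a nonlinear model where a weighted average over three hand-picked scenarios tells a rather different story than a broad sweep over the plausible ranges. The model, scenarios, and weights are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

def model(growth, feedback):
    """Toy nonlinear model where uncertainties compound over 20 steps."""
    state = 1.0
    for _ in range(20):
        state *= (1 + growth + feedback * state * 0.01)
    return state

# Three hand-picked scenarios with subjective weights...
scenarios = [(0.00, 0.1), (0.02, 0.5), (0.05, 1.0)]
weights = [0.3, 0.5, 0.2]
weighted_avg = sum(w * model(g, f) for w, (g, f) in zip(weights, scenarios))

# ...versus a broad uniform sweep over the plausible ranges.
growth = rng.uniform(0.0, 0.05, 10_000)
feedback = rng.uniform(0.1, 1.0, 10_000)
sweep = np.array([model(g, f) for g, f in zip(growth, feedback)])

print("weighted average of 3 scenarios:", round(weighted_avg, 2))
print("full-sweep mean:", round(sweep.mean(), 2),
      "95th percentile:", round(np.percentile(sweep, 95), 2))
```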

And thank you for sharing your thoughts on this matter! 

Indeed, focusing on hyper-unrealistic worst cases would be counter-productive in my view. Your arbitrariness objection is a justified one! However, there is some literature in this field engaging with the question of how to select or discover relevant scenarios. I think there are some methods and judgment calls that let us handle this situation more gracefully than just waving one's hand.

And I agree with your statement and the linked post that the weaker claim is easier to accept. In practice, I would probably still go beyond it and use decision-making under deep uncertainty to inform the policy-making process. This still seems like a better approach than the default.

What you can also do is handcraft scenarios, as is done for climate change mitigation. You have various shared socioeconomic pathways (SSPs) created by a lot of experts. You could attempt to find policy solutions that would perform sufficiently well for SSP4 or SSP5. But in the end, it might just be very useful to involve the stakeholders and let them co-decide these aspects.
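A minimal sketch of that satisficing idea, with invented placeholder scenarios and thresholds rather than actual SSP parameters:

```python
# Satisficing against a handful of handcrafted scenarios (values are invented
# placeholders, not actual SSP parameters).
scenarios = {
    "SSP4-like": {"inequality": 0.8, "cooperation": 0.3},
    "SSP5-like": {"inequality": 0.5, "cooperation": 0.2},
}

def outcome(policy_strictness, scenario):
    """Toy loss to be minimized: low cooperation and high inequality hurt,
    stricter policy helps but only partially."""
    s = scenario
    return (s["inequality"] + (1 - s["cooperation"])) * (1 - 0.6 * policy_strictness)

threshold = 1.0  # "sufficiently well" means staying below this loss
for strictness in [0.2, 0.5, 0.8]:
    ok = all(outcome(strictness, sc) <= threshold for sc in scenarios.values())
    print(f"strictness={strictness}: meets threshold in all scenarios: {ok}")
```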
