# Model uncertainty

Model uncertainty is uncertainty about a model itself, including about the reliability of the model's own internal uncertainty estimates.

A useful model is one that is simple enough to be analyzed easily, while nevertheless being similar enough to reality that this analysis can be used as a basis for predictions about the actual world. Unfortunately, it can be difficult to judge whether a given model is in fact similar enough. Furthermore, even if some predictions based on a model come true, this does not necessarily mean that the next prediction based on the model will also come true.

A classic illustration of the importance of using appropriate models, as well as the difficulty of noticing when a model is inappropriate, is the 2007 financial crisis. In the years leading up to the crisis, many financial actors made investment decisions on the basis of models that assumed economic stability. Once this simplifying assumption ceased to hold, it became clear that their models had not sufficiently matched reality, and that the outcome of their decisions would be disastrous.

One strategy for dealing with uncertainty about the appropriateness of models is to construct multiple diverse models and weight their predictions, rather than relying on a single one. However, in cases of radical uncertainty, even this method may not be enough. We may think there is a chance that none of the models we have been able to generate is appropriate, and that we need to factor in what could happen if that were the case. It is obviously very hard to say anything about such an uncertain case, but it may be possible to say some things. For instance, in their paper "Probing the Improbable," Toby Ord, Rafaela Hillerbrand, and Anders Sandberg argue that in cases where our models of certain low-probability, high-stakes events - such as existential risks - are wrong, the chance of disaster may be substantially higher than if the models are right.
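The weighting strategy described above can be sketched in a few lines. This is a hypothetical illustration, not a method from the source: the function, predictions, and credences are all invented for the example.

```python
# Hypothetical sketch: combining the predictions of several diverse models,
# weighted by how much credence we place in each model being appropriate.

def combine_predictions(predictions, weights):
    """Return the credence-weighted average of model predictions.

    predictions: one point prediction per model.
    weights: credences in each model (need not sum to 1; normalized here).
    """
    total = sum(weights)
    return sum(p * w / total for p, w in zip(predictions, weights))

# Three hypothetical models predict the same quantity; we place 50%, 30%,
# and 20% credence in them respectively.
estimate = combine_predictions([120, 200, 90], [0.5, 0.3, 0.2])  # → 138.0
```

Note that this only averages over the models we have; it assigns no weight to the possibility that all of them are wrong, which is exactly the residual risk the paragraph above highlights.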

## Uncertainty within models

When using a model to make estimates, we will often be uncertain about what values the model's numerical parameters should have.

For example, if we decide to use 80,000 Hours' three-factor framework for selecting cause areas, we may be unsure what value to assign to a given cause area's tractability. Similarly, if we are attempting to estimate the value of a donation to a bednet charity, we may be unsure how many cases of malaria are prevented per bednet distributed.

It is important to make such uncertainty clear, both so that our views can be more easily challenged and improved by others and so that we can derive more nuanced conclusions from the models we use.

By plugging probability distributions or confidence intervals, rather than individual point estimates, into the parameters of a given model, we can calculate an output that also reflects our uncertainty. However, it is important to be careful when performing such calculations, since small mathematical or conceptual errors can easily lead to incorrect or misleading results. A good tool for avoiding these sorts of errors is Guesstimate.[1]
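One common way to propagate distributions through a model is Monte Carlo sampling, which is the kind of calculation tools like Guesstimate automate. The sketch below is purely illustrative: the parameter names, distributions, and ranges are invented for the bednet example, not real charity data.

```python
import random

# Illustrative sketch of propagating parameter uncertainty by Monte Carlo
# sampling. All parameter ranges here are made up for the example.

def bednet_cases_prevented(n_samples=100_000, seed=0):
    """Sample a distribution over malaria cases prevented per $1,000 donated."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_samples):
        nets_per_1000usd = rng.uniform(150, 250)         # nets bought per $1,000
        cases_per_net = rng.lognormvariate(-2.0, 0.5)    # cases prevented per net
        results.append(nets_per_1000usd * cases_per_net)
    results.sort()
    return {
        "median": results[n_samples // 2],
        "90% interval": (results[int(0.05 * n_samples)],
                         results[int(0.95 * n_samples)]),
    }
```

Reporting an interval rather than a single number makes the conclusion easier to challenge: a critic can dispute the input distributions directly instead of reverse-engineering a point estimate.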

It has also been argued, e.g. by Holden Karnofsky, that in cases with high uncertainty, estimates that assign an intervention a very high expected value are likely to reflect some unseen bias in the calculation, and should therefore be treated with skepticism.
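One simple way to formalize this skepticism, in the spirit of Karnofsky's argument, is a Bayesian adjustment: shrink an explicit expected-value estimate toward a prior, with noisier estimates shrunk more. The sketch below uses the standard normal-normal update; the specific numbers are invented for illustration.

```python
# Minimal sketch of a Bayesian adjustment: a noisy expected-value estimate
# is pulled toward a prior, and the noisier the estimate, the stronger
# the pull. All numbers are illustrative, not from any real evaluation.

def bayesian_adjust(prior_mean, prior_var, estimate, estimate_var):
    """Posterior mean given a normal prior and a normal noisy estimate."""
    precision = 1 / prior_var + 1 / estimate_var
    return (prior_mean / prior_var + estimate / estimate_var) / precision

# A very high estimate with very high variance is pulled almost all the
# way back toward the prior mean of 10.
adjusted = bayesian_adjust(prior_mean=10, prior_var=25,
                           estimate=1000, estimate_var=10_000)  # ≈ 12.5
```

On this view, a surprisingly high expected-value estimate mostly provides evidence that the calculation is noisy or biased, which is why it warrants extra scrutiny rather than being taken at face value.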

...
