TL;DR: Calibration is essential for Observers but dangerous for Actors. In decision-dependent environments, "accurate" forecasting creates self-fulfilling prophecies of failure. To maximize expected value, you often need to strategically break your own calibration.
Effective Altruism prioritizes calibration, using metrics like Brier scores to ensure our internal maps match the territory. This "Scout Mindset" is the gold standard for Observers—those predicting events they cannot influence, like elections or weather patterns. However, this framework fails for Actors, such as founders or wartime leaders, whose actions directly alter the probability of the outcome. For an Observer, a prediction is a statistic; for an Actor, it is an intervention.
This distinction creates a trap in high-stakes scenarios. Consider a leader undertaking a project with a 20% base rate of success. A passive forecaster would accurately report this low probability and act conservatively. Allies and investors, sensing the low confidence, would withhold resources, and the actual probability of success would collapse to zero. By striving for informational accuracy, the leader guarantees failure. To avoid this self-fulfilling prophecy, a Rational Actor must often signal absolute confidence, diverging from the "true" probability not out of delusion, but as a structural necessity for coordinating others.
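To make the feedback loop concrete, here is a minimal sketch in Python. The functional form, the coordination threshold, and the constants are assumptions chosen for illustration; they are not taken from the whitepaper's model.

```python
# Toy model of a decision-dependent forecast. Allies coordinate only if the
# leader's signaled confidence clears a threshold; below it, resources are
# withheld and the project collapses. All parameters are illustrative.

def realized_probability(signal: float,
                         base_rate: float = 0.20,
                         threshold: float = 0.50,
                         coordination_gain: float = 0.50) -> float:
    """Probability of success after supporters react to the signal."""
    if signal < threshold:
        return 0.0  # low confidence is sensed, funding is withheld
    return min(1.0, base_rate + coordination_gain * signal)

print(realized_probability(0.20))  # 0.0   -- the "accurate" forecast self-destructs
print(realized_probability(0.95))  # 0.675 -- confident signaling lifts the odds
```

Note that in this toy model the only self-consistent (calibrated) forecast is zero: any signal low enough to match the base rate triggers the very collapse it predicts, which is exactly why the Actor must break calibration to reach the good equilibrium.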
Escaping this low-probability equilibrium requires more than increased effort, which typically yields diminishing returns. It requires non-linear moves that shift the parameters of the game itself. Our model identifies Strategic Surprise as a necessary "shock": a discontinuous action that appears irrational on a standard cost-benefit curve but is required to reset the odds. Similarly, actors must employ Theatrical Indignation, authenticating their signal by amplifying a genuine principle until it becomes reputationally costly to back down. This "burning of the boats" forces the probability of success upward by removing the option of failure.
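The contrast between incremental effort and a discontinuous shock can be sketched the same way. Here, diminishing returns are modeled as a concave success curve with a ceiling, and a Strategic Surprise as a one-off move that raises the ceiling itself rather than sliding along the curve. The functional form and constants are again assumptions, not the whitepaper's model.

```python
import math

def p_success(effort: float, ceiling: float = 0.35, rate: float = 0.8) -> float:
    """Concave response: each extra unit of effort buys less probability."""
    return ceiling * (1 - math.exp(-rate * effort))

print(p_success(1.0))   # ~0.19
print(p_success(3.0))   # ~0.32 -- tripling effort gains little
print(p_success(10.0))  # ~0.35 -- pinned under the ceiling

# A strategic surprise adds no effort; it shifts a parameter of the game,
# e.g. by unlocking a coordination equilibrium that was previously closed.
print(p_success(3.0, ceiling=0.80))  # ~0.73 after the shock
```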
The critical distinction lies in the elasticity of the outcome. If a result is insensitive to effort and signaling (a "Quagmire"), then overconfidence is merely waste, and one should remain a Scout. But if the outcome depends on coordination, morale, or funding, then "accurate" forecasting is fatal. In these "Pivotal Moments," we must stop conflating informational accuracy with instrumental impact. If you want to predict the future, calibrate; if you want to change it, you must rationally overreact.
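A rough decision rule follows from this, sketched below under stated assumptions: estimate the sensitivity of the outcome to the signaled confidence, and let that classify the situation. The `outcome_model` functions and the threshold are hypothetical placeholders, not the paper's formalism.

```python
# Classify a situation by the elasticity of the outcome with respect to the
# signal: near-zero elasticity is a Quagmire (calibrate), high elasticity is
# a Pivotal Moment (overreact). All names and constants are illustrative.

def elasticity(outcome_model, signal: float, eps: float = 1e-4) -> float:
    """Finite-difference sensitivity of the outcome to the signal."""
    return (outcome_model(signal + eps) - outcome_model(signal - eps)) / (2 * eps)

def recommended_stance(outcome_model, calibrated_belief: float,
                       threshold: float = 0.05) -> str:
    if abs(elasticity(outcome_model, calibrated_belief)) < threshold:
        return "Quagmire: stay a Scout; report the calibrated belief."
    return "Pivotal Moment: signal the confidence that maximizes the outcome."

quagmire = lambda s: 0.20                    # outcome ignores the signal
pivotal = lambda s: min(1.0, 0.1 + 0.7 * s)  # coordination-driven outcome

print(recommended_stance(quagmire, 0.20))  # Quagmire -> calibrate
print(recommended_stance(pivotal, 0.20))   # Pivotal  -> overreact
```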
Here is a link to the mathematical paper that provides the game-theoretic proofs: https://www.bloomsburytech.com/whitepapers
We are building Bloomsbury Tech, a causal AI lab for alternative investments! Email me at eugene.shcherbinin@bloomsburytech.com if you're interested.
