(Drafted by Sonnet 4.6)
Tags: Decision theory | Cluelessness | Longtermism | Epistemology | Economics
Summary
Hilary Greaves' work on cluelessness identifies a genuine and serious problem for EA decision-making: in cases of complex cluelessness, we cannot assign well-defined probabilities to the long-run consequences of our actions, and standard expected-value reasoning breaks down. Her proposed responses — imprecise credences, shifting to explicitly longtermist interventions — draw almost entirely from analytic philosophy and formal decision theory.
I want to suggest that there is a well-developed body of literature that has been grappling with essentially the same problem for over a century, and which Greaves' analysis appears not to engage: Post-Keynesian (PK) economics, and the tradition of thought on fundamental uncertainty that runs from Keynes' Treatise on Probability (1921) through Shackle, Davidson, and Minsky to the present day. This literature has produced not just a diagnosis of the problem but a set of practical heuristics and institutional responses that could meaningfully supplement EA analysis in situations of deep uncertainty.
This post is not a criticism of Greaves — her paper is excellent and the problem she identifies is real. It is a suggestion that EA's engagement with cluelessness has drawn from too narrow an intellectual tradition, and that PK economics offers genuinely useful tools the community has so far missed.
Greaves' Cluelessness Problem, Briefly
Greaves distinguishes between two types of cluelessness:
Simple cluelessness arises when our actions have long-run effects we cannot predict, but where those effects are plausibly symmetric across our available choices — the downstream effects of helping or not helping an old lady across the street scramble future population composition in ways we cannot track, but there is no principled reason to think one option is systematically better than the other. Greaves argues this type of cluelessness is relatively benign: indifference-based reasoning can handle it.
Complex cluelessness is the harder problem. It arises when we face interventions whose long-run effects we cannot predict, but where those effects are not symmetric — where we have some reason to think the signs and magnitudes of long-run consequences differ across our options, but we cannot assign coherent probabilities to those differences. The orthodox Bayesian response — assign precise credences to all possible consequences and maximise expected value — strikes Greaves as "too glib" here. The problem is not that we lack data; it is that the structure of the uncertainty itself resists probabilistic representation.
Greaves notes that the orthodox subjective Bayesian answer — offering precise estimates for all long-term ramifications and "running the numbers" — may in the end be the correct response, but that "something deeper is going on" in cases of complex cluelessness that makes it worth exploring alternatives. Her proposed alternatives include imprecise credences and a shift toward explicitly longtermist interventions designed to improve the expected course of the far future directly.
This is exactly right as a diagnosis. But the solutions she explores remain within a broadly decision-theoretic, philosophy-of-probability framework. And that is where I think PK economics has something important to add.
The Post-Keynesian Tradition on Fundamental Uncertainty
Keynes distinguished, in his Treatise on Probability (1921) and later in his response to Tinbergen (1939), between situations of risk — where we can assign meaningful numerical probabilities — and situations of fundamental (or true) uncertainty, where no such assignment is possible, not because we lack data, but because the future is genuinely non-ergodic: it is not a draw from a stable, pre-existing probability distribution.
This is not the same as having imprecise credences. For Keynes, and for the PK economists who developed his insight, fundamental uncertainty is not a state of incomplete knowledge that better data could remedy. It is a structural feature of a world in which human decisions, conventions, and institutions are themselves among the causes of future outcomes — meaning the distribution we are trying to sample from is partly constituted by the very beliefs and expectations we hold about it. The future, in this sense, is genuinely open.
Post-Keynesian economists — particularly Paul Davidson, G.L.S. Shackle, and later Hyman Minsky — developed this insight in the context of investment decisions, financial markets, and macroeconomic policy. Their key contributions relevant to EA's cluelessness problem include:
1. The non-ergodicity of social systems. Paul Davidson's work formalises the Keynesian intuition: economic and social systems are non-ergodic — the time-average of a system's behaviour does not converge to its ensemble average, because the system itself changes as a function of agents' decisions and beliefs. This means expected-value reasoning based on historical frequencies is structurally inappropriate in many social domains. The problem Greaves identifies as "complex cluelessness" is, in PK terms, what you always face when acting in a non-ergodic social system.
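The time-average/ensemble-average gap can be made concrete with the multiplicative coin-flip example familiar from Ole Peters' ergodicity economics (the payoff numbers below are illustrative, not drawn from the PK literature): a gamble whose per-round expected multiplier exceeds 1, yet whose typical single trajectory decays. A minimal Python sketch:

```python
import random
import statistics

random.seed(0)

# Illustrative multiplicative gamble: heads -> wealth * 1.5, tails -> wealth * 0.6.
# Ensemble (one-round expected) multiplier: 0.5*1.5 + 0.5*0.6 = 1.05  -> grows.
# Time-average growth per round: (1.5 * 0.6) ** 0.5 ~= 0.949          -> decays.

def final_wealth(rounds: int) -> float:
    """Wealth after playing the gamble `rounds` times, starting from 1.0."""
    w = 1.0
    for _ in range(rounds):
        w *= 1.5 if random.random() < 0.5 else 0.6
    return w

trajectories = [final_wealth(100) for _ in range(10_000)]

ensemble_avg = statistics.fmean(trajectories)    # dominated by rare lucky runs
median_outcome = statistics.median(trajectories)  # the "typical" trajectory

print(f"ensemble average: {ensemble_avg:.3f}")
print(f"median outcome:   {median_outcome:.6f}")
```

The ensemble average is large (driven by a handful of lucky trajectories), while the median trajectory collapses toward zero. In an ergodic system these two averages would coincide; here they diverge, which is the structural reason frequency-based expected-value reasoning can mislead about what any single agent or world will actually experience.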
2. Conventions and coordination as uncertainty management. Keynes argued in the General Theory that agents operating under fundamental uncertainty rely on conventions — particularly the convention of assuming the present state of affairs will continue unless there is specific reason to think otherwise — as practical guides to action. This is not irrational; it is the only workable response to a situation where no probability distribution is available. The convention provides a shared focal point for coordination, which itself has real consequences for outcomes.
3. Shackle on "crucial experiments." G.L.S. Shackle introduced the concept of the crucial experiment — a decision that is non-repeatable, non-reversible, and whose consequences destroy the conditions under which similar decisions could be made again. For Shackle, conventional expected-utility theory is strictly inapplicable to crucial experiments, because the concept of probability requires repeatable trials. Many of the decisions EA cares most about — particularly longtermist interventions aimed at existential risk — are crucial experiments in exactly Shackle's sense.
4. Minsky on institutional buffers. Hyman Minsky's work on financial instability argued that under fundamental uncertainty, the appropriate response is not to find better ways of assigning probabilities but to build institutional structures that limit the downside of being wrong — automatic stabilisers, lender-of-last-resort functions, and circuit-breakers that prevent cascading failures. The logic is: when you cannot know the probability distribution of outcomes, design institutions whose behaviour is good across a wide range of possible distributions.
What PK Economics Offers That Greaves' Framework Misses
Greaves' five proposed responses to cluelessness are: (1) make cost-effectiveness analyses more sophisticated, (2) give up EA, (3) do the "uber-analysis" (model all distant future effects), (4) adopt parochial morality (care only about near-term predictable effects), or (5) shift to explicitly longtermist interventions. She finds none fully satisfying, though option 5 is her tentative preference.
The PK tradition suggests some additional responses that are absent from this list:
Embrace robust decision-making over expected-value maximisation. When facing fundamental uncertainty, PK economists argue for choosing strategies that are robust — that perform reasonably well across a wide range of possible futures — rather than strategies that maximise expected value under a specific probability distribution you cannot defensibly assign. This is distinct from maximin (choosing the option with the best worst-case outcome); it is about identifying interventions whose causal mechanisms are likely to produce good results under many different models of how the world works. This maps naturally onto EA's existing preference for interventions with strong near-term evidence — not as a cop-out from cluelessness but as a principled response to it.
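The distinction between the two criteria can be sketched in a toy comparison (all interventions, models, and payoff numbers below are invented for illustration): expected-value maximisation requires a precise prior over rival models of the world, while a robustness criterion only asks in how many rival models an option performs acceptably.

```python
# Payoffs of three hypothetical interventions under four rival models of the
# world. All numbers are invented purely to illustrate the contrast.
payoffs = {
    "intervention_A": {"model_1": 200, "model_2": -50, "model_3": -50, "model_4": -50},
    "intervention_B": {"model_1": 10,  "model_2": 8,   "model_3": 9,   "model_4": 7},
    "intervention_C": {"model_1": 40,  "model_2": 5,   "model_3": -5,  "model_4": 2},
}

# Expected value requires a precise prior over models -- exactly the
# assignment that, under fundamental uncertainty, we cannot defend.
prior = {"model_1": 0.4, "model_2": 0.2, "model_3": 0.2, "model_4": 0.2}

def expected_value(option: str) -> float:
    return sum(prior[m] * v for m, v in payoffs[option].items())

# Robustness (satisficing across models): in how many rival models does the
# option clear a "reasonably good" threshold? No prior over models needed.
THRESHOLD = 0

def robustness(option: str) -> int:
    return sum(v > THRESHOLD for v in payoffs[option].values())

ev_choice = max(payoffs, key=expected_value)   # favours A's big upside in one model
robust_choice = max(payoffs, key=robustness)   # favours B, which does well everywhere
print(ev_choice, robust_choice)
```

Under the assumed prior, expected value picks the intervention whose entire case rests on one model being true; the robustness criterion picks the one whose mechanism works across all four. Note this is not maximin: the criterion counts acceptable performance across models rather than optimising the single worst case.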
Take conventions seriously as epistemic objects. The PK tradition suggests that widespread social conventions — including broadly shared norms, institutions, and expectations — are not just empirical facts to be modelled but uncertainty-managing devices that deserve to be protected and reinforced where they are beneficial. From an EA perspective, this suggests that interventions which destabilise well-functioning social conventions may be more costly under fundamental uncertainty than expected-value calculations would suggest, because the convention itself was performing work that no agent can easily replicate.
Prioritise reversibility and institutional resilience. Shackle's insight about crucial experiments suggests a strong asymmetry in how EA should weight reversible versus irreversible interventions under fundamental uncertainty. An intervention that, if it goes wrong, can be corrected, is categorically different from one that cannot be — not just in expected value but in the structure of the decision itself. This is a more principled grounding for the broadly held EA intuition about prioritising options that preserve optionality, one that does not rely on assigning probabilities to recovery scenarios.
Attend to the non-ergodicity of the specific systems you are intervening in. Not all social systems are equally non-ergodic. Some domains — well-functioning public health systems, stable legal institutions, mature financial markets — are closer to ergodic than others, and historical frequency data provides more reliable guidance. Others — political systems during regime transitions, emerging technologies, long-run demographic dynamics — are highly non-ergodic, and the application of expected-value reasoning there is structurally suspect regardless of how much data you have. PK economics provides a theoretical framework for distinguishing these cases that is more principled than the intuitive categorisation EA currently uses.
A Suggested Research Agenda
The overlap between PK fundamental uncertainty and EA's cluelessness problem is underexplored on both sides. Some specific questions that seem worth pursuing:
- Can Shackle's framework for crucial experiments be formalised in a way that is compatible with EA's decision-theoretic apparatus, and what implications would this have for how we evaluate longtermist interventions?
- Does the PK concept of conventions provide a more rigorous grounding for EA's "hits-based giving" and "option value" heuristics than the informal justifications currently in use?
- Is there a principled way to identify which EA-relevant domains are sufficiently non-ergodic that standard expected-value reasoning should be abandoned in favour of robustness criteria?
- Can Minsky's institutional buffer approach be translated into a framework for evaluating EA interventions that aim to improve systemic resilience rather than produce specific measurable outcomes?
A Note on Intellectual Silos
I want to be precise about the gap I am claiming exists — and honest about what I have and haven't checked.
Non-ergodicity is not entirely absent from EA discourse. Ole Peters' ergodicity economics has been discussed on this forum in the context of longtermism, and that work raises related concerns about the applicability of expected-value reasoning to non-repeatable decisions. If I have missed other engagements with this territory, I hope commenters will point me to them.
The more specific claim I am making is narrower: the Post-Keynesian tradition in particular — Shackle's crucial experiments, Davidson's formalisation of non-ergodicity in social systems, Minsky's institutional buffer approach — does not appear to have been brought into dialogue with Greaves' cluelessness framework. That specific connection seems worth making, not because EA has ignored uncertainty but because this tradition offers conceptual tools that are distinct from both the Peters ergodicity approach and the analytic philosophy responses Greaves herself considers.
Why might this specific tradition have been missed? Post-Keynesian economics sits outside mainstream economics, and mainstream economics itself sits largely outside the philosophy literature Greaves draws on. EA's intellectual community has deep roots in analytic philosophy, formal decision theory, and mainstream economics — it is not surprising that a tradition developed largely by heterodox macroeconomists writing about investment decisions has not made it onto the reading list. But the gap is not justified by the quality of the ideas. Keynes' original probability framework is sophisticated and was seriously engaged with by the very philosophers — de Finetti, Ramsey — whose work underpins modern Bayesianism. The failure of engagement looks more like a sociological fact about intellectual silos than a verdict on the merits.
I would be genuinely interested to hear from anyone who has engaged with this literature, and in particular whether the PK tradition has been considered and found wanting for reasons I have not addressed here.

Fwiw, DiGiovanni and I argue that following such heuristics is not an appropriate response to the deep-uncertainty situation EAs (at least impartial ones) face. We don't directly respond to the literature you cite, but rather to the arguments found in the following refs, which you might be interested in: Thorstad & Mogensen 2020; Tomasik 2015; The Global Priorities Institute 2024, §§1.2.1 and 4.2.1; Grant & Quiggin 2013.
Thanks for posting this :)
Cheers, I will take a look. Though I suspect that if you are responding to the arguments in these particular papers, and those papers don't draw on post-Keynesianism, they won't have put forward the strongest possible case.
Keep in mind that post-Keynesianism has been developing these ideas about how to manage cluelessness for more than 100 years.
Can I suggest using a chatbot to critique your analysis, so that you can see where post-Keynesians may agree and disagree with you? To get an idea, here is a summary I was able to generate for DiGiovanni's post (I promise I will read it properly later, but I wanted to get something back to you quickly):
Post-Keynesians would read DiGiovanni's post as a talented insider finally arriving at conclusions Keynes/PKs have held since the 1920s, but stopping just short of the full break. They'd applaud the demolition of the six EA approaches. They'd say the positive alternative shouldn't be "imprecise UEV plus near-term welfare" but rather a non-probabilistic framework built around conventions, weight of evidence, option-value preservation, and institutional robustness — and they'd point out that PKs have been developing exactly this toolkit, largely ignored by the EA and rationalist communities, for years.
Further, the article stays entirely inside a probabilistic framework — even its critique of expected value uses expected value concepts (UEV, credences, imprecise probabilities). From a PK view, the deeper missed point is that genuine uncertainty isn’t just hard to quantify, it’s unquantifiable in principle, and no refinement of the probability calculus fixes that.
The other big gap is the absence of a positive theory of action under true uncertainty. Keynes, Shackle, and Davidson all had answers — conventions, weight of evidence, option-value preservation — but DiGiovanni ends up in a kind of paralysis (“suspend judgment”) without drawing on this tradition. PKs would also flag the missing institutional dimension: the article assumes the problem is individual decision-making under uncertainty, when the Keynesian answer was always partly collective — build institutions that reduce the damage uncertainty can do, rather than trying to calculate through it.
The chatbot completely misses DiGiovanni's point fwiw aha. Literally all of the objections it raises are explicitly addressed in what I've linked or elsewhere in his sequence. :)
No pressure to read anything, though. It's a thorny topic and understanding all the complex details takes time.
lol embarrassing. Looks like it didn’t actually read it. Will try again…
Would be very interested to hear from you though, if you do your own analysis of what PK would critique from those posts. (You will be far better placed than me to make sure it doesn’t miss things)
Interesting link of ideas! A few thoughts:
- Nassim Taleb's conceptualisation of antifragility seems related to the institutional buffer.
- This paper argues that, under the assumption of strong epistemic cluelessness, consequentialist agents depend on Pascal's-wager-type scenarios to choose longtermist policies: https://link.springer.com/article/10.1007/s11229-023-04153-y. Such wagers might take the form described here: https://philarchive.org/rec/BALPMS
- https://longtermrisk.org/files/Bracketing_Cluelessness.pdf offers an argument for salvaging decision-making under cluelessness by isolating certain mechanisms of foresight.
Also, Kemp & Cremer, in their X-risk paper, argue for existential risk to be assessed through systemic risk analysis rather than what they call "hazard-centric" risk analysis, since isolating individual risks (e.g. CBRN, ASI) may lead to ignoring common contributing factors. That lens lends itself well to "institutional buffer"-type interventions for minimizing cascade risks.
Yeah, it's honestly quite amazing/impressive to me that those looking into this seem to have arrived at very similar insights to the Post-Keynesians on this point, but have done so without reference to that school of thought (which has been around and developing for more than 100 years).
So I guess my theory is that those working on these ideas could find not only a lot of common ground but perhaps also further insights they have not yet considered by engaging more directly with post-Keynesian methodology.