In sketch, the challenge of consequentialist cluelessness is that the consequences of our actions ramify far into the future (and thus - at least at first glance - lie far beyond our epistemic access). Although we know little about these consequences, we know enough to believe it unlikely they will neatly ‘cancel out’ to neutrality - indeed, they are likely to prove more significant than those we can assess. How, then, can we judge some actions to be better than others?
For example (one we shall return to), even if we can confidently predict that the short-run impact of donations to the Against Malaria Foundation is better than that of donations to Make-a-Wish, the final consequences depend on a host of recondite matters (e.g. does reducing child mortality increase or decrease population size? What effect does a larger population have on (among other things) economic growth, scientific output, and social stability? What effect do these have on the margin in terms of how the future of humankind goes?). In aggregate, these effects are extremely unlikely to neatly cancel out, and their balance will likely be much larger than the short-run effects.
Hilary Greaves, in presenting this issue (Greaves 2016), notes that there is an orthodox subjective Bayesian ‘answer’ to it: in essence, one should offer (precise) estimates for all the potential long-term ramifications, ‘run the numbers’ to give an estimate of the total value, and pick the option that is best in expectation. Calling this the ‘sceptical response’, she writes (p.12):
This sceptical response may in the end be the correct one. But since it at least appears that something deeper is going on in cases like the one discussed in section 5 [‘complex cluelessness’], it is worth exploring alternatives to the sceptical response.
Andreas Mogensen, in subsequent work (Mogensen 2019), goes further, suggesting this is a ‘Naive Response’. Our profound uncertainty across all the considerations which bear upon the choice-worthiness of AMF cannot be reasonably summarized into a precise value or probability distribution. Both Greaves and Mogensen explore imprecise credences as an alternative approach to this uncertainty.
I agree with both Greaves and Mogensen that there’s a deeper issue here, and that an orthodox reply along the lines of, “So be it, I assign numbers to all these recondite issues”, is too glib. Yet I do not think imprecise credences are the best approach to this problem. Nor do I think the challenge of cluelessness relies on imprecise credences: even if we knew for sure our credences should be precise, the sense of a deeper problem would remain.
I propose a different approach, taking inspiration from Amanda Askell and Phil Trammell, which uses credal fragility, rather than imprecision, to address decision-making under profound uncertainty. We can frame issues of ‘simple cluelessness’ (e.g. innocuous variations on our actions which scramble huge numbers of future conceptions) as considerations about whose effects we are resiliently uncertain, and which we may therefore reasonably discount as targets for further investigation to improve our best guess (which is commonly, but not necessarily, equipoise). By contrast, cases of complex cluelessness are just those where the credences we assign to the considerations which determine the long-run value of our actions are fragile, and where we have reasonable prospects of making our best guess better.
Such considerations seem crucial to investigate further: even though it may be even harder to (for example) work out the impact of population size on technological progress than it is the effect size of AMF’s efforts on child mortality, the much greater impact of the former than the latter on the total impact of AMF donations makes this topic the better target of investigation on the margin to improve our estimate of the value of donating to AMF.
My exploration of this approach suggests it has some attractive dividends: it preserves the features of orthodox theory most find desirable, avoids the costs of imprecise credences, and - I think - articulates well the core problem of cluelessness which Greaves, Mogensen, myself, and others perceive. Many of the considerations regarding the influence we can have on the deep future seem extremely hard, but not totally intractable, to investigate. Offering naive guesstimates for these, whilst lavishing effort on easier but less consequential issues, is a grave mistake. The EA community has likely erred in this direction.
On (and mostly contra) imprecise credences
Rather than a single probability function which will give precise credences, Greaves and Mogensen suggest the approach of using a set of probability functions (a representor). Instead of a single credence for some proposition p, we instead get a set of credences, arising from each probability function within the representor.
Although Greaves suggests imprecise credences as an ‘alternative line’ to orthodox subjective Bayesianism, Mogensen offers a stronger recommendation of the imprecise approach over the ‘just take the expected value’ approach, which he deems a naive response (p. 6):
I call this ‘the Naïve Response’ because it is natural to object that it fails to take seriously the depth of our uncertainty. Not only do we not have evidence of a kind that allows us to know the total consequences of our actions, we seem often to lack evidence of a kind that warrants assigning precise probabilities to relevant states. Consider, for example, the various sources of uncertainty about the indirect effects of saving lives by distributing anti-malarial bed-nets noted by Greaves (2016). We have reason to expect that saving lives in this way will have various indirect effects related to population size. We have some reason to think that the effect will be to increase the future population, but also some reason to think that it will be to decrease the net population (Roodman 2014; Shelton 2014). It is not clear how to weigh up these reasons. It is even harder to compare the relative strength of the reasons for believing that increasing the population is desirable on balance against those that support believing that population decrease is desirable at the margin. That the distribution of bed-nets is funded by private donors as opposed to the local public health institutions may also have indirect political consequences that are hard to assess via the tools favoured by the evidence-based policy movement (Clough 2015). To suppose that our uncertainty about the indirect effects of distributing anti-malarial bed-nets can be summarized in terms of a perfectly precise probability distribution over the relevant states seems to radically understate the depth of our uncertainty.
I fear I am too naive to be moved by Mogensen’s appeal. By my lights, although all of the topics he mentions are formidably complicated, contemplating them, weighing up the considerations that bear upon them, and offering a (sharp) ‘best guess’ alongside the rider that I am deeply uncertain (whether this uncertainty is cashed out in terms of resilience or ‘standard error’ - more on which later) does not seem to pretend false precision, nor to do any injustice to how uncertain I should be.
Trading intuitions offers little: it is commonplace in philosophy for some to be indifferent to a motivation others find compelling. I’m not up to the task of giving a good account of the de/merits of im/precise approaches (see e.g. here). Yet I can say more on why imprecise credences poorly articulate the phenomenology of this shared sense of uncertainty - and, more importantly, why they seem to fare poorly as a means to aid decision-making: the application of imprecise approaches gives results (generally of excessive scepticism) which seem inappropriate.
Unlike ‘straight ticket expected value maximization’, decision rules on representors tend to permit incomparability: if elements within one’s representor disagree on which option is better, it typically offers no overall answer. Mogensen’s account of one such decision rule (the ‘maximality rule’) notes this directly:
When there is no consensual ranking of a and a`, the agent’s preference with respect to these options is indeterminate: she neither prefers a to a`, nor a` to a, nor does she regard them as equally good.
I think advocates of imprecise credences consider this a feature rather than a bug, but I do not think I am alone in having the opposite impression. The articulation would be something along the lines of: decision rules on representors tend to be ‘radically anti-majoritarian’ - a single member of one’s credal committee is enough to ‘veto’ a comparison of a versus a`. This doesn’t seem the right standard in cases of consequentialist cluelessness, where one wants to make judgements like ‘on balance better in expectation’.
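This anti-majoritarian character can be made concrete with a toy sketch (everything below - the options, the ‘committee’, and the expected values - is invented for illustration; real representors are sets of probability functions, here collapsed into the expected values they induce):

```python
def maximality_compare(representor, a, b):
    """Compare options a and b under the maximality rule: a is preferred
    only if every element of the representor assigns it a higher expected
    value; a single dissenter renders the options incomparable."""
    if all(ev[a] > ev[b] for ev in representor):
        return a
    if all(ev[b] > ev[a] for ev in representor):
        return b
    return "incomparable"

# Nine of ten 'credal committee' members favour donating; one dissents.
representor = [{"donate": 1.0, "refrain": 0.0} for _ in range(9)]
representor.append({"donate": -0.1, "refrain": 0.0})  # the lone veto

print(maximality_compare(representor, "donate", "refrain"))  # incomparable
```

However lopsided the committee, a single dissenting element suffices to veto the comparison.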
This largely bottoms out in the phenomenology of uncertainty: if, in fact, one’s uncertainty is represented by a set of credence functions, and, in fact, one has no steers on the relative plausibility of the elements of this set compared to one another, then responding with indeterminacy when there is no consensus across all elements seems a rational response (i.e. even if the majority of my representor favours action a over alternative action a`, without steers on how to weigh a-favouring elements over others, there seems little room to surpass a ‘worst-case performance’ criterion like the maximality rule).
Yet I aver that in most-to-all cases (including those of consequentialist cluelessness) we do, in fact, have steers about the relative plausibility of different elements. I may think P(rain tomorrow) could be 0.05, and it could be 0.7 (and other things besides), yet I also have impressions (albeit imprecise ones) of one being more reasonable than the other. The urge of the orthodox approach is that we do better trying to knit these imprecise impressions into a distribution which weighs the spectrum of our uncertainty - even though it is an imperfect representation - rather than deploying representors, which often unravel anything downstream of them into indeterminacy.
To motivate this further: such (by my lights, costly) incomparability may prove pervasive and recalcitrant in cases of consequentialist cluelessness, for reasons related to the challenge of belief inertia.
A classical example of belief inertia goes like this: suppose a coin of unknown bias. It seems rationally permissible for one’s representor on the probability of said coin landing heads to be (0,1). Suppose one starts flipping this coin. No matter the number of coin flips (and how many land heads), the representor on the posterior seems stuck at (0,1): for any element in this posterior representor, and any given sequence of observations, we can find an element in the prior representor which would update to it.
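The inertia can be illustrated with a small sketch (hypothetical numbers throughout; each representor element is modelled as a Beta prior over the coin’s bias, with stubbornness represented by enormous pseudo-counts):

```python
def posterior_mean(a, b, heads, tails):
    """Mean of Beta(a + heads, b + tails): one element's updated
    estimate of P(heads) after observing the flips."""
    return (a + heads) / (a + b + heads + tails)

# Two stubborn representor elements: priors pinned near 0 and near 1.
stubborn_low = (1, 10_000)    # prior mean ~0.0001
stubborn_high = (10_000, 1)   # prior mean ~0.9999

heads, tails = 80, 20  # strong observed evidence of a heads-bias

low = posterior_mean(*stubborn_low, heads, tails)
high = posterior_mean(*stubborn_high, heads, tails)
print(f"{low:.4f}, {high:.4f}")  # 0.0080, 0.9979 - barely moved
```

For any point in (0,1) and any finite sequence of flips, a sufficiently stubborn prior lands its posterior there, so the representor’s spread never narrows.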
This problem is worse for propositions where we anticipate receiving very limited further evidence. Suppose (e.g.) we take ourselves to be deeply uncertain about the proposition “AMF increases population growth”. Following the spirit of the imprecise approach, we offer a representor with at least one element on either side of 0.5. Suppose one study, then several more, then a systematic review emerge, all finding that AMF lives saved do translate into increased population growth. There’s no guarantee all elements of our representor will, on this data, rise above 0.5 - it looks permissible for us to have included in our representor a credence function which would not do so (studies and systematic reviews are hardly infallible). If this proposition lies upstream of enough of the expected impact, such a representor entails we will never arrive at an answer as to whether donations to AMF are better than nothing.
For many of the other propositions subject to cluelessness (e.g. “A larger population increases existential risk”) we can only hope to acquire much weaker evidence than sketched above. Credence functions that remain resiliently ‘one side or the other’ of 0.5 in the face of this evidence again seem at least permissible (if not reasonable) to include in our representor. Yet doing so makes for pervasive and persistent incomparability: including a few mildly stubborn credence functions in some judiciously chosen representors can entail that effective altruism from the longtermist perspective is a fool’s errand. Yet this seems false - or, at least, if it is true, it is not true for this reason.
Through a representor darkly
A related challenge is that we have very murky access to what our representor either is or should be. A given state of (imprecise) uncertainty could plausibly be described by a very large number of candidate representors. As decision rules on representors tend to be exquisitely sensitive to which elements they contain, it may be commonplace that (e.g.) action a is recommended over a` given representor R, yet we can counter-propose R`, no less reasonable by our dim lights of introspective access, which holds a and a` to be incomparable.
All the replies here look costly to me. One could ‘go meta’ and apply the decision rules to the superset of all credence functions that are a member of at least one admissible representor (or perhaps devise some approach to aggregate across a family of representors), but this seems likely to amplify the problems of incomparability and murky access that apply to the composition of a single representor. Perhaps theory will be able to offer tools to supplement internal plausibility to assist us in picking the ‘right’ representor (although this seems particularly far off for ‘natural language’ propositions cluelessness tends to concern). Perhaps we can work backwards from our intuitions about when actions should be incomparable or not to inform what our representor should look like, although reliance on working backwards like this raises the question as to what value - at least prudentially - imprecise credences have as a response to uncertainty.
Another alternative for the imprecise approach is to go on the offensive: orthodoxy faces a similar problem. Any sharp representation I offer of my uncertainty is also susceptible to counter-proposals which seem similarly appropriate, and in some cases the overall judgement will prove sensitive to which representation I use. Yet although the problem is similar in kind, it is much less severe in degree: even heavy-tailed distributions are much less sensitive to ‘outliers’ than representors, and orthodox approaches have more resources available to aggregate and judge between a family of precise representations.
Minimally clueless forecasting
A natural test for approaches to uncertainty is to judge them by their results. For consequentialist cluelessness, this is impossible: there is no known ground truth of long-run consequences to judge against. Yet we can assess performance in nearby domains, and I believe this assessment can be adduced in favour of orthodox approaches versus imprecise ones.
Consider a geopolitical forecasting question, such as: “Before 1 January 2019, will any other EU member state [besides the UK] schedule a referendum on leaving the EU or the eurozone?” This question opened on the Good Judgement Open on 16 Dec 2017. From more than a year out, there would seem plenty to recommend imprecision: there were 27 other member states, each with its own complex political context, several with elections during this period, and a year is a long time in politics. Given all this, would an imprecise approach not urge that it is unwise to give a (sharp) estimate of the likelihood of this event occurring within a year?
Yet (despite remaining all-but-ignorant of these particulars) I still offered an estimate of 10% on Dec 17. As the event didn’t happen, my Brier score was a (pretty good) 0.02 - although still fractionally worse than the median forecaster for this question (0.019). If we instrumented this with a bet (e.g. “Is it better for me to bet donation money on this not occurring at given odds?”), I would fare better than my ‘imprecise/incomparable’ counterpart: depending on how wide their representor was, they could not say taking a bet at evens or 5:1 (etc.) was better than not doing so. By tortured poker analogy, my counterpart’s approach leads them to play much too tightly, leaving value on the table an orthodox approach can reliably harvest.
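For readers unfamiliar with the scoring: the Good Judgement Open uses the two-outcome form of the Brier score (squared error summed over both outcomes, so 0 is perfect and a maximally uncertain 50% forecast scores 0.5). A minimal sketch:

```python
def brier(forecast_yes, outcome_yes):
    """Two-outcome Brier score: sum of squared errors across the
    'yes' and 'no' outcomes (0 = perfect, 2 = maximally wrong)."""
    p = (forecast_yes, 1 - forecast_yes)
    o = (1.0, 0.0) if outcome_yes else (0.0, 1.0)
    return sum((pi - oi) ** 2 for pi, oi in zip(p, o))

print(round(brier(0.10, False), 6))  # 0.02 - the 10% forecast above
print(round(brier(0.50, False), 6))  # 0.5  - refusing to lean either way
```

This is why a sharp 10% forecast, on a question that resolves ‘no’, comfortably beats sitting at (or declining to name) 50%.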
Some pre-emptive replies:
First, showing a case where I (or the median forecaster) got it right means little: my counterpart may leave ‘easy money’ on the table when the precisification was right, yet not get cleaned out when it was wrong. Yet across my forecasts (on topics including legislation in particular countries, election results, whether people remain in office, and property prices - _all_ of which I know very little about), I do somewhat better than the median forecaster, and substantially better than chance (Brier ~ 0.23). Crucially, the median forecaster also almost always does better than chance (~ 0.32 for those who answered the same questions as I did) - which seems the analogous consideration for cluelessness, given our interest is in objective rather than relative accuracy. That the imprecise/incomparable approach won’t recommend taking ‘easy money’ seems a general trend rather than a particular case.
Second, these uncertainties are ‘easier’ than the examples of consequentialist cluelessness. Yet I believe the analogy is fairly robust, particularly with respect to ‘snap judgements’. There are obvious things one can investigate to get a better handle on the referendum question above: one could look at the various political parties standing in each country, see which wanted to hold such a referendum, and look at their current political standing (and whether this was likely to change over the year). My imprecise/incomparable counterpart, on consulting these, could winnow their representor so that (by the maximality rule) they may be recommended to take, rather than refrain from, even-odds or tighter bets.
Yet I threw out an estimate without doing any of these (and I suspect the median forecaster was not much more diligent). Without this, there seems much to recommend including in my representor at least one element with P>0.5 (e.g. “I don’t know much about the Front National in France, but they seem a party who may want to leave the EU, and one which has had political success, and I don’t know when the next presidential election is - and what about all the other countries I know even less about?”). As best as I can tell, these snap judgements (especially from superforecasters, but also from less trained or untrained individuals) still comfortably beat chance.
Third, these geopolitical forecasting questions generally have accessible base rates or comparison classes (the reason I threw out 10% the day after the referendum question opened was mostly this). Not so for consequentialist cluelessness - all of the questions about trends in long-term consequences are yet to resolve, and so we have no comparison class to rely on for (say) whether greater wealth helps the future go better or otherwise. Maybe orthodox approaches are fine when we can moor ourselves to track records, but they are inadequate when we cannot and have to resort to extrapolation fuelled by analogy and speculation - such as cases of consequentialist cluelessness.
Yet in practice forecasting often takes one far afield from accessible base rates. Suppose one is trying to work out (in early 2019) whether Mulvaney will last the year as White House Chief of Staff. One can assess turnover of White House staff, even turnover of staff in the Trump administration - but how should these be weighed for Mulvaney’s case in particular (cf. ‘reference class tennis’)? Further suppose one gets new information (e.g. media rumours of him being on ‘shaky ground’): this should change one’s forecast, but by how much (from 10% to 11%, or to 67%)? There seems to be a substantially similar ‘extrapolation step’ here.
In sum: when made concrete, the challenges of geopolitical forecasting are similar to those in consequentialist cluelessness. In both:
- The information we have is only connected to what we care about through a chain of very uncertain inferences (e.g. “This report suggests that Trump is unhappy with Mulvaney, but what degree of unhappiness should we infer given this is filtered to us from anonymous sources routed through a media report? And at what rate of exchange should this nebulous ‘unhappiness’ be cashed out into probability of being dismissed?”).
- There are multiple such chains (roughly corresponding to considerations which bear upon the conclusion), which messily interact with one another (e.g. “This was allegedly prompted by the impeachment hearing, so perhaps we should think Trump is especially likely to react given this speaks to Mulvaney’s competence at an issue which is key to Trump - but maybe impeachment itself will distract or deter Trump from staff changes?”)
- There could be further considerations we are missing which are more important than those identified.
- Nonetheless, we aggregate all of these considerations to give an all-things-considered crisp expectation.
Empirically, with forecasting, people are not clueless. When they respond to pervasive uncertainty with precision, their crisp estimates are better than chance, and when they update (from one crisp value to another) based on further information (however equivocal and uncertain it may be) their accuracy tends to improve. Cases of consequentialist cluelessness may differ in degree, but not (as best as I can tell) in kind. In the same way our track record of better-than-chance performance warrants believing our guesses on hard geopolitical forecasts, it also warrants believing that a similar cognitive process will give ‘better than nothing’ guesses on which actions tend to be better than others, as the challenges are similar in both.
Credal resilience and ‘value of contemplation’
Suppose I, on seeing the evidence AMF is one of the most effective charities (as best as I can tell) for saving lives, resolve to make a donation to it. Andreas catches me before I make the fateful click and illustrates all the downstream stakes I, as a longtermist consequentialist, should also contemplate - as these could make my act of donation much better or much worse than I first supposed.
A reply along the lines of, “I have no idea what to think about these things, so let’s assume they add up to nothing” is unwise. It is unwise because (per Mogensen and others) I cannot simply ‘assume away’ considerations to which the choiceworthiness of my action is sensitive. I have to at least think about them.
Suppose instead I think about it for a few minutes and reply, “On contemplating these issues (and the matter more generally), my best guess is the downstream consequences in aggregate are in perfect equipoise - although I admit I am highly uncertain on this - thus the best guess for the expected value of my donation remains the same.”
This is also unwise, but for a different reason. The error is not (pace cluelessness) in giving a crisp estimate for a matter I should still be profoundly uncertain about: if I hopped into a time machine, spent a thousand years mulling these downstream consequences over, and arrived at the same estimate (being no less uncertain about it); or if for some reason I could only think about the matter for those few minutes, then I should go with my all-things-considered best guess. The error is rather that, in these circumstances, it seems I should be thinking about these downstream consequences for much longer than a few minutes.
This rephrases ‘value of information’ (perhaps better, ‘value of contemplation’). The reason this seems acutely relevant here is precisely the motivation for consequentialist cluelessness: when the overall choiceworthiness of our action proves very sensitive to considerations we are very uncertain about, the expected marginal benefit of reducing our uncertainty will often make doing so a better use of our efforts than acting on our current best guess.
Yet in making wise decisions about how to allocate time to contemplate things further, we should factor in ‘tractability’. Some propositions, although high-stakes, might be those in which we are resiliently uncertain, and so effort trying to improve our guesswork is poorly spent. Others, where our credences are fragile (‘non-resiliently uncertain’) are more worthwhile targets for investigation.
I take the distinction between ‘simple’ and ‘complex’ cluelessness to be mainly this. Although vast consequences could hang in the balance with trivial acts (e.g. whether I click ‘donate now’ now or slightly later could be the archetypal ‘butterfly wing’ which generates a hurricane, or how trivial variations in behaviour might change the identity of a future child - and then many more as this different child and their descendants ‘scramble’ the sensitive identity-determining factors of other conceptions), we rule out further contemplation of ‘simple cluelessness’ because it is intractable. We know enough to predict that the ex post ‘hurricane minimizing trivial movements’ won’t be approximated by simple rules (“By and large, try and push the air molecules upwards”), but instead exquisitely particularized in a manner we could never hope to find out ex ante. Likewise, for a given set of ‘nearby conceptions’ to the actual one, we know enough to reasonably believe we will never have steers on which elements are better, or how - far causally upstream - we can send ripples in just the right way to tilt the odds in favour of better results.
Yet the world is not only causal chaos. The approximate present can approximately determine the future: physics allows us to confidently predict a plane will fly without modelling to arbitrary precision all the air molecules of a given flight. Similarly, we might discover moral ‘levers on the future’ which, when pulled in the right direction, systematically tend to make the world better rather than worse in the long run.
‘Complex cluelessness’ draws our attention to such levers, where we know little about which direction they are best pulled, but (crucially) it seems we can improve our guesswork. Whether (say) economic growth is good in the long run is a formidably difficult question to tackle. Yet although it is harder than (say) how many child deaths a given antimalarial net distribution can be expected to avert, it is not as hopelessly intractable as the best way to move my body to minimize the expected number of hurricanes. So even though the expected information per unit effort on the long-run impacts of economic growth will be lower than for evaluating a charity like AMF, the much greater importance of this consideration (both in terms of the value at stake and its wide applicability) makes it the better investment of our attention.
Our insight into the present is poor, and deteriorates inexorably as we try to scry further and further into the future. Yet longtermist consequentialism urges us to evaluate our actions based on how we forecast this future to differ. Although we know little, we know enough to see that our actions are apt to have important long-term ramifications; but we know very little about what those will precisely be, or how they will precisely matter. What ought we to do?
Yet although we know little, we are not totally clueless. I am confident the immediate consequences of donations to AMF are better than those of donations to Make-a-Wish. I also believe the consequences simpliciter are better for AMF donations than Make-a-Wish donations, although this belief depends very little on the RCT-backed evidence base, and much more on a weighted aggregate of poorly-educated guesswork on ‘longtermist’ considerations (e.g. that communities with fewer child deaths fare better in naive terms, and that the most notable dividends of this with long-term relevance - such as economic growth in poorer countries - systematically tend to push the long-term future in a better direction) which I am much less confident in.
This guesswork, although uncertain and fragile, nonetheless warrants my belief that AMF donations are better than Make-a-Wish ones. The ‘standard of proof’ for consequentialist decision-making is not ‘beyond a reasonable doubt’, but a balance of probabilities, and no matter how lightly I weigh my judgement aggregating across all these recondite matters, it is sufficient to tip the scales if I have nothing else to go on. Withholding judgement will do worse if, to any degree, my tentative guesswork tends towards the truth. If I had a gun to my head and had to allocate money to one or the other right now, I shouldn’t flip a coin.
Without the gun, another option I have is to try to improve my guesswork. To best improve it, I should allocate my thinking time to the uncertainties with the highest yield - loosely, the product of their ‘scale’ (how big an influence they have on the total value) and ‘tractability’ (how much I can improve my guess per unit effort).
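As a toy illustration of this allocation rule (all numbers below are invented purely for illustration; this is a loose ranking heuristic, not a worked-out value-of-information calculation):

```python
# Each candidate question gets a rough 'scale' (share of total value
# turning on it) and 'tractability' (expected improvement in our guess
# per unit of contemplation). Both columns are invented for illustration.
questions = {
    "Is economic growth good for the long-term future?": (100.0, 0.02),
    "Exact child-mortality effect of AMF nets":          (1.0, 0.20),
    "Butterfly effects of when I click 'donate'":        (100.0, 0.00),
}

# Rank by yield = scale * tractability, highest first.
ranked = sorted(questions.items(),
                key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for q, (scale, tract) in ranked:
    print(f"{scale * tract:6.2f}  {q}")
```

On these made-up numbers, the growth question dominates despite being far less tractable, and the ‘simple cluelessness’ question scores zero however vast its scale.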
Some uncertainties, typically those of ‘simple’ cluelessness, score zero by the latter criterion: I can see from the start I will not find ways to intervene on causal chaos to make it (in expectation) better, and so I leave them as high variance (but expectation neutral) terms which I am indifferent towards. If I were clairvoyant on these matters, I expect I would act very differently, but I know I can’t get any closer than I already am.
Yet others, those of complex cluelessness, do not score zero on ‘tractability’. My credence in “economic growth in poorer countries is good for the longterm future” is fragile: if I spent an hour (or a week, or a decade) mulling it over, I would expect my central estimate to change, although my remaining uncertainty would be only a little reduced. Given this consideration has a much greater impact on what I ultimately care about, time spent on it looks better than time spent further improving the estimate of immediate impacts like ‘number of children saved’. It would be unwise to continue delving into the latter at the expense of the former. Would that we did otherwise.
Thanks to Andreas Mogensen, Carl Shulman, and Phil Trammell for helpful comments and corrections. Whatever merits this piece has are mainly owed to their insight. Its failures are owed to me (and many to lacking the time and ability to address their criticisms better).
 Obviously, the impact of population size on technological progress also applies to many other questions besides the utility of AMF donations.
 (Owed to/inspired by Mogensen) One issue under the hood here is whether precision is descriptively accurate or pragmatically valuable. Even if (in fact) we cannot describe our own uncertainty as (e.g.) a distribution to arbitrary precision, we may fare better if we act as if this were the case (the opposite is also possible). My principal interest is the pragmatic one: that agents like ourselves make better decisions by attempting EV-maximization with precisification than they would with imprecise approaches.
 One appeal I see is the intuition that for at least some deeply mysterious propositions, we should be rationally within our rights to say, “No, really, I don’t have any idea”, rather than - per the orthodox approach - taking oneself to be rationally obliged to offer a distribution or summary measure to arbitrary precision.
 This point, like many of the other (good) ones, I owe to conversation with Phil Trammell.
 And orthodox approaches can provide further resources to grapple with this problem: we can perform sensitivity analyses with respect to factors implied by our putative distribution. In cases where we get a similar answer ‘almost all ways you (reasonably) slice it’, we can be more confident in making our decision in the teeth of our uncertainty (and vice versa). Similarly, this quantitative exercise in setting distributions and central measures can be useful for approaching reflective equilibrium on the various propositions which bear upon a given conclusion.
 One might quibble that we have reasons to rule out extremely strong biases (e.g. P(heads) = 10^-20), but nothing important turns on this point.
 Again, the ‘engine’ of this problem is the indeterminate weights on the elements in the set. Orthodoxy must also concede that, given a prior ranging over (0,1) for the bias, any degree of bias (towards heads or tails) remains possible after any given sequence of heads and tails. Yet, as the prior expresses the relative plausibility of the different elements, it can say which values(/intervals) have become more or less likely. Shorn of this, the imprecise approach is inert.
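A minimal worked example of the orthodox machinery (the flip counts are invented): with a uniform prior over the coin’s bias, every bias in (0,1) retains nonzero posterior density after any finite sequence of flips, yet the posterior still says which biases have become more or less plausible.

```python
import math

def beta_pdf(x, a, b):
    # Density of a Beta(a, b) distribution at x.
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    return math.exp(log_norm + (a - 1) * math.log(x) + (b - 1) * math.log(1 - x))

# Uniform (Beta(1, 1)) prior over the bias; then observe 8 heads, 2 tails.
heads, tails = 8, 2

def posterior(x):
    # Conjugate update: posterior is Beta(1 + heads, 1 + tails).
    return beta_pdf(x, 1 + heads, 1 + tails)

# No bias is ruled out (nonzero density everywhere in (0, 1))...
assert posterior(0.2) > 0
# ...but orthodoxy can still rank which biases have grown more plausible:
print(posterior(0.8) > posterior(0.2))  # prints True
```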
 I don’t think the picture substantially changes if the proposition changes from “Does a larger population increase existential risk?” to “What is the effect size of a larger population on existential risk?”
 Winnowing our representor based on its resilience to the amount of evidence we can expect to receive looks ad hoc.
 A further formidable challenge is how to address ‘nested’ imprecision: what to do if we believe we should be imprecise about both P(A) and P(B|A). One family of cases that springs to mind is moral uncertainty: our balance of credence across candidate moral theories, and (conditional on a given theory) which option is best, both seem to ‘fit the bill’ for an imprecise approach. Naive approaches seem doomed to incomparability, as the total range of choice-worthiness will inevitably expand the longer the conditional chain - and, without density, we seem stuck in the commonplace case where these ranges overlap.
 Earlier remarks on the challenge of mildly resilient representor elements provide further motivation. Consider Mogensen’s account of how imprecision and maximality will not lead to a preference for AMF over Make-A-Wish given available evidence. After elaborating the profound depth of our uncertainty around the long-run impacts of AMF, he writes (pp. 16-17):
[I]t was intended to render plausible the view that the evidence is sufficiently ambiguous that the probability values assigned by the functions in the representor of a rational agent to the various hypotheses that impact on the long-run impact of her donations ought to be sufficiently spread out that some probability function in her representor assigns greater expected moral value to donating to the Make-A-Wish Foundation.
Proving definitively that your credences must be so spread out as to include a probability function of a certain kind is therefore not in general within our powers. Accordingly, my argument rests in large part on an appeal to intuition. But the intuition to which I am appealing strikes me as sufficiently forceful and sufficiently widely shared that we should consider the burden of proof to fall on those who deny it.
Granting the relevant antecedents, I share this intuition. Yet how this intuitive representor should update in response to further evidence is mysterious to me. I can imagine some representors that would no longer remain ‘sufficiently spread out’ if someone took a month or so to consider all the recondite issues Mogensen raises, and concluded, all things considered, that AMF has better overall long-run impact than Make-A-Wish; I can imagine others that would remain ‘sufficiently spread out’ even in the face of person-centuries of work on each issue raised. This (implicit) range spans multitudes of candidate representors.
 Mogensen notes (fn. 4) the possibility that a representor could be a fuzzy set (whereby membership is degree-valued), which could be a useful resource here. One potential worry is that this invites an orthodox-style approach: one could weigh each element by its degree of membership and aggregate across them.
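To make the worry concrete (all numbers below are invented): if each credence function in the fuzzy representor yields a precise expected value and carries a degree of membership, treating those degrees as weights collapses the representor back into a single orthodox expectation.

```python
# Hypothetical fuzzy representor: pairs of (expected value of donating to
# AMF under that credence function, degree of membership in the set).
representor = [
    (+5.0, 0.9),  # an AMF-favouring credence function, a clear member
    (+1.0, 0.6),
    (-2.0, 0.2),  # the Make-A-Wish-favouring function, a marginal member
]

# Weighing each element by its membership degree recovers a single
# precise expected value - exactly the orthodox-style aggregation the
# footnote worries the fuzzy-set move invites.
total_weight = sum(w for _, w in representor)
aggregate_ev = sum(ev * w for ev, w in representor) / total_weight
print(aggregate_ev > 0)  # the aggregated view favours AMF; prints True
```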
 Although recursive criticisms are somewhat of a philosophical ‘cheap shot’, I note that “What representor should I use for p?” seems (for many ‘p’s) the sort of question which the motivations of imprecise credences (e.g. the depth of our uncertainty, (very) imperfect epistemic access) recommend an imprecise answer.
 I think what cost me was I didn’t update this forecast as time passed (and so the event became increasingly more unlikely). In market terms, I set a good price on Dec 17, but I kept trading at this price for the rest of the year.
 Granted, the imprecise/incomparable approach is not recommending refraining from the bet, but saying it has no answer as to whether taking the bet or not doing so is the better option. Yet I urge one should take these bets, and so this approach fails to give the right answer.
 We see the same result with studies on wholly untrained cohorts (e.g. undergraduates). E.g. Mellers et al. (2017).
 In conversation, one superforecaster I spoke to suggested they take around an hour to give an initial forecast on a question: far too little time to address the recondite matters that bear upon a typical forecasting question.
 Cf. the apocryphal remark that it is ‘too early to tell’ what the impact of the French Revolution was.
 For space I only illustrate with my example - Mogensen (2019) ably explains the parallel issues in the ‘AMF vs. Make-A-Wish’ case.
 Although this is a weak consideration, I note approaches ‘in the spirit’ of an imprecise/incomparable approach, when applied to geopolitical forecasting, are associated with worse performance: rounding estimates (e.g. from a percentage to n bands across the same interval) degrades accuracy - especially for the most able forecasters - and those who commit to making (precise) forecasts and ‘keeping score’ improve more than those who do less of this. Cf. Tetlock’s fireside chat:
Some people would say to us, “These are unique events. There's no way you're going to be able to put probabilities on such things. You need distributions. It’s not going to work.” If you adopt that attitude, it doesn't really matter how high your fluid intelligence is. You're not going to be able to get better at forecasting, because you're not going to take it seriously. You're not going to try. You have to be willing to give it a shot and say, “You know what? I think I'm going to put some mental effort into converting my vague hunches into probability judgments. I'm going to keep track of my scores, and I'm going to see whether I gradually get better at it.” The people who persisted tended to become superforecasters.
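The rounding result can be illustrated with a toy simulation - a sketch under invented assumptions, not a reproduction of the cited studies: a well-calibrated forecaster forced to round to three bands scores worse on the Brier rule than one reporting precise probabilities.

```python
import random

random.seed(1)

def brier(p, outcome):
    # Brier score for a single forecast: lower is better.
    return (p - outcome) ** 2

def round_to_bands(p, n_bands):
    # Snap a probability to the nearest of n equally spaced values in [0, 1].
    return round(p * (n_bands - 1)) / (n_bands - 1)

trials = 100_000
fine_score = coarse_score = 0.0
for _ in range(trials):
    p = random.random()                        # forecaster's true probability
    outcome = 1 if random.random() < p else 0  # calibrated: event ~ Bernoulli(p)
    fine_score += brier(p, outcome)
    coarse_score += brier(round_to_bands(p, 3), outcome)  # only 0, 0.5, 1 allowed

print(fine_score / trials < coarse_score / trials)  # prints True: rounding hurts
```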
 Skipping ahead, Trammell (2019) also sketches an account similar to the ‘value of contemplation’ which I go on to graffiti more poorly below (pp. 6-7):
[W]e should ponder until it no longer feels right to ponder, and then to choose one of the acts it feels most right to choose. Lest that advice seem as vacuous as “date the person who maximizes utility”, here is a more concrete implication. If pondering comes at a cost, we should ponder only if it seems that we will be able to separate better options from worse options quickly enough to warrant the pondering—and this may take some time. Otherwise, we should choose immediately. When we do, we will be choosing literally at random; but if we choose after a period of pondering that has not yet clearly separated better from worse, we will also be choosing literally at random.
The standard Bayesian model suggests that if we at least take a second to write down immediate, baseless “expected utility” numbers for soap and pasta, these will pick the better option at least slightly more often than random. The cluelessness model sketched above predicts (a falsifiable prediction!) that there is some period—sometimes thousandths of a second, but perhaps sometimes thousands of years—during which these guesses will perform no better than random.
I take the evidence from forecasting to suggest that orthodoxy can meet the challenge posed in the second paragraph. As it seems people (especially the most able) can make ‘rapid fire’ guesses on very recondite matters that are better than chance, we can argue by analogy that the period before guesswork tends to do better than chance, even for the hard questions of complex cluelessness, lies more towards ‘thousandths of a second’ than ‘thousands of years’.
 In conversation Phil Trammell persuades me it isn’t strictly information in the classic sense of VoI which is missing. Although I am clueless with respect to poker, professional players aren’t, and yet VoI is commonplace in their decision making (e.g. factoring in whether they want to ‘see more cards’ in terms of how aggressively to bet). Contrariwise I may be clueless whilst having all the relevant facts, if I don’t know how to ‘put them together’.
 Naturally, our initial impressions here might be mistaken, but can be informed by the success of our attempts to investigate (cf. multi-armed bandit problems).
 Perhaps one diagnostic for ‘simple cluelessness’ could be this sensitivity to ‘causal jitter’.
 An aside on generality and decomposition - relegated to a footnote as the natural responses seem sufficient to me: we might face a trade-off between broader and narrower questions on the same topic: is economic growth in country X a good thing, versus is economic growth generally good? In deciding to focus on one or the other, we should weigh up their relative tractability (on its face, perhaps the former question is easier), applicability to the decision (facially, if the intervention is in country X, the more particular question is more important to answer - even if economic growth were generally good or bad, country X could be an exception), and generalisability (the latter question offers a useful steer for other decisions we might have to make).
 If it turned out it was hopelessly intractable after all - that, despite appearances, there are no trends or levers, or perhaps there are many such levers connected to one another in an apparently endless ladder of plausibly sign-inverting crucial considerations - then I think we can make the same reply (roughly, indifference) that we make to matters of simple cluelessness.