I want to find a good thought experiment that makes us appreciate how radically uncertain we should be about the very long-term effects of some actions altruistically motivated actors might take. Some have already been proposed in the 'cluelessness' literature -- a nice overview of which is given by Tarsney et al. (2024, §3) -- but I don't find them ideal, for reasons I'll briefly suggest. So let me propose a new one. Call it the 'Dog vs Cat' dilemma:
Say you are a philanthropy advisor reputed for being unusually good at forecasting the various direct and indirect effects donations to different causes can have. You are approached by a billionaire with a deep love for companion animals who wants to donate almost all his wealth to animal shelters. He asks you whether he should donate to dog shelters around the world or to cat shelters instead.[1] Despite the relatively narrow set of options he is considering, he importantly specifies that he does not only care about the short-term effects his donation would have on cats and dogs around the world. He carefully explains and hugely emphasizes that he wants his choice to be the one that is best, all things considered (i.e., not bracketing out effects on beings other than companion animals or effects on the long-term future).[2] You think about his request and, despite your great forecasting abilities, quickly come to appreciate how impossible the task is. The number and complexity of causal ramifications and potentially decisive flow-through effects to consider are overwhelming. It is highly implausible that a donation of that size does not somehow change important aspects of the course of history in some non-negligible ways. Even if only very indirectly, it will inevitably affect many people's attitudes towards dogs and cats, the way these people live, their values, their consumption, economic growth, technological development, human and animal population sizes, the likelihood of a third world war and the exact actors that would be involved, etc. Some aspects of these effects are predictable. Many others are way too chaotic. And you cannot reasonably believe these chaotic changes will be even roughly the same no matter whether the beneficiaries of the donation are dog or cat shelters. If the billionaire picks cats over dogs, this will definitely end up making the world counterfactually better or worse, all things considered, to a significant extent.
The problem is that you have no idea which it is. In fact, you have no idea even whether donating his money to either would turn out better overall than not donating it in the first place.
I have two questions for you.
1. Can you think of any reasonable objection to the strongly implied takeaway that the philanthropy advisor should be agnostic about the sign of the overall consequences of the donation here?
2. Is that a good illustration of the motivations for cluelessness? I like it more than, e.g., Greaves' (2016) grandma-crossing-the-street example and Mogensen's (2021) 'AMF vs Make-A-Wish Foundation' one because there is no pre-established intuition that one option is "obviously" better than the other (so we avoid biases). Also, it is clear in the above thought experiment that our choice matters a great deal despite our cluelessness. It is obvious that the "the future remains unchanged" (/ "ripple in the pond") objection doesn't work (see, e.g., Lenman 2000; Greaves 2016). I also find this story easy to remember. What do you think?
I also hope others will find this thought experiment interesting, and that posting it may be useful beyond just getting helpful feedback on it myself.
- ^
For simplicity, let’s assume it can only be 100% one or the other. He cannot split between the two.
- ^
You might wonder why the billionaire only considers donating to dog or cat shelters and not to other causes, given that he so crucially cares about the overall effects on the world from now till its end. Well, maybe he has special tax-deductibility benefits from donating to such shelters. Maybe his 12-year-old daughter will get mad at him if he gives to anything else. Maybe the money he wants to give is some sort of coupon that only dog and cat shelters can receive for some reason. Maybe you end up asking him why and he answers 'none of your business!'. Anyway, this of course does not matter for the sake of the thought experiment.
Not sure what I overall think of the better-odds framing, but to speak in its defence: I think there's a sense in which decisions are more real than beliefs. (I originally wrote "decisions are real and beliefs are not", but they're both ultimately abstractions about what's going on with a bunch of matter organized into an agent-like system.) I can accept the idea of X as an agent making decisions, and ask what those decisions are and what drives them, without implicitly accepting the idea that X has beliefs. Then "X has beliefs" is a kind of useful model for predicting their behaviour in decision situations. Or it could be used (as you imply) to analyse the rationality of their decisions.
I like your contrived variant of the pi case. But to play on it a bit:
In this picture, no realistic amount of thinking I'm going to do will narrow things down to the point where a single point estimate is defensible, and perhaps even in the limit of infinite thinking time I would maintain an interval of what seems defensible, so some fundamental indeterminacy may well remain.
But to my mind, this kind of behaviour, where you can tighten your understanding by thinking more, happens all the time, and is a really important phenomenon to be able to track and think clearly about. So I really want language or formal frameworks which make it easy to track this kind of thing.
Moreover, after you grant this kind of behaviour [do you grant this kind of behaviour?], you may notice that from our epistemic position we can't even distinguish between:
Because of this, from my perspective the question of whether credences are ultimately indeterminate is ... not so interesting? It's enough that in practice a lot of credences will be indeterminate, and that in many cases it may be useful to invest time thinking to shrink our uncertainty, while in many other cases it won't be.