I want to find a good thought experiment that makes us appreciate how radically uncertain we should be about the very long-term effects of actions altruistically motivated actors might take. Some have already been proposed in the 'cluelessness' literature (a nice overview of which is given by Tarsney et al. (2024, §3)), but I don't find them ideal, as I'll briefly suggest. So let me propose a new one. Call it the 'Dog vs Cat' dilemma:
Say you are a philanthropy advisor reputed for being unusually good at forecasting the various direct and indirect effects donations to different causes can have. You are approached by a billionaire with a deep love for companion animals who wants to donate almost all his wealth to animal shelters. He asks you whether he should donate to dog shelters around the world or to cat shelters instead.[1] Despite the relatively narrow set of options he is considering, he importantly specifies that he does not care only about the short-term effects his donation would have on cats and dogs around the world. He carefully explains, and hugely emphasizes, that he wants his choice to be the one that is best, all things considered (i.e., not bracketing out effects on beings other than companion animals, or effects on the long-term future).[2] You think about his request and, despite your great forecasting abilities, quickly come to appreciate how impossible the task is. The number and complexity of the causal ramifications and potentially decisive flow-through effects to consider are overwhelming. It is highly implausible that a donation of that size does not somehow change important aspects of the course of history in non-negligible ways. Even if only very indirectly, it will inevitably affect many people's attitudes towards dogs and cats, the way these people live, their values, their consumption, economic growth, technological development, human and animal population sizes, the likelihood of a third world war and the exact actors who would be involved, and so on. Some aspects of these effects are predictable. Many others are far too chaotic. And you cannot reasonably believe these chaotic changes will be even roughly the same whether the beneficiaries of the donation are dog shelters or cat shelters. If the billionaire picks cats over dogs, this will definitely end up making the world counterfactually better or worse, all things considered, to a significant extent.
The problem is that you have no idea which it is. In fact, you have no idea even whether donating his money to either will turn out better, overall, than not donating it to begin with.
I have two questions for you.
1. Can you think of any reasonable objection to the strongly implied takeaway, namely that the philanthropy advisor should be agnostic about the sign of the overall consequences of the donation?
2. Is this a good illustration of the motivations for cluelessness? I like it more than, e.g., Greaves' (2016) grandma-crossing-the-street example and Mogensen's (2021) 'AMF vs Make-A-Wish Foundation' one, because there is no pre-established intuition that one option is "obviously" better than the other (so we avoid biases). Also, it is clear in the above thought experiment that our choice matters a great deal despite our cluelessness: the "the future remains unchanged" (or "ripple in the pond") objection obviously doesn't work here (see, e.g., Lenman 2000; Greaves 2016). I also find this story easy to remember. What do you think?
I also hope some others will find this thought experiment interesting, and that posting it will be useful beyond my simply getting helpful feedback on it.
- ^
For simplicity, let’s assume it can only be 100% one or the other. He cannot split between the two.
- ^
You might wonder why the billionaire only considers donating to dog or cat shelters and not to other causes, given that he so crucially cares about the overall effects on the world from now until its end. Well, maybe he gets special tax-deductibility benefits from donating to such shelters. Maybe his 12-year-old daughter will get mad at him if he gives to anything else. Maybe the money he wants to give is some sort of coupon that only dog and cat shelters can receive for some reason. Maybe you end up asking him why and he answers 'none of your business!'. Anyway, this of course does not matter for the sake of the thought experiment.
Ah nice, so this could mean two different things:
A. (The ‘canceling out’ objection to (complex) cluelessness:) We assume that good and bad unpredictable effects “cancel each other out” such that we are warranted to believe whatever option is best according to predictable effects is also best according to overall effects, OR
B. (Giving up on impartial consequentialism:) We reconsider what matters for our decision and simply stop caring about whether our action makes the world better or worse, all things considered. Instead, we focus only on whether the parts of the world that are predictably affected a certain way are made better or worse, and/or on things that have nothing to do with consequences (e.g., our intentions), and ignore the actual overall long-term impact of our decision, which we cannot figure out.
I think A is a big epistemic mistake, for the reasons given by, e.g., Lenman 2000; Greaves 2016; Tarsney et al. 2024, §3.
Some version of B might be the right response in the scenario where we don't know what else to do anyway? I don't know. One version of B is explicitly given by Lenman, who says we should reject consequentialism. Another is implicitly given by Tarsney (2022) when he says we should focus on the next several thousand years and sort of admit we have no idea what our impact is beyond that. But then we're basically saying that we "got beaten" by cluelessness and are giving up on actually trying to improve the long-term future, overall (which is what most longtermists claim our goal should be, for compelling ethical reasons). We can very well endorse B, but then we can't pretend we're trying to actually, predictably improve the world. We're not. We're just trying to improve some aspects of the world, ignoring how this affects things overall (since we have no idea).
If you replace "altruistic endeavour" with "impartial consequentialism" in the Dog vs Cat case, yes, absolutely. But I didn't mean to imply that cluelessness in that case generalizes to everything (although I'm also not arguing it doesn't). There might be cases where we have arguments plausibly robust to many unknown unknowns that warrant updating away from agnosticism, e.g., arguments based on logical inevitabilities or unavoidable selection effects. In this thread, I've only argued that I'd be surprised if we found such a (convincing) argument for the Dog vs Cat case, specifically. But it may very well be that this generalizes to many other cases, and that we should be agnostic about many other things, to the extent that we actually care about our overall impact.
And I absolutely agree that this is an important implication of my points here. I think the reason these problems are neglected by sympathizers of longtermism is that they (unwarrantedly) endorse A, or (also unwarrantedly) assume that because 'wild guesses' are often better than agnosticism in short-term geopolitical forecasting, they must also be better when it comes to predicting our overall impact on the long-term future (see 'Winning isn't enough').