There are two main decision theories: causal and evidential. They agree in normal cases but disagree in weird ones, e.g. Newcomb's paradox, which teases out our competing intuitions about how to make decisions.
Source: Hilary Greaves on 80k podcast
Setup
There are two boxes in front of you: a transparent one that you can see contains £1000 and an opaque box that either contains a million pounds or nothing. Your choice is to either take both boxes or just the opaque box.
The catch is that a very good predictor has predicted your decision and has acted (based on their prediction) as follows:
- If they predict that you're going to take both boxes, they put nothing in the opaque box.
- If they predict you're just going to take the opaque box, they put 1 million pounds in it.
So, what should you do?
There are two theories on how to approach this:
Causal decision theory
This notices that the predictor has made their prediction and then fucked off, so there's no mechanism for your choice to interact with their prediction or to cause anything. Your options are just: £1,000 plus possibly a million; or only the possibility of a million. You should clearly take the former, so causal decision theorists would choose both boxes.
Evidential decision theory
While your decision won't cause anything, it's evidence of what the predictor predicted, and so it's evidence of what's in the opaque box. You should choose just the opaque box, as the predictor would anticipate this thought process, predict you will pick just the opaque box, and put a million quid in it. If you try to be sneaky, planning for the predictor to predict you'll pick just the opaque box while you actually choose both, the predictor will anticipate this too and leave the opaque box empty.
In other words, if it's overwhelmingly likely that the predictor will predict correctly, then if you choose just the opaque box, it's overwhelmingly likely the predictor would predict this, so it's overwhelmingly likely you'll get the million. If you choose both boxes it's overwhelmingly likely the predictor will predict this and make the opaque box empty, so it's overwhelmingly likely you'll just get the thousand pounds.
Another example: smoking lesions
In this example, the causal decision theorist's intuition is much more obvious. Imagine that the presence of a smoking lesion causes two things: cancer and the disposition to smoke. (In this world, smoking itself doesn't cause cancer, and smoking is pleasant.) The question is, in this world, should I smoke? Wanting to smoke is evidence of the lesion (and so of cancer), which is why an evidential theorist would abstain; but my smoking doesn't cause anything at all, so I should smoke (if I enjoy smoking).
My intuition is evidential in the 1st case but causal in the 2nd, so if anyone can explain the difference between the cases, that would be great. Thanks!
You haven't understood. Your analogy fails because your friend isn't incentivised to select against you and try to make you guess incorrectly.
Obviously, from the predictor's perspective, there can be some explicable variance and some inexplicable variance, and it's plausible to claim that some of the inexplicable variance comes from decisions that have not yet been made. But the question states that the predictor has an exceedingly good track record, so the vast, vast majority of the variance can be explained.
You can claim that the predictor thinks you're 99.98% likely to take both boxes while you know you're actually only 99.96% likely to take both boxes. But that doesn't help you make non-negligible money in the game, and you're just missing the point of the 'paradox'.
What I said was correct. It holds up in the stochastic case where the predictor is nearly certain of your decision, though it's simpler to think about the deterministic case where the predictor is certain.
I'm disappointed by 'Effective' 'Altruists' circle jerking around yet another wrong answer. :')