
There are two main decision theories, causal and evidential. They usually agree in ordinary cases but come apart in strange ones, like Newcomb's paradox, which teases out our competing intuitions about how to make decisions.

Source: Hilary Greaves on 80k podcast

Setup

There are two boxes in front of you: a transparent box that you can see contains £1,000, and an opaque box that contains either a million pounds or nothing. Your choice is to take both boxes or just the opaque box.

The catch is that a very accurate predictor has predicted your decision and, based on that prediction, acted as follows:

  • If they predict that you're going to take both boxes, they put nothing in the opaque box.
  • If they predict you're just going to take the opaque box, they put 1 million pounds in it. 

 

So, what should you do?

There are two main theories on how to approach this:

Causal decision theory

This notices that the predictor has already made their prediction and left, so there's no mechanism by which your choice can interact with their prediction or cause anything. Your options are therefore: £1,000 plus possibly a million; or just the possibility of a million. The former dominates, so causal decision theorists choose both boxes.
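The dominance reasoning can be sketched numerically. This is a toy illustration (not from the original post), with amounts in pounds:

```python
# Causal decision theory's dominance argument: the opaque box's contents
# are fixed before you choose, so compare payoffs in each possible state.

for opaque in (0, 1_000_000):       # what the predictor already put in the box
    both_boxes = opaque + 1_000     # take both boxes
    one_box = opaque                # take only the opaque box
    # In every possible state, two-boxing pays exactly £1,000 more.
    assert both_boxes == one_box + 1_000
```

Whatever the predictor did, two-boxing comes out £1,000 ahead, which is why the causal theorist treats it as the dominant option.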

 

Evidential decision theory

While your decision won't cause anything, it's evidence of what the predictor predicted, and hence evidence of what's in the opaque box. You should choose just the opaque box: the predictor would anticipate this thought process, predict that you'll pick just the opaque box, and put a million quid in it. And if you try to be sneaky (reasoning that the predictor will expect you to pick just the opaque box while you actually take both), the predictor will anticipate that too and leave the opaque box empty.

 

In other words, if it's overwhelmingly likely that the predictor will predict correctly, then if you choose just the opaque box, it's overwhelmingly likely the predictor would predict this, so it's overwhelmingly likely you'll get the million. If you choose both boxes it's overwhelmingly likely the predictor will predict this and make the opaque box empty, so it's overwhelmingly likely you'll just get the thousand pounds. 
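That calculation can be made explicit. A minimal sketch, assuming the predictor is right with some probability p (0.99 is an illustrative number, not from the post):

```python
def expected_payoff(choice: str, p: float = 0.99) -> float:
    """Expected pounds, conditioning on your own choice (the EDT move)."""
    if choice == "one-box":
        # With probability p the predictor foresaw one-boxing and filled the box.
        return p * 1_000_000
    # With probability p the predictor foresaw two-boxing and left the box empty;
    # with probability 1 - p they got it wrong and you take home both prizes.
    return p * 1_000 + (1 - p) * 1_001_000

# One-boxing wins for any predictor accuracy p above roughly 0.5.
assert expected_payoff("one-box") > expected_payoff("two-box")
```

At p = 0.99, one-boxing has an expected payoff of £990,000 against £11,000 for two-boxing, which is the evidential theorist's point.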

 

Another example: smoking lesions 

In this example, the causal decision theorist's intuition is much more compelling. Imagine that the presence of a smoking lesion causes two things: cancer and the disposition to smoke. (In this world, smoking doesn't cause cancer, and smoking is pleasant.) The question is: in this world, should I smoke? Wanting to smoke is evidence of the smoking lesion, but it doesn't cause anything at all, so I should smoke (if I enjoy smoking).
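A toy simulation can show why conditioning (the evidential move) and intervening (the causal move) come apart here. All the numbers below (lesion rate, smoking rates) are made up purely for illustration:

```python
import random

random.seed(0)

def person(do_smoke=None):
    """One sampled person. Only the lesion causes cancer in this world."""
    lesion = random.random() < 0.1
    cancer = lesion
    if do_smoke is None:
        # The lesion disposes you to smoke; some lesion-free people smoke anyway.
        smoke = lesion or random.random() < 0.2
    else:
        smoke = do_smoke  # intervention: severs the lesion -> smoking link
    return smoke, cancer

N = 100_000
observed = [person() for _ in range(N)]
# Conditioning: smoking is *evidence* of the lesion, so the cancer rate
# among smokers is well above the 10% base rate.
cancer_among_smokers = (
    sum(c for s, c in observed if s) / sum(1 for s, c in observed if s)
)

forced = [person(do_smoke=True) for _ in range(N)]
# Intervening: making everyone smoke leaves the cancer rate at ~10%,
# because smoking has no causal arrow into cancer in this world.
cancer_if_made_to_smoke = sum(c for _, c in forced) / N
```

Observing yourself smoke raises the probability that you have the lesion, but forcing yourself to smoke changes nothing, which matches the causal theorist's verdict that you may as well smoke.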

 

My intuition is evidential in the first case but causal in the second, so if anyone can explain the difference between the cases, that would be great. Thanks!

Comments (11)

There are a lot more than two decision theories. Most are designed to do equally well or better than both causal and evidential decision theory in Newcomb-like problems and even more exotic setups.

The basic idea in all of them is that, instead of choosing the best decision at any particular decision point, they choose the best decision-making algorithm across possible world states.
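As a rough sketch of that idea (a toy model with a perfect predictor, not any particular theory's formal definition): score each whole policy, rather than the act at the decision point.

```python
def payoff(policy: str) -> int:
    """Payoff in pounds if the predictor knows which policy you run."""
    # A perfect predictor fills the opaque box iff your policy one-boxes.
    opaque = 1_000_000 if policy == "one-box" else 0
    transparent = 1_000 if policy == "two-box" else 0
    return opaque + transparent

# Choosing over policies (not acts) picks one-boxing: £1,000,000 vs £1,000.
best_policy = max(("one-box", "two-box"), key=payoff)
assert best_policy == "one-box"
```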

I think this 'paradox' is chronically misunderstood. Many people claim that the player can choose whether or not to take the transparent box after the predictor makes his prediction, but this is not how humans actually seem to make decisions and it directly contradicts the setup of the question - so I claim that your 'causal' solution is just wrong.

In order for the predictor to be able to make accurate predictions, players' decisions must be deducible at the time the prediction is made. Depending on your mental model of free will (or the lack thereof), this might seem completely plausible or utterly absurd. But for the paradox's setup to make sense, the player must have, in some sense, made his decision before the prediction is made: he is either someone who is going to take both boxes or someone who is just going to take the opaque box.

Simply put, if you're going to reason about this using causality, then you have to explain the causality of the predictor's predictions. And once you explain this, it becomes clear that the causal approach agrees with the evidential approach: you should take only the opaque box. It will feel like you're making the decision while looking at the boxes, but you actually made the decision long before (if at all).

'But for the paradox's setup to make sense, the player must have, in some sense, made his decision before the prediction is made.' No, there just has to be something that occurs earlier which guarantees what decision the player makes during the game. If determinism is true, that thing could be the first event in the universe's history, which would definitely not be a decision of the player.

I think maybe you're thinking that in that case 'the set-up doesn't make sense', because the player can't choose otherwise and therefore their decision can't be evaluated as rational or anything else. But it's a very substantive philosophical assumption that if your decisions are guaranteed by the past before the decision, they can't be evaluated for rationality (or morality, or whatever) at the time they occur. Roughly, that amounts to rejecting compatibilism about free will, which is the standard philosophical view*.

*https://survey2020.philpeople.org/survey/results/4838 Roughly 57–59% of English-speaking philosophers endorse it.

I don't know what point you're trying to make, because your response was rambling, poorly formatted and incoherent.

Are you just agreeing with me that the 'paradox' is solved and also nitpicking by claiming that it's possible that humans don't make decisions at all? If not, then I think you're very confused.

Basically, if your final decision was knowable to the predictor before he made his prediction, then it doesn't make sense, after his prediction is locked in, to say, "The predictor has already made his prediction, so the decision I make now can't affect his prediction." The predictor knew what your final decision was going to be.

I'm not making any bold claims about free will; I'm just pointing out that the 'causal' arguments for taking both boxes are contradicting the setup of the question.

I don't understand what point you're making either. Probably this won't be productive to continue.

But for the paradox's setup to make sense, the player must have, in some sense, made his decision before the prediction is made: he is either someone who is going to take both boxes or someone who is just going to take the opaque box.

 

This doesn't seem correct. It's possible to make a better than random guess about what a person will decide in the future, even if the person has not yet made their decision.

This is not mysterious in ordinary contexts. I can make a plan to meet with a friend and justifiably have very high confidence that they'll show up at the agreed time. But that doesn't preclude that they might in fact choose to cancel at the last minute.
