Got it. The tricky thing seems to be that sensitivity to stakes is an obvious virtue in some circumstances; and (intuitively) a mistake in others. Not clear to me what marks that difference, though. Note also that maximising expected utility allows for decisions to be dictated by low-credence/likelihood states/events. That's normally intuitively fine, but sometimes leads to 'unfairness' — e.g. St. Petersburg Paradox and Pascal's wager / mugging.
I'm not entirely sure what you're getting at re the envelopes, but that's probably me missing something obvious. To make the analogy clearer: swap out the monetary payouts for morally relevant outcomes, such that holding A at the end of the game causes outcome O1 and holding B causes O2. Suppose you're uncertain between theories T1 and T2: T1 says O1 is morally bad but O2 is permissible, and vice versa. Instead of paying to switch, switching requires doing something which is slightly wrong on both T1 and T2, and wrong enough that doing it more than 10 times is worse than O1 and O2 on both theories. Again, it looks like the sortition model is virtually guaranteed to recommend a course of action which is far worse than sticking with either envelope on either T1 or T2, by constantly switching and causing a large number of minor wrongs.
But agreed that we should be uncertain about the best approach to moral uncertainty!
Thanks for pointing that mistake out — that would imply he finished his BPhil at age 13! Have corrected. I meant to say that the ideas were floating around long before the book was written, but appreciate that was super unclear.
Interesting idea! However, I'm not too sure about the simple version you've presented. As you mention, the major problem is that it neglects information about 'stakes'. You could try weighting the decision by the stakes somehow, but in cases where you have that information it seems strange to deliberately use a procedure which sometimes picks an option you know is sub-optimal by the lights of maximising expected choiceworthiness (MEC).
Also, as well as making you harder to cooperate with, inconsistent choices might over time lead you to choose a path which is worse than MEC by the lights of every theory you have some credence in. Maybe there's an analogy to empirical uncertainty: suppose I've hidden $10 inside one of two envelopes and fake money in the other. You can pay me $1 for either envelope, and I'll also give you 100 further opportunities to pay me $1 to switch to the other one. Your credences are split 55%-45% between the envelopes. MEU would tell you to pick the slightly more likely envelope and be done with it. But, over the subsequent 100 chances to switch, the empirical analogue of your sortition model would recommend paying me $1 to switch just under half the time. In the end, you're virtually guaranteed to lose money. Even picking the less likely envelope would represent a better strategy, as long as you stick to it. In other words, if you're unsure between states of the world A and B, constantly switching between doing what's best given A and doing what's best given B could be worse in expectation than just coordinating all your choices around either A or B, irrespective of which is true. I'm wondering whether the same is true where you're uncertain between moral theories A and B.
That said, I'm pretty sure there are some interesting ideas about 'stochastic choice' in the empirical case which might be relevant. Folks who know more about decision theory might be able to speak to that!
Thanks very much for the link, I'll take a look!
Thanks very much for the kind feedback!