Philosophy DPhil at Oxford and Parfit Scholar at GPI https://www.elliott-thornley.com/
Sweet! I've messaged him.
How about Dylan Balfour's 'Pascal's Mugging Strikes Again'? It's great.
I remember Toby Ord gave a talk at GPI where he pointed out the following:
Let L be long-term value per unit of resources and N be near-term value per unit of resources, and suppose A is the intervention that does best on L, C is the intervention that does best on N, and B is a third intervention that does fairly well on both. Then spending 50% of resources on the best long-term intervention and 50% of resources on the best near-term intervention will lead you to split resources equally between A and C. But the best thing to do on a 0.5*N + 0.5*L value function can be to devote 100% of resources to B.
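A quick numerical sketch of the point (the per-unit values for A, B, and C below are hypothetical, chosen only to illustrate the structure of the argument):

```python
# Hypothetical per-unit values: each intervention gives
# (long-term value L, near-term value N) per unit of resources.
interventions = {
    "A": (10.0, 0.0),   # best long-term intervention
    "B": (6.0, 6.0),    # does fairly well on both
    "C": (0.0, 10.0),   # best near-term intervention
}

def mixed_value(long_term: float, near_term: float) -> float:
    """The 0.5*N + 0.5*L value function from the comment above."""
    return 0.5 * long_term + 0.5 * near_term

# Portfolio 1: split resources 50/50 between A and C.
L_split = 0.5 * interventions["A"][0] + 0.5 * interventions["C"][0]
N_split = 0.5 * interventions["A"][1] + 0.5 * interventions["C"][1]
print("50/50 split between A and C:", mixed_value(L_split, N_split))  # 5.0

# Portfolio 2: devote 100% of resources to B.
print("All resources to B:", mixed_value(*interventions["B"]))  # 6.0 > 5.0
```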
Yes, that all sounds right to me. Thanks for the tip about uniformity and fanaticism! Uniformity also comes up here, in the distinction between the Quantity Condition and the Trade-Off Condition.
Thanks! This is a really cool idea and I'll have to think more about it. What I'll say now is that I think your version of lexical totalism violates RGNEP and RNE. That's because of the order in which I have the quantifiers. I say, 'there exists p such that for any k...'. I think your lexical totalism only satisfies weaker versions of RGNEP and RNE with the quantifiers the other way around: 'for any k, there exists p...'.
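Schematically, with φ(p, k) as a placeholder for the relevant condition (it is not the actual clause from the paper), the two readings are:

```latex
% Placeholder formalisation: \phi(p, k) stands in for whatever RGNEP and RNE
% require of p and k; it is not the actual clause from the paper.
\[
  \underbrace{\exists p \,\forall k \; \phi(p, k)}_{\text{RGNEP and RNE as I state them}}
  \quad\Longrightarrow\quad
  \underbrace{\forall k \,\exists p \; \phi(p, k)}_{\text{the weaker versions}}
\]
% The implication only runs left to right, so a view can satisfy the weaker
% versions while still violating RGNEP and RNE.
```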
Ah no, that's as it should be! It's saying that the welfare level in question is one of the very positive welfare levels mentioned on page 4.
Thanks! Your points about independence sound right to me.
Thanks for your comment! I think the following is a closer analogy to what I say in the paper:
Suppose apples are better than oranges, which are in turn better than bananas. And suppose your choices are:
1. An apple and k bananas for sure.
2. An apple with probability 1-p and an orange with probability p, along with k oranges for sure.
Then even if you believe:
- One apple is better than any amount of oranges
It still seems as if, for some large k and small p, 2 is better than 1. 2 slightly increases the risk that you miss out on an apple, but it compensates you for that increased risk by giving you many oranges rather than many bananas.
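For what it's worth, here's a rough sketch (not the paper's formal apparatus, and with made-up values of p and k) of one natural lexical rule: compare expected apples first, then expected oranges, then expected bananas. That rule prefers 1 no matter how small p is or how large k is, which is the verdict the intuition above pushes against.

```python
# Rough sketch only: lexicographic expected value as one way of extending
# 'one apple is better than any amount of oranges' to risky prospects.
# The values of p and k are hypothetical placeholders.

def expected_bundle(prospect):
    """Return (expected apples, expected oranges, expected bananas) for a
    prospect given as a list of (probability, (apples, oranges, bananas))."""
    return tuple(
        sum(prob * bundle[i] for prob, bundle in prospect) for i in range(3)
    )

p = 0.001   # small chance of missing out on the apple in option 2
k = 10_000  # large number of bananas/oranges

option_1 = [(1.0, (1, 0, k))]                        # an apple and k bananas for sure
option_2 = [(1 - p, (1, k, 0)), (p, (0, k + 1, 0))]  # apple w.p. 1-p, orange w.p. p, plus k oranges

print(expected_bundle(option_1))  # (1.0, 0.0, 10000.0)
print(expected_bundle(option_2))  # (0.999, 10000.001, 0.0)

# Python compares tuples lexicographically, which is exactly the rule sketched above:
# option 1 wins on expected apples alone, however small p and however large k.
print(expected_bundle(option_1) > expected_bundle(option_2))  # True
```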
On your side question, I don't assume completeness! But maybe if I did, then you could recover the VNM theorem. I'd have to give it more thought.
Thanks!
And agreed! The title of the paper is intended as a riff on the title of the chapter where Arrhenius gives his sixth impossibility theorem: 'The Impossibility of a Satisfactory Population Ethics.' I think that an RC-implying theory can still be satisfactory.
Nice post! I share your meta-ethical stance, but I don't think you should call it 'moral quasi-realism'. 'Quasi-realism' already names a position in meta-ethics, and it's different to the position you describe.
Very roughly, quasi-realism agrees with anti-realism in stating:
But, in contrast to anti-realism, quasi-realism also states:
The conjunction of (1)-(3) defines quasi-realism.
What you call 'quasi-realism' might be compatible with (2) and (3), but its defining features seem to be (1) plus something like:
(4) I do in fact try to abide by the principles that I'd embrace if I were more thoughtful, etc.
(1) plus (4) could point you towards two different positions in meta-ethics. It depends on whether you think it's appropriate to describe the principles we'd embrace if we were more thoughtful, etc., as true.
If you think it is appropriate to describe these principles as true, then that counts as an ideal observer theory.
If you think it isn't appropriate to describe these principles as true, then your position is just anti-realism plus the claim that you do in fact try to abide by the principles that you'd embrace if you were more thoughtful, etc.