HaydenW

PhD student in philosophy at the ANU; previously worked at the Global Priorities Institute

Comments

Expected value theory is fanatical, but that's a good thing

Yep, we've got pretty good evidence that our spacetime will have infinite 4D volume and, if you arranged happy lives uniformly across that volume, we'd have to say that the outcome is better than any outcome with merely finite total value. Nothing logically impossible there (even if it were practically impossible).

That said, assigning value "∞" to such an outcome is pretty crude and unhelpful. And what it means will depend entirely on how we've defined ∞ in our number system. So, what I think we should do in such a case is not say that V equals such-and-such, but rather ditch the value function once we've left the domain where it works. Instead, just deal with your set of possible outcomes, your lotteries (probability measures over that set), and a betterness relation, which might sometimes follow a value function but might also extend to outcomes beyond the function's domain. That's what people tend to do in the infinite aggregation literature (including the social choice papers that consider infinite time horizons), and for good reason.

Expected value theory is fanatical, but that's a good thing

That'd be fine for the paper, but I do think we face at least some decisions in which EV theory gets fanatical. The example in the paper - Dyson's Wager - is intended as a mostly realistic such example. Another one would be a Pascal's Mugging case in which the threat was a moral one. I know I put P>0 on that sort of thing being possible, so I'd face cases like that if anyone really wanted to exploit me. (That said, I think we can probably overcome Pascal's Muggings using other principles.)

Expected value theory is fanatical, but that's a good thing

Thanks!

Good point about Minimal Tradeoffs. But there is a worry that, if you don't make it a fixed r, you could have an infinite sequence of decreasing rs that never goes arbitrarily low (e.g., 1, 3/4, 5/8, 9/16, 17/32, 33/64, ..., which converges to 1/2).
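
For concreteness, here's a minimal sketch of that sequence, assuming the general term r_n = 1/2 + 1/2^(n+1) (which matches the listed values): it's strictly decreasing but never drops below 1/2.

```python
# Sketch only: a strictly decreasing sequence of r's that never goes
# arbitrarily low. r_n = 1/2 + 1/2**(n+1) gives 1, 3/4, 5/8, 9/16, ...
# and converges to 1/2, so every term stays above 1/2.
def r(n: int) -> float:
    return 0.5 + 0.5 ** (n + 1)

rs = [r(n) for n in range(8)]
print(rs)                                      # [1.0, 0.75, 0.625, 0.5625, ...]
assert all(x > 0.5 for x in rs)                # bounded away from 0
assert all(a > b for a, b in zip(rs, rs[1:]))  # strictly decreasing
```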

I agree that Scale-Consistency isn't as compelling as some of the other key principles in there. And, with totalism, it could be replaced with the principle you suggest in which multiplication is just duplicating the world k. Assuming totalism, that'd be a weaker claim, which is good. I guess one minor worry is that, if we reject totalism, duplicating a world k times wouldn't scale its value by k. So Scale-Consistency is maybe the better principle for arguing in greater generality. But yeah, not needed for totalism.


>Nor can they say that L_safe plus an additional payoff b is better than L_risky plus the same b.
They can't say this for all b, but they can for some b, right? Aren't they saying exactly this when they deny Fanaticism ("If you deny Fanaticism, you know that no matter how your background uncertainty is resolved, you will deny that L_risky plus b is better than L_safe plus b.")? Is this meant to follow from L_risky + B ≻ L_safe + B? I think that's what you're trying to argue afterwards, though.

Nope, I wasn't meaning for the statement involving little b to follow from the one about big B. b is a certain payoff, while B is a lottery. When we add b to either lottery, we're just adding a constant to all of the payoffs. Then, if lotteries can be evaluated by their cardinal payoffs, we've got to say that L_1 + b > L_2 + b iff L_1 > L_2.
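
A minimal numerical sketch of that point, with made-up payoffs and probabilities: adding the same certain payoff b to every outcome of two lotteries shifts both expected values by exactly b, so it can never flip which lottery comes out ahead (at least when lotteries are evaluated by expected cardinal payoff).

```python
# Illustrative only: adding a certain payoff b to every outcome shifts a
# lottery's expected value by exactly b, so the ordering of L_1 + b and
# L_2 + b matches the ordering of L_1 and L_2.
def ev(lottery):
    return sum(p * x for p, x in lottery)

L1 = [(0.5, 10.0), (0.5, 0.0)]   # EV = 5
L2 = [(0.1, 30.0), (0.9, 0.0)]   # EV = 3

for b in (-100.0, 0.0, 7.5, 1_000_000.0):
    L1_b = [(p, x + b) for p, x in L1]
    L2_b = [(p, x + b) for p, x in L2]
    assert (ev(L1_b) > ev(L2_b)) == (ev(L1) > ev(L2))
```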

Aren't we comparing lotteries, not definite outcomes? Your vNM utility function could be arctan(∑ᵢuᵢ), where the function inside the arctan is just the total utilitarian sum. Let L_safe = π/2, and L_risky = ∞ with probability 0.5 (which is not small, but this is just to illustrate) and 0 otherwise. Then these have the same expected value without a background payoff (or b = 0), but with b > 0, the safe option has higher EV, while with b < 0, the risky option has higher EV.

Yep, that utility function is bounded, so using it and EU theory will avoid Fanaticism and bring on this problem. So much the worse for that utility function, I reckon.

And, in a sense, we're not just comparing lotteries here. L_risky + B is two independent lotteries summed together, and we know in advance that you're not going to affect B at all. In fact, it seems like B is the sort of thing you shouldn't have to worry about at all in your decision-making. (After all, it's a bunch of events off in ancient India or in far distant space, outside your lightcone.) In the moral setting we're dealing with, it seems entirely appropriate to cancel B from both sides of the comparison and just look at L_risky and L_safe, or to conditionalise the comparison on whatever B will actually turn out as: some b. That's roughly what's going on there.

Expected value theory is fanatical, but that's a good thing

Just a note on the Pascal's Mugging case: I do think the case can probably be overcome by appealing to some aspect of the strategic interaction between different agents. But I don't think it comes out of the worry that they'll continue mugging you over and over. Suppose you (morally) value losing $5 to the mugger at -5 and losing nothing at 0 (on some cardinal scale). And you value losing every dollar you ever earn in your life at -5,000,000. And suppose you have credence (or, alternatively, evidential probability) of p that the mugger can and will generate any amount of moral value or disvalue they claim they will. Then, as long as they claim they'll bring about an outcome worse than -5,000,000/p if you don't give them $5, or they claim they'll bring about an outcome better than +5,000,000/p if you do, EV theory says you should hand it over. And likewise for any other fanatical theory, if the payoff is just scaled far enough up or down.
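
To make the arithmetic explicit (a rough sketch using the numbers above, where p is your credence that the mugger can and will follow through): expected value already favours handing over the $5 once the threatened outcome is worse than -5/p, so a threat worse than -5,000,000/p is far more than enough.

```python
# Rough sketch of the EV comparison in the mugging case above. Values are
# on the same cardinal scale as in the comment; p is your credence that
# the mugger can and will do what they claim.
EV_HAND_OVER = -5.0                  # you lose the $5 for sure

def ev_refuse(p: float, threatened_value: float) -> float:
    # With probability p the threat is carried out; otherwise nothing happens.
    return p * threatened_value

p = 1e-9                             # illustrative tiny credence
threat = -5_000_000 / p              # the threshold mentioned above

# Handing over has higher EV once the threat is worse than -5/p;
# -5,000,000/p is far past that point.
assert ev_refuse(p, threat) < EV_HAND_OVER
```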

Expected value theory is fanatical, but that's a good thing

Yes, in practice that'll be problematic. But I think we're obligated to take both possible payoffs into account. If we do suspect that large negative payoffs are possible, it seems pretty awful to ignore them in our decision-making. And there's a weird asymmetry if we pay attention to the negative payoffs but not the positive ones.

More generally, Fanaticism isn't a claim about epistemology. A good epistemic and moral agent should first do their research, consider all of the possible scenarios in which their actions backfire, and put appropriate probabilities on them. If they do the epistemic side right, it seems fine for them to act according to Fanaticism when it comes to decision-making. But in practice, yeah, that's going to be an enormous 'if'.

Expected value theory is fanatical, but that's a good thing

Both cases are traditionally described in terms of payoffs and costs just for yourself, and I'm not sure we have quite as strong a justification for being risk-neutral or fanatical in that case. In particular, I find it at least a little plausible that individuals should effectively have bounded utility functions, whereas it's not at all plausible that we're allowed to do that in the moral case - it'd lead to something a lot like the old Egyptology objection.

That said, I'd accept Pascal's wager in the moral case. It comes out of Fanaticism fairly straightforwardly, with some minor provisos. But Pascal's Mugging seems avoidable - for it to arise, we need another agent interacting with you strategically to get what they want. I think it's probably possible for an EV maximiser to avoid the mugging as long as we make their decision-making rule a bit richer in strategic interactions. But that's just speculation - I don't have a concrete proposal for that!

Why not give 90%?

This is pretty off-topic, sorry.

Why not give 90%?
I think this is actually quite a complex question.

Definitely! I simplified it a lot in the post.

If the chance is high enough, it may in fact be prudent to front-load your donations, so that you can get as much out of yourself with your current values as possible.

Good point! I hadn't thought of this. I think it ends up being best to frontload if your annual risk of giving up is high, isn't very sensitive to the amount you donate, and your income isn't going to increase a whole lot over your lifetime. I think the first two things might be true of a lot of people. And so will the third thing, effectively, if your income doesn't increase by more than 2-3x.
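
Here's a toy model of that trade-off, under purely illustrative assumptions: a fixed intended lifetime donation budget, flat income, and an annual give-up probability q that doesn't depend on how much you donate.

```python
# Toy model (all numbers illustrative): with a fixed intended lifetime budget,
# flat income, and an annual give-up probability q that doesn't depend on the
# amount donated, concentrating donations in the early years raises expected
# total donations, because the early years are the ones most likely to happen.
def expected_donations(budget: float, years_spread: int, q: float) -> float:
    per_year = budget / years_spread
    survival, total = 1.0, 0.0
    for _ in range(years_spread):
        total += survival * per_year   # donated only if you haven't given up yet
        survival *= 1.0 - q
    return total

D, q = 370_000, 0.05
print(expected_donations(D, 37, q))    # spread evenly over a 37-year career
print(expected_donations(D, 10, q))    # front-loaded into the first decade
```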

If we take the data from here with 0 grains of salt, you're actually less likely to have value drift at 50% of income (~43.75% chance of value drift) than at 10% (~63.64% chance of value drift). There are many reasons this might be, such as consistency and justification effects, but the point is that the object-level question is complicated :).

My guess is that the main reason for that is that more devoted people tend to pledge higher amounts. I think if you took some of those 10%ers and somehow made them choose to switch to 50%, they'd be far more likely than before to give up.

But yeah, it's not entirely clear that P(giving up) increases with amount donated, or that either causally affects the other. I'm just going by intuition on that.

Why not give 90%?
In reality, if we can figure out how to give a lot for one or two years without becoming selfish, we are more likely to sustain that for a longer period of time. This boosts the case for making larger donations.

Yep, I agree. In general, the real-life case is going to be more complicated in a bunch of ways, which tug in both directions.

Still, I suspect that, even if someone managed to donate a lot for a few years, there'd still be some small independent risk of giving up each year. And even a small such risk cuts down your expected lifetime donations by quite a bit: e.g., a 1% p.a. risk of giving up for 37 years cuts down the expected value by 16% (and far more if your income increases over time).
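
A quick check of that figure, assuming a flat annual donation and an independent 1% chance of giving up each year:

```python
# With a flat annual donation and an independent 1% risk of giving up each
# year, expected donations over 37 years fall roughly 16% short of the
# no-risk total.
q, years = 0.01, 37
expected_years = sum((1 - q) ** t for t in range(years))
print(1 - expected_years / years)   # ~0.16
```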

Moreover, I rather doubt that the probability of turning selfish and giving up on Effective Altruism can be nearly as high as 50% in a given year. If it were that high, I think we'd have more evidence of it, in spite of the usual worries about how hard it is to hear back from people who are no longer interested.

Yep, that seems right. Certainly at the 10% donation level, it should be a lot lower than 50% (I hope!). I was thinking of 50% p.a. as the probability of giving up after ramping up to donating 90% per year, at least in my own circumstances (living on a pretty modest grad student stipend).

Also, there's a little bit of relevant data on this in this post. Among the 38 people that person surveyed, the dropout rate was >50% over 5 years. So it's pretty high at least. But not clear how much of that was due to feeling it was too demanding and then getting demotivated, rather than value drift.

Also, this doesn't break your point, but I think percentages are the wrong way to think about this. In reality, donations should be much more dependent upon local cost of living than upon your personal salary. If COL is $40k and you make $50k then donate up to $10k. If COL is $40k and you make $200k then donate up to $160k.

Yes, good point! I'd agree that that's a better way to look at it, especially for making broad generalisations over different people.

Why not give 90%?
The assumption that if she gives up, she is most likely to give up on donating completely seems not obvious to me. I would think that it's more likely she scales back to a lower level, which would change the conclusion.

Yep, I agree that that's probably more likely. I focused on giving up completely to keep things simple. But if giving up completely is even somewhat likely (say, 1% p.a.), that may make a far bigger dent in your expected lifelong donations than the risk of merely scaling back does.

Perhaps we should be encouraging a strategy where people increase their percentage donated by a few percentage points per year until they find the highest sustainable level for them. Combined with a community norm of acceptance for reductions in amounts donated, people could determine their highest sustainable donation level while lowering risk of stopping donations entirely.

That certainly sounds sensible to me!
