All of HaydenW's Comments + Replies

Expected value theory is fanatical, but that's a good thing

Yep, we've got pretty good evidence that our spacetime will have infinite 4D volume and, if you arranged happy lives uniformly across that volume, we'd have to say that the outcome is better than any outcome with merely finite total value. Nothing logically impossible there (even if it were practically impossible).

That said, assigning value "∞" to such an outcome is pretty crude and unhelpful. And what it means will depend entirely on how we've defined ∞ in our number system. So, what I think we should do in such a ca... (read more)

Expected value theory is fanatical, but that's a good thing

That'd be fine for the paper, but I do think we face at least some decisions in which EV theory gets fanatical. The example in the paper - Dyson's Wager - is intended as a mostly realistic such example. Another one would be a Pascal's Mugging case in which the threat was a moral one. I know I put P>0 on that sort of thing being possible, so I'd face cases like that if anyone really wanted to exploit me. (That said, I think we can probably overcome Pascal's Muggings using other principles.)

Expected value theory is fanatical, but that's a good thing

Thanks!

Good point about Minimal Tradeoffs. But there is a worry that, if you don't make it a fixed r, you could have an infinite sequence of decreasing r's that don't go arbitrarily low (e.g., 1, 3/4, 5/8, 9/16, 17/32, 33/64, ..., which never drops below 1/2).
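
To see that worry in miniature (a sketch of my own, not from the paper): the sequence r_n = 1/2 + 1/2^(n+1) is strictly decreasing but stays bounded away from zero.

```python
# Hypothetical illustration: decreasing r's that never go arbitrarily low.
from fractions import Fraction

def r(n: int) -> Fraction:
    """n-th tradeoff ratio: 1/2 + 1/2^(n+1)."""
    return Fraction(1, 2) + Fraction(1, 2 ** (n + 1))

print([str(r(n)) for n in range(6)])   # ['1', '3/4', '5/8', '9/16', '17/32', '33/64']
print(min(r(n) for n in range(1000)) > Fraction(1, 2))   # True: bounded below by 1/2
```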

I agree that Scale-Consistency isn't as compelling as some of the other key principles in there. And, with totalism, it could be replaced with the principle you suggest in which multiplication is just duplicating the world k. Assuming totalism, that'd be a weaker claim, which is good. I guess one minor ... (read more)

MichaelStJules · 2y

Oh, also you wrote "La is better than Lb" in the definition of Minimal Tradeoffs, but I think you meant the reverse?

Isn't the problem if the r's approach 1? Specifically, for each lottery, get the infimum of the r's that work (it should be ≤ 1), and then take the supremum of those over each lottery. Your definition requires that this supremum is < 1.

Hmm, I think this kind of stochastic separability assumption implies risk-neutrality (under the assumption of independence of irrelevant alternatives?), since it will force your rankings to be shift-invariant. If you do maximize the expected value of some function of the total utilitarian sum (you're a vNM-rational utilitarian), then I think it should rule out non-linear functions of that sum.

However, what if we maximize the expected value of some function of the difference we make (e.g. compared to a "business as usual" option, subtracting the value of that option)? This way, we have to ignore the independent background B since it gets cancelled, and we can use a bounded vNM utility function on what's left. One argument I've heard against this (from section 4.2 here [https://globalprioritiesinstitute.org/wp-content/uploads/2020/Greaves_MacAskill_strong_longtermism.pdf]) is that it's too agent-relative, but the intuition for stochastic separability itself seems kind of agent-relative, too. I suppose there are slightly different ways of framing stochastic separability, "What I can't affect shouldn't change what I should do" vs "What isn't affected shouldn't change what's best", with only the former agent-relative, although also more plausible given agent-relative ethics. If I reject agent-relative ethics, neither seems so obvious.
Expected value theory is fanatical, but that's a good thing

Just a note on the Pascal's Mugging case: I do think the case can probably be overcome by appealing to some aspect of the strategic interaction between different agents. But I don't think it comes out of the worry that they'll continue mugging you over and over. Suppose you (morally) value losing $5 to the mugger at -5 and losing nothing at 0 (on some cardinal scale). And you value losing every dollar you ever earn in your life at -5,000,000. And suppose you have credence (or, alternatively, evidential probability) of p that the mugger can a... (read more)
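
Filling in the arithmetic that setup points toward (a sketch only, using just the figures already given above; the rest of the comment is truncated):

```python
# Hypothetical sketch of the expected-value comparison, with the numbers above.
pay_cost = -5               # moral value of handing over the $5
threat_cost = -5_000_000    # moral value of losing every dollar you ever earn
# EV(pay) = pay_cost; EV(refuse) = p * threat_cost.
# Paying has higher expected value whenever p exceeds this threshold:
threshold = pay_cost / threat_cost
print(threshold)            # 1e-06, i.e. pay whenever p > 0.000001
```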

Expected value theory is fanatical, but that's a good thing

Yes, in practice that'll be problematic. But I think we're obligated to take both possible payoffs into account. If we do suspect the large negative payoffs, it seems pretty awful to ignore them in our decision-making. And then there's a weird asymmetry if we pay attention to the negative payoffs but not the positive.

More generally, Fanaticism isn't a claim about epistemology. A good epistemic and moral agent should first do their research, consider all of the possible scenarios in which their actions backfire, and put appropriate pro... (read more)

Expected value theory is fanatical, but that's a good thing

Both cases are traditionally described in terms of payoffs and costs just for yourself, and I'm not sure we have quite as strong a justification for being risk-neutral or fanatical in that case. In particular, I find it at least a little plausible that individuals should effectively have bounded utility functions, whereas it's not at all plausible that we're allowed to do that in the moral case - it'd lead to something a lot like the old Egyptology objection.

That said, I'd accept Pascal's wager in the moral case. It comes out of ... (read more)

Why not give 90%?

This is pretty off-topic, sorry.

Pigman · 2y
I see, thanks for the feedback, wasn't aware of the forum's rules
Why not give 90%?
I think this is actually quite a complex question.

Definitely! I simplified it a lot in the post.

If the chance is high enough, it may in fact be prudent to front-load your donations, so that you can get as much out of yourself with your current values as possible.

Good point! I hadn't thought of this. I think it ends up being best to frontload if your annual risk of giving up isn't very sensitive to the amount you donate, if that risk is high, and if your income isn't going to increase a whole lot over your lifetime. I think those first two things m... (read more)
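
To see the frontloading point in miniature (a toy sketch with made-up figures, assuming the annual risk of giving up is unaffected by how much you give):

```python
# Toy model (illustrative numbers only): expected lifetime donations under a
# fixed annual chance of giving up entirely, for two donation schedules.
def expected_donations(schedule, p_quit):
    still_giving, total = 1.0, 0.0
    for amount in schedule:
        total += still_giving * amount   # you donate only if you haven't given up yet
        still_giving *= 1 - p_quit
    return total

even = [10_000] * 40                  # $10k/year for 40 years
front = [20_000] * 20 + [0] * 20      # same lifetime total, given earlier
print(round(expected_donations(even, 0.05)))    # ~174,000
print(round(expected_donations(front, 0.05)))   # ~257,000: frontloading wins here
```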

Jamie_Harris · 2y

<<My guess is that the main reason for that is that more devoted people tend to pledge higher amounts.>>

That could account for part of it, though, according to this article [https://www.sciencedirect.com/science/article/abs/pii/S0738399111003855], "multiple studies have demonstrated that people perform better when goals are set higher and made more challenging." I haven't looked into this in more detail, but I've heard other social scientists who research behaviour change make similar claims (e.g. on this podcast [http://www.arzonepodcasts.com/2019/03/casey-taft-arzone-vegfest-uk-interview.html]). My guess is that there's a sweet spot of challenge/demandingness that is optimal, and that that sweet spot varies substantially by the individual.

(PS: thanks for this post, I've had similar thoughts before and like the theoretical demonstration in expected value terms of the risk of giving up.)
Why not give 90%?
In reality, if we can figure out how to give a lot for one or two years without becoming selfish, we are more likely to sustain that for a longer period of time. This boosts the case for making larger donations.

Yep, I agree. In general, the real-life case is going to be more complicated in a bunch of ways, which tug in both directions.

Still, I suspect that, even if someone managed to donate a lot for a few years, there'd still be some small independent risk of giving up each year. And even a small such risk cuts down your expected lifetime donation... (read more)

Why not give 90%?
The assumption that if she gives up, she is most likely to give up on donating completely seems not obvious to me. I would think that it's more likely she scales back to a lower level, which would change the conclusion.

Yep, I agree that that's probably more likely. I focused on giving up completely to keep things simple. But if it's even somewhat likely (say, 1% p.a.), that may make a far bigger dent in your expected lifelong donations than do risks of giving up partially.
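
To put rough numbers on that (a toy sketch with figures of my own, not from the post):

```python
# Toy model (made-up numbers): expected lifetime donations when there's a fixed
# annual chance of a one-off change in behaviour that then persists for life.
def expected_total(p_change, remaining_fraction, annual=10_000, years=40):
    unchanged, total = 1.0, 0.0
    for _ in range(years):
        total += annual * (unchanged + (1 - unchanged) * remaining_fraction)
        unchanged *= 1 - p_change
    return total

baseline = expected_total(0.0, 1.0)        # 400,000: no risk of change
quit_all = expected_total(0.01, 0.0)       # 1% p.a. chance of quitting entirely
scale_back = expected_total(0.01, 0.9)     # 1% p.a. chance of scaling back by 10%
print(round(baseline - quit_all), round(baseline - scale_back))   # ~69,000 vs ~6,900
```

On these made-up numbers, even a 1% annual risk of quitting entirely costs roughly ten times as much in expectation as an equally likely 10% scale-back.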

Perhaps we should be encouraging a strategy where people increase th... (read more)
Problems with EA representativeness and how to solve it

I'd add one more: having to put your resources towards more speculative, chancy causes is more demanding.

When donating our money and time to something like bednets, the cost is mitigated by the personal satisfaction of knowing that we've (almost certainly) had an impact. When donating to some activity which has only a tiny chance of success (e.g., x-risk mitigation), most of us won't get quite the same level of satisfaction. And it's pretty demanding to have to give up not only a large chunk of your resources but also the satisfaction of having actually... (read more)

KevinWatkinson · 4y

Thanks for that link, it's an interesting article. In the context of theory within the animal movement, Singer's pragmatism isn't particularly demanding, but a more justice-oriented approach (along the lines of Regan) is. In my view it would be a good thing, not least for the sake of diversity of viewpoints, to make more claims around demandingness rather than largely following a less demanding position. Though I do think that, because people are not used to ascribing significant moral value to other animals, anything more than the societal level is considered demanding, particularly in regard to considering speciesism alongside other forms of human discrimination.
Job opportunity at the Future of Humanity Institute and Global Priorities Institute

Sorry about that, I hadn't seen that thread. Consider me well and truly chastened!

New climate change report from Giving What We Can

Hi Sam,

Thanks! Glad you liked it. It's currently just a preview and not actually published yet, so that's why some links and functionality may not work (and the post on the model I used is still yet to go up).

In regard to Q1 - I would like to, yeah. When it comes to the probabilities of different levels of warming, though, it's super uncertain. The ~1% chance of 10 degrees of warming holds only under one of several possible probability distributions, and we really just don't have any clue which of those distributions is accurate. And in addition to the uncerta... (read more)