All of HaydenW's Comments + Replies

Even without precisely quantifying the harms each way, I think we can be pretty confident that the harms on one side are greater than on the other. It seems pretty clear that the harms of letting a non-trivial number of people experience sexual harassment and assault (or even the portion of those harms prevented by implementing a strong norm about this) are greater than the harms of preventing (even 100x as many) people from sleeping around within the community. The latter is just a far, far smaller harm per person--far less than 1/100 as great. And I thin... (read more)

10
[anonymous]
1y

HaydenW, thank you.

Firstly, for trying to be a good feminist - I honestly think you should get points for trying.

Secondly, for making it plain how ridiculous these arguments are. I've seen a lot of reasoning on this forum recently that goes:

more sex = more harassment & assault
therefore
polyamory and "sleeping around" and friends with benefits and any other form of sexual relationship I think I can get away with policing in 2023 = bad
(...but obviously sex before marriage and serial monogamy and any other form of serious, "proper" sexual relationship e... (read more)

I strongly disagree with this. I've dated ~10 people in my life. I have also been sexually assaulted (not by someone in the community). I would quickly and without hesitation accept a trade of experiencing one rape like the one I experienced (non-violent) in return for keeping any of the happy relationships I've had in my life (about half of which I think wouldn't have formed absent what the author is calling "sleeping around"). For my best relationship (which initially formed via "sleeping around" and I don't think could easily have done so otherwise, and is now th... (read more)

The latter is just a far, far smaller harm per person--far less than 1/100 as great.

Surely it makes more sense to compare the upside-- someone forming a long-lasting and loving relationship.

Maybe that's extreme, but taking a balance of outcomes I doubt it would be 1/100.

Also strange that you chose to say 1/100 and also 100x as many people-- surely if you have high confidence in those numbers then that would balance out by definition? Or is this somewhere where you think this sort of scale insensitivity is valid?

I feel by those numbers EAs shouldn't be dating each other at all?
 

-3
Jason
1y
(1) and (3) are also not yes/no variables. OP's model treats them as such, possibly for ease of conveying the idea. A more complex model would assign points from a range for those variables, and probably adjust the scope of "sleeping around" depending on the total point score. That might fix edge cases in which one might think the recommendation is too strict for a single yes.

The overgeneralisation is extremely easy to make. Just search "effective altruism" on twitter right now. :'( (n.b., not recommended if you care about your own emotional well-being.)

I was one of the people who commented on what was likely version 26 or 27. (This was in November 2021.) And Torres certainly wasn't listed as an author by that stage. I don't think I saw any comments from them on that version either, but there were a lot of comments in total so I'm not sure.

I doubt that any effective altruists would say that our wellbeing (as benefactors) doesn't matter. Nor is there any incompatibility between the basic ideas (or practice) of effective altruism, on the one hand, and the claim that there are limits on our duties to help others, on the other.

Ah, I think we've got different notions of probability in mind: the subjective credence of the agent (OpenPhil grantmakers) versus something like the objective chances of the thing actually happening, irrespective of anyone's beliefs.

2
NunoSempere
2y
Yeah, I think that if you stare at the second one, it doesn't seem that decision relevant. E.g., a coin which is either heads or tails is 100% heads with 50% probability and 100% tails with 50% probability. And if some important decision depended on whether it was heads or tails, you might not be able to wait and find out.
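To make that distinction concrete, here's a minimal sketch (hypothetical payoffs, not from the comment): the expected value of a bet under the agent's 50/50 credence is exactly the credence-weighted average of its value in the two already-settled worlds, so learning that one of those worlds holds with objective chance 1 -- without learning which -- changes nothing about the decision.

```python
# Sketch: subjective credence vs. already-settled "objective" chance.
# Hypothetical payoffs for taking some bet on a coin that has already been flipped.
payoff = {"heads": 100.0, "tails": -40.0}

# Expected value using the agent's subjective credence of 0.5 in heads.
credence_heads = 0.5
ev_subjective = credence_heads * payoff["heads"] + (1 - credence_heads) * payoff["tails"]

# "Objective" value in each possible world: the coin is already fixed one way or the other.
ev_if_heads = payoff["heads"]   # chance of heads is 100% in this world
ev_if_tails = payoff["tails"]   # chance of tails is 100% in this world

# Averaging the world-by-world values by the agent's credence recovers the same number,
# so the objective chances add nothing decision-relevant until you learn which world holds.
assert ev_subjective == credence_heads * ev_if_heads + (1 - credence_heads) * ev_if_tails
print(ev_subjective)  # 30.0
```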

OpenPhilanthropy's "hits based giving" approach seems like it doesn't fall prey to your argument, because they are willing to ignore the "Don't Prevent Impossible Harms" constraint.

For what it's worth, I don't think this is true (unless I'm misinterpreting!). Preferring low-probability, high-expected value gambles doesn't require preferring gambles with probability 0 of success.

2
NunoSempere
2y
Well, what you are saying is true if you are certain that they have 0 probability. But not if you are willing to take bets which, in hindsight, you will realize had 0 probability of occurring.

Thanks for a brilliant post! I really enjoyed it. And in particular, as someone unfamiliar with the computational complexity stuff, your explanation of that part was great!

I have a few thoughts/questions, most of them minor. I'll try to order them from most to least important.

  1.  The recommendation for Good-Old-Fashioned-EA

If I'm understanding the argument correctly, it seems to imply that real-world agents can't assign fully coherent probability distributions over Σ in general. So, if we want to compare actions by their prospects of outcomes, we just ca... (read more)

Yep, we've got pretty good evidence that our spacetime will have infinite 4D volume and, if you arranged happy lives uniformly across that volume, we'd have to say that the outcome is better than any outcome with merely finite total value. Nothing logically impossible there (even if it were practically impossible).

That said, assigning value "∞" to such an outcome is pretty crude and unhelpful. And what it means will depend entirely on how we've defined ∞ in our number system. So, what I think we should do in such a ca... (read more)

That'd be fine for the paper, but I do think we face at least some decisions in which EV theory gets fanatical. The example in the paper - Dyson's Wager - is intended as a mostly realistic such example. Another one would be a Pascal's Mugging case in which the threat was a moral one. I know I put P>0 on that sort of thing being possible, so I'd face cases like that if anyone really wanted to exploit me. (That said, I think we can probably overcome Pascal's Muggings using other principles.)

Thanks!

Good point about Minimal Tradeoffs. But there is a worry that, if you don't fix r, you could have an infinite sequence of decreasing rs that never goes arbitrarily low (e.g., 1, 3/4, 5/8, 9/16, 17/32, 33/64, ...).
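To see why that example never gets arbitrarily low: assuming the pattern continues, the terms have the general form

$$r_n = \frac{1}{2} + \frac{1}{2^{\,n+1}}, \qquad n = 0, 1, 2, \ldots, \qquad \lim_{n \to \infty} r_n = \frac{1}{2} > 0,$$

so each r is strictly smaller than the last, yet none of them ever drops below 1/2.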

I agree that Scale-Consistency isn't as compelling as some of the other key principles in there. And, with totalism, it could be replaced with the principle you suggest in which multiplication is just duplicating the world k. Assuming totalism, that'd be a weaker claim, which is good. I guess one minor ... (read more)

3
MichaelStJules
4y
Oh, also you wrote "La is better than Lb" in the definition of Minimal Tradeoffs, but I think you meant the reverse? Isn't the problem if the r's approach 1? Specifically, for each lottery, get the infimum of the r's that work (it should be ≤1), and then take the supremum of those over each lottery. Your definition requires that this supremum is < 1.

Hmm, I think this kind of stochastic separability assumption implies risk-neutrality (under the assumption of independence of irrelevant alternatives?), since it will force your rankings to be shift-invariant. If you do maximize the expected value of some function of the total utilitarian sum (you're a vNM-rational utilitarian), then I think it should rule out non-linear functions of that sum.

However, what if we maximize the expected value of some function of the difference we make (e.g. compared to a "business as usual" option, subtracting the value of that option)? This way, we have to ignore the independent background B since it gets cancelled, and we can use a bounded vNM utility function on what's left.

One argument I've heard against this (from section 4.2 here) is that it's too agent-relative, but the intuition for stochastic separability itself seems kind of agent-relative, too. I suppose there are slightly different ways of framing stochastic separability, "What I can't affect shouldn't change what I should do" vs "What isn't affected shouldn't change what's best", with only the former agent-relative, although also more plausible given agent-relative ethics. If I reject agent-relative ethics, neither seems so obvious.

Just a note on the Pascal's Mugging case: I do think the case can probably be overcome by appealing to some aspect of the strategic interaction between different agents. But I don't think it comes out of the worry that they'll continue mugging you over and over. Suppose you (morally) value losing $5 to the mugger at -5 and losing nothing at 0 (on some cardinal scale). And you value losing every dollar you ever earn in your life at -5,000,000. And suppose you have credence (or, alternatively, evidential probability) of p that the mugger can a... (read more)
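To spell out the expected-value arithmetic of that setup (a sketch using only the numbers stated above; the comment is truncated, so this isn't necessarily how the author completes the argument): paying the mugger is worth -5 for sure, while refusing is worth -5,000,000 with probability p and 0 otherwise. Refusing has the higher expected value only when

$$-5{,}000{,}000 \cdot p > -5 \quad\Longleftrightarrow\quad p < \frac{5}{5{,}000{,}000} = 10^{-6},$$

so on these payoffs, expected value theory says to hand over the $5 whenever your credence that the mugger can carry out the threat exceeds one in a million.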

Yes, in practice that'll be problematic. But I think we're obligated to take both possible payoffs into account. If we do suspect the large negative payoffs, it seems pretty awful to ignore them in our decision-making. And then there's a weird asymmetry if we pay attention to the negative payoffs but not the positive.

More generally, Fanaticism isn't a claim about epistemology. A good epistemic and moral agent should first do their research, consider all of the possible scenarios in which their actions backfire, and put appropriate pro... (read more)

Both cases are traditionally described in terms of payoffs and costs just for yourself, and I'm not sure we have quite as strong a justification for being risk-neutral or fanatical in that case. In particular, I find it at least a little plausible that individuals should effectively have bounded utility functions, whereas it's not at all plausible that we're allowed to do that in the moral case - it'd lead to something a lot like the old Egyptology objection.

That said, I'd accept Pascal's wager in the moral case. It comes out of ... (read more)

This is pretty off-topic, sorry.

-7
srh3
4y
1
Pigman
4y
I see, thanks for the feedback, wasn't aware of the forum's rules

I think this is actually quite a complex question.

Definitely! I simplified it a lot in the post.

If the chance is high enough, it may in fact be prudent to front-load your donations, so that you can get as much out of yourself with your current values as possible.

Good point! I hadn't thought of this. I think it ends up being best to frontload if your annual risk of giving up isn't very sensitive to the amount you donate, it's high, and your income isn't going to increase a whole lot over your lifetime. I think those first two things m... (read more)

4
Jamie_Harris
4y
<<My guess is that the main reason for that is that more devoted people tend to pledge higher amounts.>>

That could account for part of it, though: according to this article, "multiple studies have demonstrated that people perform better when goals are set higher and made more challenging." I haven't looked into this in more detail, but I've heard other social scientists who research behaviour change make similar claims (e.g. on this podcast). My guess is that there's a sweet spot of challenge/demandingness that is optimal, and that that sweet spot varies substantially by the individual.

(PS thanks for this post, I've had similar thoughts before and like the theoretical demonstration in expected value terms of the risk of giving up.)

In reality, if we can figure out how to give a lot for one or two years without becoming selfish, we are more likely to sustain that for a longer period of time. This boosts the case for making larger donations.

Yep, I agree. In general, the real-life case is going to be more complicated in a bunch of ways, which tug in both directions.

Still, I suspect that, even if someone managed to donate a lot for a few years, there'd still be some small independent risk of giving up each year. And even a small such risk cuts down your expected lifetime donation... (read more)

The assumption that if she gives up, she is most likely to give up on donating completely seems not obvious to me. I would think that it's more likely she scales back to a lower level, which would change the conclusion.

Yep, I agree that that's probably more likely. I focused on giving up completely to keep things simple. But if it's even somewhat likely (say, 1% p.a.), that may make a far bigger dent in your expected lifelong donations than do risks of giving up partially.
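For what it's worth, here's a rough sketch of that comparison (all numbers hypothetical, not from the original post): a 1% p.a. chance of stopping donations entirely versus the same annual chance of merely scaling back to half.

```python
# Illustrative sketch (hypothetical numbers): expected lifetime donations for a donor
# facing (a) a small annual risk of stopping entirely, versus (b) the same annual risk
# of merely scaling back to a fraction of the original amount.

YEARS = 40          # length of the donating career
ANNUAL = 10_000     # donation per year while fully committed
P_EVENT = 0.01      # 1% p.a. chance that the "giving up" event happens
SCALE_BACK = 0.5    # fraction of donations kept after scaling back

def expected_total(full_stop: bool) -> float:
    """Expected lifetime donations if the event means stopping entirely (True)
    or scaling back to SCALE_BACK of the original amount (False)."""
    total = 0.0
    p_committed = 1.0  # probability the event hasn't happened yet
    for _ in range(YEARS):
        expected_this_year = p_committed * ANNUAL
        if not full_stop:
            expected_this_year += (1 - p_committed) * SCALE_BACK * ANNUAL
        total += expected_this_year
        p_committed *= 1 - P_EVENT
    return total

print(f"No risk at all:       {YEARS * ANNUAL:>10,.0f}")
print(f"1% p.a. full stop:    {expected_total(True):>10,.0f}")
print(f"1% p.a. scale-back:   {expected_total(False):>10,.0f}")
```

On these made-up inputs, the full-stop risk cuts roughly 17% off the no-risk total, while the scale-back risk cuts off roughly half that, which is the asymmetry the comment is pointing at.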

Perhaps we should be encouraging a strategy where people increase th... (read more)

I'd add one more: having to put your resources towards more speculative, chancy causes is more demanding.

When donating our money and time to something like bednets, the cost is mitigated by the personal satisfaction of knowing that we've (almost certainly) had an impact. When donating to some activity which has only a tiny chance of success (e.g., x-risk mitigation), most of us won't get quite the same level of satisfaction. And it's pretty demanding to have to give up not only a large chunk of your resources but also the satisfaction of having actually... (read more)

0
KevinWatkinson
6y
Thanks for that link, it's an interesting article. In the context of theory within the animal movement, Singer's pragmatism isn't particularly demanding, but a more justice-oriented approach is (along the lines of Regan). In my view it would be a good thing, not least for the sake of diversity of viewpoints, to make more claims around demandingness rather than largely following a less demanding position. Though I do think that, because people are not used to ascribing significant moral value to other animals, anything more than the societal level is therefore considered demanding, particularly in regard to considering speciesism alongside other forms of human discrimination.

Sorry about that, I hadn't seen that thread. Consider me well and truly chastened!

Hi Sam,

Thanks! Glad you liked it. It's currently just a preview and not actually published yet, so that's why some links and functionality may not work (and the post on the model I used is still yet to go up).

In regard to Q1 - I would like to, yeah. When it comes to the probabilities of different levels of warming, though, it's super uncertain. The ~1% chance of 10 degrees of warming is only under one of several possible probability distributions, and we really just don't have any clue which of those distributions is accurate. And in addition to the uncerta... (read more)
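To illustrate why the choice of distribution dominates here, a toy example (entirely made-up numbers, not the post's figures): suppose three candidate distributions put the chance of 10 degrees of warming at 1%, 0.1%, and 0.01%, and we spread our credence equally across them. The overall probability is then

$$P(\ge 10^{\circ}\mathrm{C}) \approx \tfrac{1}{3}\,(0.01 + 0.001 + 0.0001) \approx 0.37\%,$$

which is driven almost entirely by how much weight the most pessimistic distribution gets.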