reallyeli's Comments

Harsanyi's simple “proof” of utilitarianism

Thanks for the pointer to "independence of irrelevant alternatives."

I'm curious to know how you think about "some normative weight." I think of these arguments as being about mathematical systems that do not describe humans, hence no normative weight. Do you think of them as being about mathematical systems that *somewhat* describe humans, hence *some* normative weight?

Harsanyi's simple “proof” of utilitarianism

I think this math is interesting, and I appreciate the good pedagogy here. But I don't think this type of reasoning is relevant to my effective altruism (defined as "figuring out how to do the most good"). In particular, I disagree that this is an "argument for utilitarianism" in the sense that it has the potential to convince me to donate to cause A instead of donating to cause B.

(I really do mean "me" and "my" in that sentence; other people may find that this argument can indeed convince them of this, and that's a fact about them I have no quarrel with. I'm posting this because I just want to put a signpost saying "some people in EA believe this," in case others feel the same way.)

Following Richard Ngo's post https://forum.effectivealtruism.org/posts/TqCDCkp2ZosCiS3FB/arguments-for-moral-indefinability, I don't think that human moral preferences can be made free of contradiction. Although I don't like contradictions and don't want to have them, I also don't like things like the repugnant conclusion, and I'm not sure why the distaste for contradictions should be the one that always triumphs.

Since VNM-rationality is built on transitive preferences, and I disagree that human preferences can be or "should" be transitive, I interpret arguments like this as having no normative weight.
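(A quick sketch of the transitivity point, purely my own illustration rather than anything from the post; the labels and helper functions are made up for the example. The idea is that a cyclic strict preference like A ≻ B ≻ C ≻ A violates transitivity, and no real-valued utility function can represent it, which is why VNM-style arguments have to assume transitivity rather than derive it.)

```python
# Hedged sketch: a cyclic strict preference A > B > C > A violates
# transitivity, and no utility function can represent it.
from itertools import permutations

# Hypothetical strict preferences: (x, y) means "x is strictly preferred to y".
prefs = {("A", "B"), ("B", "C"), ("C", "A")}
items = ("A", "B", "C")

def is_transitive(prefs):
    # Transitivity: x > y and y > z should imply x > z.
    return all(
        (x, z) in prefs
        for (x, y) in prefs
        for (y2, z) in prefs
        if y == y2 and x != z
    )

def has_utility_representation(prefs, items):
    # Brute force: does any ranking of the items respect every stated preference?
    for ranking in permutations(items):
        utility = {item: -rank for rank, item in enumerate(ranking)}
        if all(utility[x] > utility[y] for (x, y) in prefs):
            return True
    return False

print(is_transitive(prefs))                      # False: A > B and B > C, but not A > C
print(has_utility_representation(prefs, items))  # False: the cycle fits no utility function
```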

Do impact certificates help if you're not sure your work is effective?

What is meant by "not my problem"? My understanding is that it means "what I care about is no better off if I worry about this thing than if I don't." Hence the analogy to salary: if all I care about is $$, then getting paid in Facebook stock means my utility is the same whether or not I worry about the value of Google stock.

It sounds like you're saying that, if I'm working at org A but getting paid in impact certificates from org B, the actual value of org A impact certificates is "not my problem" in this sense. Here obviously I care about things other than $$.

This doesn't seem right at all to me, given the current state of the world. Worrying about whether my org is impactful is my problem in that it might indeed affect things I care about, for example because I might go work somewhere else.

Thinking about this more, I recalled the strength of the assumption that, in this world, everyone agrees to maximize impact certificates *instead of* counterfactual impact. This seems like it just obliterates all of my objections, which are arguments based on counterfactual impact. They become arguments at the wrong level. If the market is not robust, that means more certificates for me *which is definitionally good*.

So this is an argument that if everyone collectively agrees to change their incentives, we'd get more counterfactual impact in the long run. I think my main objection is not about this as an end state — not that I'm sure I agree with that, I just haven't thought about it much in isolation — but about the feasibility of taking that kind of collective action, and about issues that may arise if some people do it unilaterally.

My personal cruxes for working on AI safety

I'm saying we need to specify more than "The chance that the full stack of individual propositions evaluates as true in the relevant direction." I'm not sure if we're disagreeing, or ... ?

My personal cruxes for working on AI safety

Suppose you're in the future and you can tell how it all worked out. How do you know if it was right to work on AI safety or not?

There are a few different operationalizations of that. For example, you could ask whether your work obviously and directly saved the world, or you could ask whether, knowing what you know now, you would still choose to work on AI safety if you could go back and do it over again.

The percentage would be different depending on what you mean. I suspect Gordon and Buck might have different operationalizations in mind, and I suspect that's why Buck's number seems crazy high to Gordon.

My personal cruxes for working on AI safety

I agree with this intuition. I suspect the question that needs to be asked is "14% chance of what?"

Do impact certificates help if you're not sure your work is effective?

I'm deciding whether organization A is effective. I see some respectable people working there, so I assume they must think work at A is effective, and I update in favor of A being effective. But unbeknownst to me, those people don't actually think work at A is effective; they've traded their impact certificates to other folks who do, and I don't know those other folks.

Based on the theory that it's important to know who you're trusting, this is bad.

Do impact certificates help if you're not sure your work is effective?

"The sense in which employees are deferring to their employer's views on what to do" sounds fine to me, that's all I meant to say.

Do impact certificates help if you're not sure your work is effective?

Sure, I agree that if they're anonymous forever you can't do much. But that was just the generating context; I'm not arguing only against anonymity.

I'm arguing against impact certificate trading as a *wholesale replacement* for attempting to update each other. If you are trading certificates with someone, you are deferring to their views on what to do, which is fine, but it's important to know you're doing that and to have a decent understanding of why you differ.
