MichaelStJules

Associate researcher in animal welfare at Rethink Priorities. Writing on behalf of myself only.

Also interested in ethics, philosophy of mind and reducing s-risks.

My background is mostly in pure math, computer science and deep learning, with some in statistics/econometrics and agricultural economics.

I consider myself suffering-focused, anti-speciesist, prioritarian and consequentialist.

My shortform.

Comments

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

Why try to maximize EV at all, though?

I think Dutch book/money pump arguments require you to rank unrealistic hypotheticals (e.g. where your subjective probabilities about, say, extinction risk are predictably manipulated by an adversary), and the laws of large numbers and central limit theorems can have limited applicability if there are too few statistically independent outcomes.

Much of our uncertainty should be correlated across agents even in a multiverse, e.g. uncertainty about logical implications, or about facts or tendencies of the world. We can condition on some of those uncertain possibilities separately, apply the LLN or CLT to each across the multiverse, and then aggregate over the conditions, but I'm not convinced this works out to give you EV maximization.
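To illustrate the LLN point with the post's own game: below is a minimal simulation (my illustration, not from the original comment) of repeated St. Petersburg payoffs. Because the game's expected value is infinite, the sample mean never converges, so "maximize EV" gets no long-run frequency justification here.

```python
import random

def st_petersburg_payoff():
    """Flip a fair coin until the first heads; pay 2**n, where n is the flip count."""
    n = 1
    while random.random() < 0.5:  # tails: keep flipping
        n += 1
    return 2 ** n

random.seed(0)
total = 0
for i in range(1, 10**6 + 1):
    total += st_petersburg_payoff()
    if i in (10**2, 10**3, 10**4, 10**5, 10**6):
        print(f"n = {i:>7}: running sample mean = {total / i:.1f}")

# The running mean drifts upward (roughly like log2 of the sample size)
# instead of converging: the payoff has infinite expectation, so the
# strong LLN's integrability condition fails and there is no finite EV
# for the sample mean to approach.
```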

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

FWIW, stochastic dominance is a bit stronger than you write here: A can strictly beat B at only some quantiles, with equality at the rest, and A still dominates B.
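For reference, the quantile formulation I mean (the standard definition of first-order stochastic dominance; the notation is mine):

```latex
% First-order stochastic dominance in quantile form (standard definition;
% notation mine, not from the original comment). Q_A(p) and Q_B(p) are
% the p-th quantiles of lotteries A and B.
\[
A \succ_{\mathrm{SD}} B \iff
Q_A(p) \ge Q_B(p) \ \text{for all } p \in (0,1),
\ \text{with strict inequality for some } p .
\]
```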

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

(Note: I've made several important additions to this comment within the first ~30 minutes of posting it, plus some more minor edits after.)

I think this is an important point, so I've given you a strong upvote. Still, I think total utilitarians aren't rationally required to endorse EV maximization or longtermism, even approximately, except under certain further assumptions.

Tarsney has also written that stochastic dominance doesn't lead to EV maximization or longtermism under total utilitarianism if the probabilities (probability differences) are low enough, and he has said it's plausible the probabilities are in fact that low (though not that it's his best guess that they are). See "The epistemic challenge to longtermism", and especially footnote 41.

It's also not clear to me that we shouldn't just ignore background noise that's unaffected by our actions, or more generally balance other concerns against stochastic dominance, like risk aversion or ambiguity aversion, particularly with respect to the difference one makes. This is discussed in section 7.5 of "The case for strong longtermism" by Greaves and MacAskill. Greaves and MacAskill do argue that ambiguity aversion with respect to outcomes doesn't point against existential risk reduction, and, if I recall correctly from following their citations, that ambiguity aversion with respect to the difference one makes is too agent-relative.

On the other hand, using your own precise subjective probabilities to define what's rationally required seems pretty agent-relative to me, too. Surely, if the correct ethics is fully agent-neutral, you should be required to do whatever actually maximizes value among the available options, regardless of your own particular beliefs about what's best. Or, at least, precise subjective probabilities seem hard to defend as agent-neutral, when different rational agents could hold different beliefs even with access to the same information, due to different priors or because they weigh evidence differently.

Plus, without separability (ignoring what's unaffected) in the first place, the case for utilitarianism itself seems much weaker, since the representation theorems that imply utilitarianism, like Harsanyi's (and a generalization here) and deterministic ones like the one here, require separability or something similar.
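To spell out where separability shows up, here is Harsanyi's aggregation theorem stated loosely (a standard result; the paraphrase and notation are mine, not from the original comment):

```latex
% Harsanyi's aggregation theorem, loosely stated: if each individual i
% and the social observer all satisfy the expected utility axioms, and
% the observer is indifferent whenever every individual is (Pareto
% indifference), then social utility is an affine combination of the
% individual utilities:
\[
U_{\text{social}} = c + \sum_i w_i \, u_i .
\]
% The additive form is the separability: each individual's contribution
% enters independently of how everyone else fares.
```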

St. Petersburg Demon – a thought experiment that makes me doubt Longtermism

Expected utility maximization with unbounded utility functions is in principle vulnerable to Dutch books/money pumps too, using a pair of binary choices made in sequence, each with infinitely many possible outcomes. See https://www.lesswrong.com/posts/gJxHRxnuFudzBFPuu/better-impossibility-result-for-unbounded-utilities?commentId=hrsLNxxhsXGRH9SRx
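One classic version of the idea, sketched in my own notation (simpler than, and not identical to, the construction in the linked post): unbounded utility permits a lottery with infinite expected utility, which an EV maximizer will pay to re-enter no matter what it has already won.

```latex
% A classic money-pump sketch for unbounded utilities (my illustration;
% not the exact construction from the linked post). If u is unbounded,
% choose outcomes x_n with u(x_n) = 2^n and let X pay x_n with
% probability 2^{-n}:
\[
\mathbb{E}[u(X)] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} = \infty .
\]
% Whatever finite outcome X resolves to, a fresh ticket X' minus a small
% fee still has infinite expected utility, so the EV maximizer always
% trades its realized prize plus the fee for a new ticket, and can be
% made to pay the fee indefinitely.
```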

The case to abolish the biology of suffering as a longtermist action

I'm saying the amount of suffering is not just the output of some algorithm or something written in memory. I would define it functionally/behaviourally, if at all, though possibly at the level of internal behaviour rather than external behaviour. But it would be more complex than your hypothesis makes it out to be.

The case to abolish the biology of suffering as a longtermist action

and suffering was a single number stored in memory

I think it's extraordinarily unlikely suffering could just be this. Some discussion here.

Physical theories of consciousness reduce to panpsychism

Maybe one good place to draw the line is whether the system does "better than chance" at implementing a function, in some way that's correlated with its inputs, but it's not clear that rules out panpsychism.

New 80k problem profile - Climate change

I think all effects in practice are indirect, but "direct" can be used to mean a causal effect for which we have direct evidence, i.e. we observed the cause's effect on the outcome without needing to discuss intermediate outcomes or piece together multiple steps of causal effects in a chain. The longer the causal chain, the more likely there are to be effects in the opposite direction along parallel chains. Furthermore, we should generally be skeptical of any causal claim, so the longer the causal chain, the more claims we should be skeptical of, and the weaker we should expect the overall effect to be.
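As a toy model of that discounting (my illustration, not from the original comment): if the claimed effect requires every link of the chain to hold, the credences multiply.

```latex
% Toy multiplicative-discounting model. If an effect requires each of n
% chain links to hold, and link i holds with probability p_i (treated
% as independent):
\[
\Pr(\text{whole chain holds}) = \prod_{i=1}^{n} p_i ,
\qquad \text{e.g. } 0.8^{5} \approx 0.33 .
\]
% Even individually credible links compound into a large discount for
% long chains.
```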
