
MichaelStJules

Independent researcher
12076 karma · Joined · Working (6-15 years) · Vancouver, BC, Canada

Bio

Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.

I've also done economic modelling for some animal welfare issues.

Sequences (3)

Radical empathy
Human impacts on animals
Welfare and moral weights

Comments (2552)

Topic contributions (12)

For a given individual, is their probability of making the difference higher for averting extinction or for a long-term trajectory change? If you discount small enough probabilities of making a difference, or are otherwise difference-making risk averse (as an individual), does one come out ahead as a result?

Some thoughts: extinction is a binary event, but there's a continuum of possible values that future agents could have, including under value lock-in. A small tweak in locked-in values seems more achievable counterfactually than being the difference for whether we go extinct. And a small tweak in locked-in values would still have astronomical impact if those values persist into the far future. It seems like value change might depend less on very small probabilities of making a difference.
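As a very rough sketch of what I mean (all the numbers here are made up purely for illustration, not estimates I'd defend), discounting tiny probabilities of making the difference can zero out the extinction term while leaving the value-change term untouched:

```python
# Toy comparison of an individual's expected counterfactual impact, with
# made-up numbers. A difference-making discounter treats probabilities of
# making the difference below some threshold as zero.

FUTURE_VALUE = 1e30          # hypothetical value of the long-term future
p_avert_extinction = 1e-12   # chance *you* tip a binary extinction outcome
p_tweak_values = 1e-6        # chance you nudge locked-in values a little
tweak_share = 1e-4           # fraction of future value the nudge affects

def expected_impact(p, value, threshold=0.0):
    """Expected impact, ignoring probabilities below the threshold."""
    return 0.0 if p < threshold else p * value

for threshold in (0.0, 1e-9):
    ext = expected_impact(p_avert_extinction, FUTURE_VALUE, threshold)
    traj = expected_impact(p_tweak_values, tweak_share * FUTURE_VALUE, threshold)
    print(f"threshold={threshold:.0e}: extinction={ext:.2e}, value change={traj:.2e}")
```

With these invented numbers, the value-change term survives the discount because it doesn't depend on a tiny probability of making the difference, while the extinction term gets zeroed out.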

Since others have discussed the implications, I want to push a bit on the assumptions.

I worry that non-linear axiologies[1] end up endorsing egoism, helping only those whose moral patienthood you're most confident in, or otherwise prioritizing them far too much over those whose moral patienthood is less certain. See Oesterheld, 2017 and Tarsney, 2023.

(I also think average utilitarianism in particular is pretty bad, because it would imply that if the average welfare is negative (even torturous), adding bad lives can be good, as long as they're even slightly less bad than average.)
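A quick toy calculation to make that concrete (the numbers are mine, just for illustration):

```python
# Toy numbers: ten people each at welfare -10, so the average is -10 (torturous).
population = [-10] * 10
with_extra = population + [-9]   # add a life that's bad, but slightly less bad than average

average = lambda xs: sum(xs) / len(xs)
print(average(population))   # -10.0
print(average(with_extra))   # about -9.91: the average rises, so average
                             # utilitarianism counts adding this bad life as an improvement
```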

Maybe you can get around this with non-aggregative or partially aggregative views. EDIT: Or, if you're worried about fanaticism, difference-making views.

  1. ^

    Assuming completeness, transitivity, and the independence of irrelevant alternatives, and that each marginal moral patient matters less.

I don't think it's valuable for their own sake to ensure future moral patients exist, and extinction risk reduction only really seems to expectably benefit the humans who would otherwise die in an extinction event, of whom there would be billions. Still, an astronomical number of future moral patients could have welfare at stake if we don't go extinct, so I'd prioritize them on the basis of their numbers.

See this comment and thread.

I go back and forth on whether "the good" exists, understood as my subjective ordering over each set of outcomes (or each set of outcome distributions). This example seems pretty compelling against it.

However, I'm first concerned with "good/bad/better/worse to someone" or "good/bad/better/worse from a particular perspective". Then, ethics is about doing better by and managing tradeoffs between these perspectives, including as they change (e.g. with additional perspectives created through additional moral patients). This is what my sequence is about. Whether "the good" exists doesn't seem very important.

Hmm, interesting.

I think this bit from the footnote helped clarify, since I wasn't sure what you meant in your comment:

Note, however, that there is no assumption that d - f are outcomes for anyone to choose, as opposed to outcomes that might arise naturally. Thus, it is not clear how the appeal to choice set dependent betterness can be used to block the argument that f is not worse than d, since there are no choice sets in play here.

 

I might be inclined to compare outcome distributions using the same person-affecting rules as I would for option sets, whether or not anyone is choosing between them. I think this can make sense on actualist person-affecting views, as illustrated with my "Best in the outcome" arguments here, which are framed in terms of betterness (between two outcome distributions) rather than choice. (The "Deliberation path argument" is framed in terms of choice.)

Then, I'd disagree with this:

And if, say, it makes the outcome better if an additional happy person happens to exist without anyone making it so

I'm a moral anti-realist (subjectivist), so I don't think there's an objective (stance-independent) fact of the matter. I'm just describing what I would expect to continue to endorse under (idealized) reflection, which depends on my own moral intuitions. The asymmetry is one of my strongest moral intuitions, so I expect not to give it up, and if it conflicts with other intuitions of mine, I'd sooner give those up instead.

How asymmetric do you think things are?

I think I'm ~100% on no non-instrumental benefit from creating moral patients. Also pretty high on no non-instrumental benefit from creating new desires, preferences, values, etc. within existing moral patients. (I try to develop and explain my views in this sequence.)

I haven't thought a lot about tradeoffs between suffering and other things, including pleasure, within moral patients that would exist anyway. I could see these tradeoffs going like they would for a classical utilitarian, if we hold an individual's dispositions fixed.

To be clear, I'm a moral anti-realist (subjectivist), so I don't think there's any stance-independent fact about how asymmetric things should be.

 

Also, I'm curious if we can explain why you react like this:

Maximising pleasure intuitively feels meh to me, but maximising suffering sounds pretty awful

Some ideas: complexity of value but not of disvalue? Or that the urgency of suffering is explained by the intensity of desire, not unpleasantness? Do you have any ideas?

A pause would still give more groups more time to catch up on existing research and to build infrastructure for AGI (energy, datacenters), right? Then when the pause is lifted, we could have more players at the research frontier and ready to train frontier models.

I think Thomas did not take a stance on whether it was axiological or deontic in the GPI working paper, and instead just described the structure of a possible view. Pummer described his as specifically deontic and not axiological.

I'm not sure what should be classified as axiological or how important the distinction is. I'm certainly giving up the independence of irrelevant alternatives, but I think I can still rank outcomes in a way that depends on the option set.
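To illustrate the kind of structure I have in mind (this is just a toy rule for illustration, not my considered view), an option's score can depend on which people are "necessary" relative to the option set, so pairwise rankings can change when an option is added:

```python
# A toy option-set-dependent rule (for illustration only). Each option maps
# people to welfare levels. People who exist in every option in the set count
# fully; people who exist only in some options count only if their welfare is
# negative. Scores, and hence the ranking, depend on the option set.

def score(option, option_set):
    necessary = set.intersection(*(set(o) for o in option_set))
    return sum(w for person, w in option.items() if person in necessary or w < 0)

def rank(option_set):
    return sorted(option_set, key=lambda o: score(o, option_set), reverse=True)

X = {"alice": 0, "bob": 10}
Z = {"alice": 1, "bob": 1}
Y = {"alice": 5}              # an option in which Bob never exists

print(rank([X, Z]))      # X above Z: Bob exists either way, so his +10 counts
print(rank([X, Y, Z]))   # Y first, then Z above X: Bob is contingent here, so his
                         # happiness doesn't count, and the X-vs-Z ranking flips
```

Within each option set, the scores still give a transitive ranking; what's given up is only that the ranking of X and Z stays the same regardless of what else is available.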

I'll get back to you on this, since I think this will take me longer to answer and can get pretty technical.
