
Michael St Jules 🔸

Grantmaking contractor in animal welfare
12382 karma · Working (6-15 years) · Vancouver, BC, Canada

Bio

Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.

I've also done economic modelling for some animal welfare issues.

Want to leave anonymous feedback for me (positive, constructive or negative)? https://www.admonymous.co/michael-st-jules

Sequences

Radical empathy
Human impacts on animals
Welfare and moral weights

Comments

FWIW, I wouldn’t consider planktonic animals necessarily brainless or unworthy of moral consideration. Peruvian anchoveta eat krill, which I imagine to be sentient with modest probability, and copepods, which I consider worth researching more.

Also, the primary beneficiaries of GiveWell-recommended charities are mostly infants and children, who eat less.

In case anyone is interested, I also have:

  1. unpublished material on the conscious subsystems hypothesis, related to neuron count measures, with my own quantitative models, arguments for my parameter choices/bounds and sensitivity analysis for chickens vs humans. Feel free to message me for access.
  2. arguments for animals mattering a lot in expectation on non-hedonistic stances here and here, distinct from RP's.

Maybe you can turn this into a FAQ by pulling out quotes or having an LLM summarize the explanations in your citations? I'm not sure if it's worth the effort, though, because people can just go read the citations.

I'd argue that if higher animal welfare and alternative proteins will be cheaper in X years, then interventions will be more cost-effective in X years, which might imply that we should "save and invest" (either literally, in capital, or conceptually, in movement capacity). Do you have any thoughts on that?


I agree they could be cheaper (in relative terms), but they may also be far more likely to happen anyway, without us saving and investing more on the margin. It's probably still worth ensuring a decent sum of money is saved and invested for this possibility, though.

Your 4 priorities seem reasonable to me. I might aim 2, 3 and 4 primarily at potentially extremely high-payoff interventions, e.g. s-risk reduction. They should beat 1 in expectation, and we should have plausible models for how they could.

It seems likely to me that donation opportunities will become less cost-effective over time, as problems become increasingly solved by economic growth and other agents. For example, the poorest people in the future will be wealthier and better off than the poorest people today. And animal welfare in the future will be better than today (although things could get worse before they get better, especially for farmed insects).
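As a rough illustration of the tradeoff (with made-up numbers, not a real estimate), the question is whether investment returns can outpace the decay in cost-effectiveness:

```python
# Toy comparison (made-up numbers, not a real estimate): donate now vs.
# invest for X years while cost-effectiveness declines as problems get
# solved by economic growth and other agents.

def give_now(budget, cost_effectiveness=1.0):
    """Welfare units bought by donating the full budget today."""
    return budget * cost_effectiveness

def invest_then_give(budget, years, annual_return=0.05, ce_decay=0.07):
    """Invest at a hypothetical annual return, then donate once
    cost-effectiveness has decayed at a hypothetical annual rate."""
    grown_budget = budget * (1 + annual_return) ** years
    cost_effectiveness = (1 - ce_decay) ** years
    return grown_budget * cost_effectiveness

for years in (0, 10, 20):
    print(years, give_now(100), round(invest_then_give(100, years), 1))
# If cost-effectiveness decays faster than investments grow,
# giving earlier wins; otherwise, saving and investing wins.
```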

Thanks for writing this!


What works today may be obsolete tomorrow

I'd like to reinforce and expand on this point. I think it pushes us towards interventions that benefit animals earlier, or with potentially large lasting counterfactual impacts through an AI transition. If the world, or animal welfare donors specifically, will be far wealthier in X years, then higher animal welfare and satisfying alternative proteins will be extremely cheap in relative terms in X years, and we'll get them basically for free. So we should probably severely discount any potential counterfactual impacts past X years.
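To sketch what such severe discounting could look like (hypothetical numbers only; the cutoff X and the post-X weight are placeholders):

```python
# Hypothetical illustration of the discounting point: if the world gets
# higher welfare and alternative proteins "for free" around year X,
# counterfactual impact credited past X should be heavily discounted.

def credited_impact(annual_impact, horizon_years, x, post_x_weight=0.05):
    """Sum annual counterfactual impact, weighting years after X by a
    small factor, since those gains would likely have happened anyway."""
    return sum(
        annual_impact * (1.0 if year < x else post_x_weight)
        for year in range(horizon_years)
    )

# With X = 20 and a 100-year horizon, almost all credited impact
# comes from the first 20 years: 20*1.0 + 80*0.05 = 24.0
print(credited_impact(1.0, horizon_years=100, x=20))
```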

I would personally focus on large payoffs within the next ~10 years and maybe work to shape space colonization to reduce s-risks, each when we're justified in believing the upsides outweigh the backfire risks, in a way that isn't very sensitive to our direct intuitions.

I'm not sure it needs a whole other large project, especially one started from scratch. You could just have a few people push further on these points, which seem like the most likely cruxes:

  1. Further developing and defending measures that scale with neuron counts.
  2. Assessing animals on normative stances besides expectational hedonistic utilitarianism.
  3. Defending less animal-friendly responses to the two envelopes problem (see prior writing and the comments here, here, here, here, here and here).
  4. EDIT, also: Assessing the probability that invertebrates of interest (and perhaps other animals of interest) can experience excruciating or unbearable pain, i.e. effectively all-consuming pain that an animal would be desperate to escape and take incredible risks to avoid.

And then have them come up with their own models and estimates. They could mostly rely on the studies and data RP collected on animals, although they could check the ones that seem most cruxy, too.
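For anyone unfamiliar with point 3, here's a minimal sketch of how the two envelopes problem arises for moral weights, with made-up numbers rather than RP's or anyone's actual estimates:

```python
# A minimal sketch of the two envelopes problem for moral weights,
# with made-up numbers. Suppose 50/50 credence that a chicken's
# welfare capacity is either 0.01x or 1x a human's.

p = 0.5
ratios_chicken_per_human = [0.01, 1.0]

# Fixing humans as the unit: expected chicken weight in human units.
e_chicken_in_human_units = sum(p * r for r in ratios_chicken_per_human)        # 0.505

# Fixing chickens as the unit: expected human weight in chicken units.
e_human_in_chicken_units = sum(p * (1 / r) for r in ratios_chicken_per_human)  # 50.5
implied_chicken_weight = 1 / e_human_in_chicken_units                          # ~0.0198

# The two framings disagree by a factor of ~25, so which unit you fix
# before taking expectations matters a lot:
print(e_chicken_in_human_units, implied_chicken_weight)
```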

Against option 3, you write:

There are many different ways of carving up the set of “effects” according to the reasoning above, which favor different strategies. For example: I might say that I’m confident that an AMF donation saves lives, and I’m clueless about its long-term effects overall. Yet I could just as well say I’m confident that there’s some nontrivially likely possible world containing an astronomical number of happy lives, which the donation makes less likely via potentially increasing x-risk, and I’m clueless about all the other effects overall. So, at least without an argument that some decomposition of the effects is normatively privileged over others, Option 3 won’t give us much action guidance.

Wouldn't you also say that the donation makes these happy lives more likely on some elements of your representor, via potentially decreasing x-risk? So then they're neither made determinately better off nor determinately worse off in expectation, and we can (maybe) ignore them.

Maybe you need some account of transworld identity (or counterparts) to match these lives across possible worlds, though.
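As a minimal sketch of the idea (with made-up numbers): take the representor to be a set of probability functions, each assigning its own expected long-term effect to the donation; the effect is determinately better (or worse) only if it's positive (or negative) on all of them:

```python
# A sketch of the representor idea with made-up numbers: each element of
# the representor (a set of probability functions) assigns a different
# expected long-term effect (in arbitrary units) to the donation via x-risk.

representor_evs = [-3.0, -0.5, 0.0, 1.2, 4.0]

determinately_better = all(ev > 0 for ev in representor_evs)
determinately_worse = all(ev < 0 for ev in representor_evs)

# Positive on some elements, negative on others: the long-term effect is
# indeterminate, so (maybe) we can set it aside and act on the
# determinate near-term effect of saving lives.
print(determinately_better, determinately_worse)  # False False
```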

I haven't read much of this post, so just call me out if this is totally off base, but I suspect you're treating events as more "independent" than you should.

Relevant: A nuclear war forecast is not a coin flip by David Johnston.

I also illustrated this in a comment there:

On the other extreme, we could imagine repeatedly flipping a coin with only heads on it, or a coin with only tails on it; we don't know which, but we think it's probably the one with only heads. Of course, this goes too far, since a single flip would be enough to find out which coin we were flipping. Instead, we could imagine two coins, one with only heads (or extremely biased towards heads), and the other a fair coin, and we lose if we get tails. The more heads we get, the more confident we should be that we have the heads-only coin.

To translate this into risks: we don't know what kind of world we live in and how vulnerable it is to a given risk, and the probability that the world is vulnerable to the given risk at all is an upper bound for the probability of catastrophe. As you suggest, the more time goes on without catastrophe, the more confident we should be that we aren't so vulnerable.
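Here's a minimal sketch of the update in the two-coin analogy (the 50/50 prior is just a placeholder):

```python
from fractions import Fraction

# A sketch of the two-coin analogy: 50/50 prior between a heads-only coin
# (an invulnerable world) and a fair coin (a vulnerable world), where
# tails means catastrophe. Each year without catastrophe is a heads.

def p_vulnerable(n_heads, prior_vulnerable=Fraction(1, 2)):
    """Posterior probability of the fair (vulnerable) coin after
    n consecutive heads, by Bayes' rule."""
    like_fair = Fraction(1, 2) ** n_heads   # fair coin: (1/2)^n for n heads
    like_heads_only = Fraction(1)           # heads-only coin always lands heads
    num = prior_vulnerable * like_fair
    return num / (num + (1 - prior_vulnerable) * like_heads_only)

for n in (0, 1, 5, 10, 20):
    # P(catastrophe next year) <= P(world is vulnerable at all):
    print(n, float(p_vulnerable(n)))
```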
