Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.
I've also done economic modelling for some animal welfare issues.
Want to leave me anonymous feedback (positive, constructive, or negative)? https://www.admonymous.co/michael-st-jules
In case anyone is interested, I also have:
I'd argue that if higher animal welfare and alternative proteins will be cheaper in X years, that suggests interventions will be more cost-effective in X years, which might imply that we should "save and invest" (either literally, in capital, or conceptually, in movement capacity). Do you have any thoughts on that?
I agree they could be cheaper (in relative terms), but they may also be far more likely to happen anyway, without us saving and investing more on the margin. It's probably still worth ensuring a decent sum of money is saved and invested for this possibility, though.
Your 4 priorities seem reasonable to me. I might aim 2, 3 and 4 primarily at interventions with potentially extremely high payoffs, e.g. s-risk reduction. They should beat 1 in expectation, and we should have plausible models for how they could.
It seems likely to me that donation opportunities will become less cost-effective over time, as problems become increasingly solved by economic growth and other agents. For example, the poorest people in the future will be wealthier and better off than the poorest people today. And animal welfare in the future will likely be better than it is today (although things could get worse before they get better, especially for farmed insects).
Thanks for writing this!
What works today may be obsolete tomorrow
I'd like to reinforce and expand on this point. I think it pushes us towards interventions that benefit animals earlier, or that have potentially large lasting counterfactual impacts through an AI transition. If the world, or animal welfare donors specifically, will be far wealthier in X years, then higher animal welfare and satisfying alternative proteins will be extremely cheap in relative terms in X years and we'll get them basically for free. So we should probably severely discount any potential counterfactual impacts past X years.
I would personally focus on large payoffs within the next ~10 years and maybe work to shape space colonization to reduce s-risks, each when we're justified in believing the upsides outweigh the backfire risks, in a way that isn't very sensitive to our direct intuitions.
I'm not sure it needs a whole other large project, especially one started from scratch. You could just have a few people push further on these points, which seem like the most likely cruxes:
And then have them come up with their own models and estimates. They could mostly rely on the studies and data RP collected on animals, although they could check the ones that seem most cruxy, too.
Against option 3, you write:
There are many different ways of carving up the set of “effects” according to the reasoning above, which favor different strategies. For example: I might say that I’m confident that an AMF donation saves lives, and I’m clueless about its long-term effects overall. Yet I could just as well say I’m confident that there’s some nontrivially likely possible world containing an astronomical number of happy lives, which the donation makes less likely via potentially increasing x-risk, and I’m clueless about all the other effects overall. So, at least without an argument that some decomposition of the effects is normatively privileged over others, Option 3 won’t give us much action guidance.
Wouldn't you also say that the donation makes these happy lives more likely on some elements of your representor, via potentially decreasing x-risk? So then they're neither made determinately better off nor determinately worse off in expectation, and we can (maybe) ignore them.
Maybe you need some account of transworld identity (or counterparts) to match these lives across possible worlds, though.
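To make that concrete, here's a minimal sketch (in Python, with entirely made-up numbers, not anything from the post) of the kind of check I have in mind: a decomposed effect counts as determinately positive (or negative) in expectation only if its expected value has that sign under every probability function in the representor; otherwise its sign is indeterminate and, on the view above, we can maybe ignore it.

```python
# Minimal sketch with made-up numbers: evaluate one decomposed effect
# (the change in the chance of an astronomical number of happy lives)
# under each probability function in a representor.

def expected_effect(delta_x_risk, value_if_realized=1.0):
    # Expected change in value from this effect alone:
    # decreasing x-risk makes the happy lives more likely.
    return -delta_x_risk * value_if_realized

# Representor: each element is one admissible estimate of how much the
# donation changes x-risk (positive = increases it). Illustrative only.
representor = [1e-12, 2e-13, -5e-13, -1e-12]

evs = [expected_effect(d) for d in representor]

if all(ev > 0 for ev in evs):
    verdict = "determinately positive in expectation"
elif all(ev < 0 for ev in evs):
    verdict = "determinately negative in expectation"
else:
    verdict = "indeterminate sign -- (maybe) ignorable on the view above"

print(evs, verdict)
```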
I haven't read much of this post, so just call me out if this is totally off base, but I suspect you're treating events as more "independent" than you should.
Relevant: A nuclear war forecast is not a coin flip by David Johnston.
I also illustrated this in a comment there:
At the other extreme, we could imagine repeatedly flipping either a coin with only heads on it or a coin with only tails on it; we don't know which, but we think it's probably the heads-only one. Of course, this goes too far, since a single flip is enough to find out which coin we're flipping. Instead, we could imagine two coins, one with only heads (or extremely biased towards heads) and the other fair, and we lose if we get tails. The more heads we get, the more confident we should be that we have the heads-only coin.
To translate this into risks: we don't know what kind of world we live in or how vulnerable it is to a given risk, and the probability that the world is vulnerable to the given risk at all is an upper bound on the probability of catastrophe. As you suggest, the more time goes on without catastrophe, the more confident we should be that we aren't so vulnerable.
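As a rough illustration, here's a minimal sketch in Python with made-up numbers (the prior and per-period probabilities are illustrative assumptions, not anything from Johnston's post): credence in the "vulnerable" hypothesis, and hence the bound on next-period catastrophe probability, falls as catastrophe-free periods accumulate.

```python
# Toy Bayesian update on the two-coin picture: hypothesis S = "safe"
# (heads-only coin, per-period catastrophe probability 0) vs
# hypothesis V = "vulnerable" (fair coin, per-period catastrophe
# probability h). Observing n catastrophe-free periods (all heads)
# shifts credence towards S. All numbers are illustrative.

prior_vulnerable = 0.3   # prior credence that the world is vulnerable at all
h = 0.5                  # per-period catastrophe probability if vulnerable

for n in [0, 1, 5, 10, 20]:
    # Likelihood of n catastrophe-free periods under each hypothesis
    like_v = (1 - h) ** n
    like_s = 1.0
    posterior_v = (prior_vulnerable * like_v) / (
        prior_vulnerable * like_v + (1 - prior_vulnerable) * like_s
    )
    # posterior_v upper-bounds the next-period catastrophe probability,
    # which here is posterior_v * h
    print(n, round(posterior_v, 6), round(posterior_v * h, 6))
```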
FWIW, I wouldn’t consider planktonic animals necessarily brainless or unworthy of moral consideration. Peruvian anchoveta eat krill, which I imagine to be sentient with modest probability, and copepods, which I consider worth researching more.