Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.
I've also done economic modelling for some animal welfare issues.
Want to leave anonymous feedback for me, positive, constructive or negative? https://www.admonymous.co/michael-st-jules
Here's how I'd think about "4 My argument", in actualist preference-affecting terms[1]:
My preferences will differ between pressing button 1 and not pressing button 1, because my preferences track the world's[2] preferences[3], and the world's preferences will differ by Frank's. Then:
See this piece for more on actualist preference-affecting views.
Past, current and future, or just current and future.
Or also desires, likes, dislikes, approval, disapproval, pleasures in things, displeasures in things, evaluative attitudes, etc., as in Really radical empathy.
I'd guess this is pretty illustrative of differences in how we think about person-affecting views, and why I think violations of "the independence of irrelevant alternatives" and "Losers Can Dislodge Winners" are not a big deal:
Narrow views imply that in the choice between 1 and 2, you can choose either. But why in the world would adding 3, which is itself impermissible to take, affect that?
Run through the reasoning on the narrow view with and without 3 available and compare them. The differences in reasoning, which ultimately follow from narrow person-affecting intuitions, are why: those intuitions explain why this happens. You're asking as if there's no reason, or no good reason. But if you were sufficiently sympathetic to narrow person-affecting intuitions, then you'd have good reasons: those intuitions and how you reason with them.
(Not that you referred directly to "the independence of irrelevant alternatives" here, but violation of it is a common complaint against person-affecting views, so I want to respond to that directly here.) 3 is not an "irrelevant" alternative, because when it's available, we see exactly how it's relevant when it shows up in the reasoning that leads us to 2. I think "the independence of irrelevant alternatives" has a misleading name.
Adding some other choice you’re not allowed to take to an option set shouldn’t make you no longer allowed to choose a previously permissible option. This would be like if you had two permissible options: saving a child at personal risk, or doing nothing, and then after being offered an extra impermissible option (shooting a different child), it was no longer permissible to do nothing. WTF?
And this seems disanalogous to me, because you don't provide any reason at all for how the third option would change the logic, whereas we do have such reasons in the earlier hypothetical.
I think we have very different intuitions.
I don't think giving up axiology is much of a bullet to bite, if any, and I find the frameworks I linked more plausible.
The problems with axiology also seem worse to me, often as a consequence of failing to respect what individuals (would) actually care about and so failing at empathy, one way or another, as I illustrate in my sequence.
Giving up axiology to hold on to a not even very widely shared intuition?
What do you mean to imply here? Why would I force myself to accept axiology, which I don't find compelling, at the cost of giving up my own stronger intuitions?
And is axiology (or the disjunction of conjunctions of intuitions from which it would follow) much more popular than person-affecting intuitions like the Procreation Asymmetry?
Giving up the idea that the world would be better if it had lots of extra happy people and every existing person was a million times better off?
I think whether or not a given person-affecting view has to give that up can depend on the view and/or the details of the hypothetical.
Better at a basic level, i.e., with respect to what they care about non-derivatively, not necessarily the things they care about by derivation from other things they care about, because they can be mistaken in their derivations.
Moral realism, that there's good or bad independently of individuals' stances (or evaluative attitudes, as in my first post), seems to me to be a non-starter. I've never seen anything close to a good argument for moral realism, maybe other than epistemic humility and wagers.
The versions of person-affecting views that are to me best motivated, most intuitive and most plausible let go of axiology as normally conceived. They don't have a single privileged "objective" impartial order over possible outcomes, and they violate the independence of irrelevant alternatives, but there are straightforward explanations for how those alternatives come to not actually be irrelevant. See:
It does look like most studies suggested small or no effects beyond about 10 meters away, but I wonder how much they focused on eggs, larvae and zooplankton, which are plausibly more sensitive. For example, from this study (discussion):
Experimental air gun signal exposure decreased zooplankton abundance when compared with controls, as measured by sonar (~3–4 dB drop within 15–30 min) and net tows (median 64% decrease within 1 h), and caused a two- to threefold increase in dead adult and larval zooplankton. Impacts were observed out to the maximum 1.2 km range sampled, which was more than two orders of magnitude greater than the previously assumed impact range of 10 m. Although no adult krill were present, all larval krill were killed after air gun passage.
This might be an outlier study, though. I had Perplexity attempt a systematic review here.
I'd guess aquatic noise reduces populations across trophic levels. There's some evidence for this across different animal size groups. It also seems a priori more likely that smaller animals, like zooplankton, will suffer the largest population-relative direct effects at a given noise volume, because the force will be larger relative to their body and organ sizes (though they may be closer to or farther from noise sources). And since they feed other animals up the food chain, higher trophic levels would have less available food, too.
Speculating pretty wildly: maybe larger primarily herbivorous species would be least affected or could increase in populations due to reduced competition for food with smaller herbivores. That could actually mean increasing the average welfare of aquatic animals.
Figured I'd flag that it seems pretty likely to me that aquatic noise reduces populations (and unlikely that it increases them), both fish and invertebrates, by increasing mortality and reducing fertility. See this thread with Perplexity AI (I haven't carefully verified the accuracy, though). So, reducing aquatic noise increases wild animal populations in expectation, which is plausibly bad if their lives are bad on average. There could be tradeoffs with average welfare, but I'd be at best clueless about whether reducing aquatic noise is good or bad overall for aquatic animals in the near term.
I suspect we need to involve our criteria for defining and picking bracketings here.
In practice, I think it doesn't make sense to just bracket in the bad long-term effects or just bracket in the good ones. You might be able to carve out bracketings that include only bad (or only good) long-term effects and effects outweighed by them, but not all bad (or all good) long-term effects. But that will depend on the particulars.
I think if we only do spatiotemporal bracketing, it tells us to ignore the far future and causally inaccessible spacetime locations, because each such location is made neither determinately better off in expectation nor determinately worse off in expectation. I'm not entirely sure where the time cutoff should start in practice, but it would be related to AGI's arrival. That could make us neartermist.
But we may also want to bracket out possibilities, not just spacetime locations. Maybe we can bracket out AGI by date X, for various X (or down to the minimum probability of it across choices, in case we affect its probability), and focus on non-AGI outcomes we may be more clueful about. If we bracket out the right set of possibilities, maybe some longtermist interventions will look best.
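To make this more concrete, here's a minimal Python sketch of the kind of procedure I have in mind. The location names, numbers and credence sets are hypothetical assumptions of mine for illustration, not estimates from anywhere, and this isn't meant to pin down any particular formal account of bracketing: compare two options location by location, bracket out any location where the sign of the expected difference isn't determinate across the plausible credence assignments, and compare the options on what's left.

```python
# Toy sketch of bracketing for a pairwise comparison of two options, A and B.
# All location names and numbers are hypothetical illustrations, not real estimates.

from typing import Dict, List

# For each "location" (a spacetime region, or a set of possibilities), the
# difference in expected value (A minus B) under each of several plausible
# credence assignments / world-models (imprecise credences).
diffs_by_location: Dict[str, List[float]] = {
    "near_term_humans":  [+2.0, +3.0, +2.5],       # A robustly better here
    "near_term_animals": [+1.0, +0.5, +1.5],       # A robustly better here
    "far_future":        [-100.0, +150.0, -20.0],  # sign indeterminate -> bracket out
}


def sign_is_determinate(diffs: List[float]) -> bool:
    """A location counts only if A vs B has the same (weak) sign under every model."""
    return all(d >= 0 for d in diffs) or all(d <= 0 for d in diffs)


def bracketed_comparison(diffs: Dict[str, List[float]]) -> float:
    """Sum the average A-minus-B differences over locations with a determinate
    sign, bracketing out locations where the sign depends on the model."""
    return sum(
        sum(values) / len(values)
        for values in diffs.values()
        if sign_is_determinate(values)
    )


print(bracketed_comparison(diffs_by_location))  # > 0: A beats B on the bracketed comparison
```

Bracketing out possibilities rather than spacetime locations would just mean keying the dictionary by sets of possibilities (e.g. "AGI by date X" vs. not) instead of by regions of spacetime.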
My sense is that if you're weighing nematodes, you should also consider things like conscious subsystems or experience sizes that could tell you larger-brained animals have thousands or millions of times more valenced experiences or more valence at a time per individual organism. For example, if a nematode realizes some valence-generating function (or indicator) once with its ~302 neurons, how many times could a chicken brain, with ~200 million neurons, separately realize a similar function? What about a cow brain, with 3 billion neurons?
Taking expected values over those hypotheses and different possible scaling law hypotheses tends, on credences I find plausible, to lead to expected moral weights scaling roughly proportionally with the number of neurons (see the illustration in the conscious subsystems post). But nematodes (and other wild invertebrates) could still matter a lot even on proportional weighting, e.g. as you found here.
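As a rough illustration of the expected-value point (the credences and scaling hypotheses below are made-up assumptions of mine, not numbers from the conscious subsystems post): if you normalize so a nematode realizes the valence function once and spread credence over constant, square-root and linear scaling with neuron count, the linear term dominates for large brains, so expected weights end up roughly proportional to neuron counts.

```python
# Toy expected moral weights under uncertainty over scaling hypotheses.
# Credences, hypotheses and the normalization are my own illustrative assumptions.

NEURONS = {
    "nematode": 302,            # ~C. elegans
    "chicken": 200_000_000,
    "cow": 3_000_000_000,
}

BASE = NEURONS["nematode"]  # normalize: a nematode realizes the valence function once

# hypothesis name: (credence, number of realizations as a function of neuron count n)
HYPOTHESES = {
    "one_realization_regardless": (0.3, lambda n: 1.0),
    "sublinear_sqrt_scaling":     (0.3, lambda n: (n / BASE) ** 0.5),
    "linear_in_neurons":          (0.4, lambda n: n / BASE),
}


def expected_weight(n: int) -> float:
    """Credence-weighted average of the weight each hypothesis assigns."""
    return sum(p * f(n) for p, f in HYPOTHESES.values())


for animal, n in NEURONS.items():
    print(f"{animal}: expected weight ~ {expected_weight(n):,.1f} nematode-equivalents")

# The cow:chicken ratio of expected weights comes out to ~15, close to the 15x
# ratio of their neuron counts, so the expectation scales roughly linearly for
# large brains; but nematodes can still matter a lot in aggregate given their numbers.
```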
Of interest, see the comments on Thornley's EA Forum post. I and others have left multiple responses to his arguments.