Philosophy, global priorities and animal welfare research. My current specific interests include: philosophy of mind, moral weights, person-affecting views, preference-based views and subjectivism, moral uncertainty, decision theory, deep uncertainty/cluelessness and backfire risks, s-risks, and indirect effects on wild animals.
I've also done economic modelling for some animal welfare issues.
Want to leave anonymous feedback for me, positive, constructive or negative? https://www.admonymous.co/michael-st-jules
Where bliss or joy shout “more please,” equanimity has a subtle draw that made me go on retreat a second time but didn’t have me cling to it.
Interestingly, I think the "more please" or craving is dissociable from pleasure, even if not in typical cases. From my piece Pleasure and suffering are not conceptual opposites:
Pleasure and unpleasantness need not involve desire, at least conceptually, and it seems pleasure at least does not require desire in humans. Desire, as motivational salience, depends on brain mechanisms in animals distinct from those for pleasure, and these can be manipulated separately (Berridge, 2018; Nguyen et al., 2021; Berridge & Dayan, 2021), including by reducing desire (incentive salience) without also reducing drug-induced euphoria (Leyton et al., 2007; Brauer & De Wit, 1997). Berridge and Kringelbach (2015) summarize the last two studies as follows:
human subjective ratings of drug pleasure (e.g., cocaine) are not reduced by pharmacological disruption of dopamine systems, even when dopamine suppression does reduce wanting ratings (Brauer and De Wit, 1997, Leyton et al., 2007)
On the other hand, in humans and other animals, the aversive salience of physical pain may not be empirically separable from its unpleasantness (Shriver, 2014), but as far as I can tell, the issue is not settled.
The first group of people are wrong because the probability that you personally avert AI catastrophe isn’t that small.
What do you estimate it to be, given all of the other actors in the space focused on this binary outcome?
Also, how high should the probability difference be for you to think devoting your career to it makes sense, rather than taking minimal precautions with low opportunity costs, like how we think about seatbelts and insurance against very unlikely events?
I'm more inclined towards functionalist interpretations of welfare, on which something like relative functional significance determines welfare levels. E.g. something's attention-grabbing capacity helps to determine its welfare significance. In that case, you might be deeply skeptical that small animals have the right functional roles at all, but once you grant that they do, it is much more plausible that their welfare ranges are similar to humans'.
One possibility for attention-grabbing: beings' welfare ranges may be proportional to how much attention they have to grab, and beings with richer/more detailed experiences could have more units of attention to be grabbed, on an analogy between the number of details in a visual field and the number of pixels on a computer screen. That being said, I'm not sure it's any less valid for welfare ranges to be independent of the number of possible separate elements in conscious attention at a time, and I suspect this is just a matter of normative interpretation, not a matter of empirical fact.
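For concreteness, here's a minimal sketch of the linear "units of attention" model, assuming (hypothetically) that welfare range scales with the number of separately attendable elements; the species counts and the `welfare_range` function are made-up illustrations, not empirical claims:

```python
# A minimal sketch of the linear "units of attention" model. All species
# counts here are hypothetical placeholders, not empirical estimates.

ATTENTION_UNITS = {
    "human": 1000,
    "chicken": 200,
    "fruit fly": 10,
}

def welfare_range(species: str, human_range: float = 1.0) -> float:
    """Welfare range relative to humans, if it scales linearly with attention units."""
    return human_range * ATTENTION_UNITS[species] / ATTENTION_UNITS["human"]

for species in ATTENTION_UNITS:
    print(species, welfare_range(species))

# The alternative interpretation above would replace this linear scaling with
# a constant (equal welfare ranges regardless of attention units), which is
# part of why the choice looks normative rather than empirical.
```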
I also think there are degrees to which something is an attentional mechanism at all or has a given functional role, that could have normative significance, and it's unlikely that there's an objective fact of the matter about how we should weigh these degrees. See my piece Gradations of moral weight, basically another two envelopes problem.
Some thoughts about using the "random option" as the default:
Obviously dividing your time totally randomly into tiny non-contiguous units is horrible for actually achieving anything. Maybe we just combine them into bigger contiguous blocks by assumption. Or we allow some kind of cooperation or positive-sum trades that will often in practice lead to contiguous blocks of time.
Do you (Michael) see your views about precise and imprecise credences significantly affecting what you would actually do in the real world in a scenario where you had to blame Jones or Smith?
Probably not. I see it more as illustrative of important cases. Imagine instead that it's a choice between supporting an intervention or not, with similar complexity and considerations going in each direction.
More relevant examples to us could be: the effects of crops vs nature on wild animals, of climate change on wild animals, of fishing on wild animals, the far future effects of our actions, and the acausal influence of our actions. These are all things I feel clueless enough about to mostly bracket away and ignore when they are side effects of direct interventions I'm interested in supporting. I'm not ignoring them because I think they're small; I think they are likely much larger than the effects I'm not ignoring.
I may also want to further study some of them, but I'm often not that optimistic about making much progress (especially on far future effects and acausal influence), or about that progress being used in a way that isn't net negative overall by my lights.
If I asked you to actually decide who's more likely to be the culprit, how would you do it?
What do you do if you don't have reference class information for each part of the problem? How do you weigh the conflicting evidence? I'm imagining that at many steps, you'd have to rely on direct impressions or numbers that just came to mind.
Would you feel like whatever came out was very arbitrary and depended too much on those impressions and numbers? Would you actually believe and endorse the result? Would you defend it to other people?
Some questions here are whether 50-50 as a precise starting probability is reasonable, and whether the approach by which you assigned 50-50 as a precise probability is reasonable.
If, when looking at the scenario, your reaction would have been something like "wow, that's so complicated and I'm clueless, so 50-50", then your reaction almost certainly would have been the same if the example had originally included one extra eyewitness in favour of one side. But then this tells you your initial way of assigning credences was insensitive to this small difference. And yet, after the initial assignment, you say it should be sensitive.
Or, if you forgot your initial judgement or the number of eyewitnesses and were just given the totals and looked at the situation with fresh eyes, you'd come up with 50-50 again.
Alternatively, you could build a precise probability distribution as a function of the evidence that weighs it all, but this would be very sensitive to arbitrary choices.
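As a toy illustration of both points, here's a sketch assuming (hypothetically) that each eyewitness contributes an independent likelihood ratio for one side; the `posterior_jones` function and all numbers are made up. It shows how a precise posterior both moves with one extra eyewitness, as it should, and swings widely with the arbitrary choice of likelihood ratio:

```python
# Toy Bayesian update for the Jones vs Smith example. Hypothetical assumption:
# each eyewitness for a side multiplies the odds by the same likelihood ratio LR.

def posterior_jones(n_for_jones: int, n_for_smith: int, lr: float, prior: float = 0.5) -> float:
    """Precise posterior probability that Jones is the culprit."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * lr ** (n_for_jones - n_for_smith)
    return posterior_odds / (1 + posterior_odds)

# One extra eyewitness for Jones (3 vs 2) under different assumed reliabilities:
for lr in (1.5, 2.0, 4.0):
    print(lr, round(posterior_jones(3, 2, lr), 3))
# -> 0.6, 0.667, 0.8: the same small difference in evidence yields quite
# different precise answers, depending on the arbitrary choice of LR.
```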
In some cases, we can't gather strong enough evidence, say because:
In such cases, I think imprecise probabilities are the way to go to reduce arbitrariness. We can do sensitivity analysis: if whether the intervention looks good or bad overall depends highly on fairly arbitrary judgements or priors, we might disprefer it and prefer to support things that are more robustly positive. This is difference-making ambiguity aversion.
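Here's a minimal sketch of that kind of sensitivity analysis, assuming (hypothetically) that cluelessness is represented by a set of candidate priors for one key parameter; the `ev` function, payoffs, and priors are all illustrative placeholders:

```python
# A sketch of sensitivity analysis over an imprecise credal set. The expected
# value function, payoffs, and candidate priors are hypothetical placeholders.

def ev(p: float, benefit: float = 10.0, harm: float = -8.0) -> float:
    """Expected value of the intervention if the key event has probability p."""
    return p * benefit + (1 - p) * harm

candidate_priors = [0.3, 0.4, 0.5, 0.6, 0.7]  # the imprecise credal set

evs = [ev(p) for p in candidate_priors]
print([round(v, 1) for v in evs])   # [-2.6, -0.8, 1.0, 2.8, 4.6]
print(all(v > 0 for v in evs))      # False: not robustly positive

# Since the sign flips across the credal set, a difference-making
# ambiguity-averse agent would disprefer this intervention in favour of
# options that come out positive under every prior in the set.
```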
And/or we can do some kind of bracketing.
Also, you should think of research as an intervention itself that could backfire. Who could use the research, and could they use it in ways you'd judge as very negative? How likely is that? This will of course depend on the case and your own specific views.
Hmm, the view in my sequence Radical empathy is consequentialist-compatible and judgemental, but is designed to judge exactly as others judge, on their behalf.