Did you read any books you would not recommend? Because that would be a useful thing to hear too.
(Also, this list makes me feel bad about my lack of book reading...)
Ah, that's a nice point. I discuss in 5.5 in the paper. Quote:
The final condition is whether different individuals use the same endpoints at a time. There are two types of concern here.
The first is whether there are what Nozick (1974, 41) called ‘utility monsters’, individuals who can and do experience much greater magnitudes of happiness (or any other sort of subjective state) than others.
I won’t dwell on this as it seems unlikely there would be substantial differences in humans’ capacities for subjective experiences. Presumably there are evolutionary pressures for each species to have a range of sensitivity that is optimal for survival. To return to an example noted earlier, being immune to pain is an extremely problematic condition that would put someone at an evolutionary disadvantage. Further, even if there are differences, we would expect these to be randomly distributed, in which case they would wash out in large samples.
So to generate a serious worry that there's a problem at the level of group averages (which is the relevant level for most relevant decision-making), you'd have to argue for and explain the existence of non-trivial differences between groups. It's tricky to think of real-life cases outside people who have genetic conditions. But this wouldn't motivate us to think, say, that members of two nations have different capacities.
Just to flag, this topic has been the subject of three forum posts in the last six months. This paper addresses the concerns raised there.
Milan Griffes asks whether SWB scales might shift over time (intertemporal cardinality) and Fin Moorhouse shared his dissertation on the same topic.
Aidan Goth, in a post commenting on a forum post by the Happier Lives Institute ("Using Subjective Well-Being to Estimate the Moral Weights of Averting Deaths and Reducing Poverty"), wonders whether subjective scales are comparable across people (interpersonal cardinality).
In this paper, I argue the scales are likely to be cardinally comparable both over time and across people. This is something of a bold claim to make and, if true, is pretty important, because it means we can basically interpret subjective data at face value, rather than worrying about having to make fancy adjustments based on e.g. the nationality of the respondents.
Yes, these are some of the many things I wish I'd known in advance of starting on the project!
Just on the different effect sizes from different methods, where do/would RCT methods fit in with the four discussed by Kaats?
FWIW, I agree that a meta-analysis of RCTs isn't a like-for-like comparison with a single RCT. That said, when (if?) we exhaust the existing SWB literature relevant to cost-effectiveness, we should present everything we find (which shouldn't be hard as there's not much!).
Does he have a position on moral uncertainty and, if so, what does he take its implications to be?
"if it is in our power to prevent something bad from happening, without thereby sacrificing anything of comparable moral importance, we ought, morally, to do it"
Hello Jason. Thanks for doing all this work! I haven't kept up with all of it, so apologies if you've covered this elsewhere, but I had a nascent thought that links and challenges your two tentative conclusions.
Okay, so the idea is that valenced states - colloquially, pleasure and pain - provide "oomph" to get creatures to do things. That seems fine. But it's unclear what this tells us about the intensities of experiences. Imagine we have two creatures that are the same, except A has 10x the valence intensity of B. Why should there be any difference in how the two of them behave and thus in their evolutionary fitness? Couldn't they just act in the same way? And supposing more oomph is better, how much oomph should we expect, given there are, e.g., energy costs to producing sensations?
From the armchair, what matters for behaviour is the relative intensity of different things for a given creature: if the deer loves eating berries and doesn't fear pain enough to run away from wolves, it will get eaten. But that doesn't tell us about inter-creature cardinal intensities.
My thought is something like this. Creatures need a range of cardinal intensities large enough to allow them to choose between all the different behaviours they need to undertake to survive and reproduce. As a toy example, if you only have 3 levels of pleasure - 0, 1, and 2 - but you have very many different choices to make - eat, mate, run away, sleep, etc. - then that's not enough resolution to make decisions. An entity that needs to make more decisions needs a greater range of sensations.
This takes us back, crudely, to something like brain size as a proxy for the intensity of valenced states. And the possibility that 'simple' creatures, i.e. those that don't have lots of decisions to make, don't feel very much. I'm not sure where that leaves us in practice.
I don't yet have a strong view on how plausible it is that animal advocacy is a priority for longtermism. However, I think it's worth noting that, if it is, there are probably quite a few other sorts of projects that would qualify using exactly the same arguments.
For instance, at the Happier Lives Institute, we spend a lot of time thinking about how best to measure well-being. There's an analogous argument that, if governments had better measures of well-being - e.g. better than GDP - and used them to make public policy decisions, that would have enormously valuable consequences over the long run. I won't do it here, but the arguments are sufficiently analogous that, in Tobias' post, you could replace "animal advocacy" with "well-being measurement", keep the rest of the text the same, and it would still make sense. So perhaps well-being measurement is a plausible longtermist priority too.
Other examples that might work include, just off the top of my head: "democratic institutions", "peace building", "education".
It's not clear to me if the right way to update is (a) all these 'society change' interventions are plausible longtermist priorities or (b) none of them are. I lean toward (a), but I'm not very confident.
That's a nice point. What life satisfaction views require, more specifically, is not just that the entity thinks about its life as a whole, but that it thinks about its life as a whole and makes a judgement about how its life is going overall. It's rather implausible that animals do the latter, which means they have no well-being on this theory.