Currently working on psychological questions related to minimalist axiologies and on reasons to be careful about the practical implications of abstract formalisms.
I have MA and BA degrees in psychology, with minors in mathematics, cognitive science, statistics, computer science, and analytic philosophy.
Thanks for the screencast. I listened to it — with a ‘skip silence’ feature to skip the typing parts — instead of watching, so I may have missed some points. But I’ll comment on some points that felt salient to me. (I opt out of debating due to lack of time, as it seems that we may not have that many relevantly diverging perspectives to try to bridge.)
Error One
I didn't read linked material to try to clarify matters, except to notice that this linked paper abstract doesn't use the word "quality". I think, for this issue, the article should stand on its own OK rather than rely on supplemental literature to clarify this.
Good catch; the rough definition that I used for Archimedean views — that “quantity can always substitute for quality” — was actually from this open access version.
Whether qualitative differences exist and matter and are strict is one issue, and whether many small quantities can add together to equal a large quantity is a separate issue (though the issues are related in some ways). So I think there's some confusion or lack of clarity about this.
Here, the main point (for my examination of Archimedean and lexical views) is just that Archimedean views always imply the “can add together” part (i.e. aggregation & outweighing), and that Archimedean views essentially deny the existence of any “strict” morally relevant qualitative differences over and above the quantitative differences (of e.g. two intensities of suffering). By comparison, lexical views can entail that two different intensities of suffering differ not only in terms of their quantitative intensity but also in terms of a strict moral priority (e.g. that any torture is worse than any amount of barely noticeable pains, all else equal).
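The contrast above can be sketched in code. This is my own toy formalization (not from the article): an outcome is a list of `(intensity, count)` pairs of suffering, an Archimedean view scores it by aggregate quantity alone, and a lexical view gives the worst intensity strict priority.

```python
# Toy sketch (illustrative only): an "outcome" is a list of
# (suffering_intensity, count) pairs.

def archimedean_badness(outcome):
    """Archimedean: only aggregate quantity matters, so enough
    mild pains can always add up to outweigh one intense pain."""
    return sum(intensity * count for intensity, count in outcome)

def lexical_worse(a, b):
    """Lexical sketch: the worst intensity has strict priority;
    quantity only breaks ties within the same worst intensity."""
    worst_a = max((i for i, _ in a), default=0)
    worst_b = max((i for i, _ in b), default=0)
    if worst_a != worst_b:
        return worst_a > worst_b
    return archimedean_badness(a) > archimedean_badness(b)

torture = [(1000, 1)]       # one instance of intensity-1000 suffering
mild = [(1, 2_000_000)]     # two million barely noticeable pains

# Archimedean: the mild pains aggregate to outweigh the torture.
assert archimedean_badness(mild) > archimedean_badness(torture)
# Lexical: the torture stays worse, no matter how many mild pains.
assert lexical_worse(torture, mild)
```

The point of the sketch is only structural: on the Archimedean scoring, some quantity of mild pains always exists that outweighs the torture; on the lexical comparison, no quantity does.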
Error Two [+ Offsetting and Repugnance + Bonus Comments on Offsetting]
I agree that money and debt are good examples of ‘positive’ and ‘negative’ values that can sometimes be aggregated in the way that offsetting requires; after all, it seems reasonable for some purposes to model debt as negative money. We also seem to agree that ‘happiness’ or ‘positive welfare’ is not ‘negative suffering’ in this sense (cf. Vinding, 2022).
Re: “I figure most people also disagree with suffering offsetting” — I wish this were true, but I’m not sure it is. Perhaps most people also haven’t deeply considered what kind of impartial axiology they would reflectively endorse.
Re: “offsetting in epistemology” — interesting points, though I’m not immediately sold on the analogy. :) (Of course, you don’t claim that the analogy is perfect; “there's overlap in the reasons for why they're wrong, so it's problematic (though not necessarily wrong) to favor them in one field while rejecting them in another field”.)
My impression is that population axiology is widely seen as a ‘pick your poison’ type of choice: each option has purportedly absurd implications, and people pick the view whose implications seem to them intuitively the least ‘absurd’ (i.e. ‘repugnant’). Similarly, if/when people introduce e.g. deontological side-constraints on top of a purely consequentialist axiology, one can model the epistemological process (of deciding whether to subscribe to pure consequentialism or to e.g. consequentialism+deontology) as one of intuitively weighing up the felt ‘absurdity’ (‘repugnance’) of the implications that follow from each view. (Moreover, one could think of the choice criterion as simply “pick the view whose most repugnant implication seems the least repugnant”, with no offsetting of repugnance.)
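The parenthetical choice criterion is essentially a minimax rule, which can be sketched as follows (the views and their repugnance scores are hypothetical, purely for illustration):

```python
# Hypothetical repugnance scores for each view's implications.
views = {
    "view_A": [9, 1, 1],   # one very repugnant implication, two mild ones
    "view_B": [6, 6, 6],   # several moderately repugnant implications
}

# Minimax rule: no summing ("offsetting") of repugnance across a
# view's implications; only its single worst implication counts.
def least_repugnant_view(views):
    return min(views, key=lambda v: max(views[v]))

# view_B wins (worst implication 6 < 9), even though its *total*
# repugnance (18) exceeds view_A's (11) — i.e. no offsetting.
assert least_repugnant_view(views) == "view_B"
```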
I would think that my post does not necessarily imply an offsetting view in epistemology. After all, when I called my conclusion — i.e. that “the XVRCs generated by minimalist views are consistently less repugnant than are those generated by the corresponding offsetting views” — “a strong point in favor of minimalist views over offsetting views in population axiology, regardless of one’s theory of aggregation”, this doesn’t need to imply that these XVRC comparisons would “offset” any intuitive downsides of minimalist views. All it says, or is meant to say, is that the offsetting XVRCs are comparatively worse. Of course, one may question (and I imagine you would :) whether these XVRC comparisons are the most relevant — or even a relevant — consideration when deciding whether to endorse an offsetting or a minimalist axiology.
Error Three [i.e. the related issues that you raised under that heading]
Re: framing about why this matters — the article begins with the hyperlinked claim that “Population axiology matters greatly for our priorities.” It’s also framed as a response to the XVRC article by Budolfson and Spears, so I trust that my article would be read mostly by people who know what population axiology is and why it matters (or quickly find out before reading fully). I guess an article can only be read independently of other sources after people are first sufficiently familiar with some inevitable implicit assumptions an article makes. (On the forum, I also contextualize my articles with the tags, which one can hover over for their descriptions.)
To say that population axiology doesn’t particularly matter seems like a strong claim given that the field seems to influence people’s views on the (arguably quite fundamentally relevant) question of what things do or don’t have intrinsic (dis)value. But I might agree that the field “is confused” given that so much of population axiology entails assumptions, such as Archimedean aggregative frameworks, that often seem to get a free pass without being separately argued for at all.
Regarding the implicit assumptions of population axiology — and re: my not mentioning political philosophy (etc.) — I would note that the field of population axiology seems to be about ‘isolating’ the morally relevant features of the world in an ‘all else equal’ kind of comparison, i.e. about figuring out what makes one outcome intrinsically better than another. So it seems to me that the field of population axiology is by design focused on hard tradeoffs (thus excluding “win/win approaches”) and on “out of context” situations, with the latter meant to isolate the intrinsically relevant aspects of an outcome and exclude all instrumental aspects — even though the instrumental aspects may in practice be more weighty, which I also explore in the series.
One could think of axiology as the theoretical core question of what matters in the first place, and political philosophy (etc.) as the practical questions of how to best organize society around a given axiology or a variety of axiologies interacting / competing / cooperating in the actual complex world. (When people neglect to isolate the core question, I would argue that people often unwittingly conflate intrinsic with instrumental value, which also seems to me a huge flaw in a lot of supposedly isolated thought experiments because these don’t take the isolation far enough for our practical intuitions to register what the imagined situations are actually supposed to be like. I also explored these things earlier in the series.)
[the etymology of value lexicality (‘superiority’)]
My attempt to answer this was actually buried in footnote 8 :)

> “Lexicographic preferences” seem to be named after the logic of alphabetical ordering. Thus, value entities with top priority are prioritized first regardless of how many other value entities there are in the “queue”.
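The alphabetical analogy can be made concrete in code: Python compares tuples lexicographically, so the first component has strict priority no matter how large the later components get.

```python
# Lexicographic comparison: the first position settles the ordering.
a = (1, 0)           # e.g. some top-priority value present
b = (0, 10**100)     # none present, but an astronomical amount of
                     # whatever sits behind it in the "queue"

# No quantity in the later positions can compensate for the first.
assert a > b
```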
[whether a more fitting name for minimalist axiologies is ‘minimalist axiologies’ or ‘minimizing axiologies’]
I think ‘minimalist’ also works in the other sense that you evoked, because offsetting axiologies seem to add further assumptions on top of those that minimalist axiologies already entail. For example, my series tends to explore welfarist minimalist axiologies that assume only a single disvalue (such as suffering, or craving, or disturbance), with no second value entity that would correspond to a positive counterpart of the first (cf. Vinding, 2022). By comparison, offsetting axiologies such as classical utilitarianism are arguably dualistic in that they assume two different value entities with opposite signs. And monism is arguably a theoretically desirable feature, given the problem of value incommensurability between multiple intrinsic (dis)values.
(Thanks also for the comments on upvote norms. I agree with those. Certainly one shouldn’t be unthinkingly misled into assuming that the community wants to see more of whatever gets upvoted-without-comment, because the lack of comments may indeed reflect some problems that one would ideally fix so as to make things easier to more deeply engage with.)
Sounds interesting. Can we submit our own writing? If so, I'm curious what might be important errors in this post.
Relevant recent posts:
https://www.simonknutsson.com/undisturbedness-as-the-hedonic-ceiling/
https://centerforreducingsuffering.org/phenomenological-argument/
(I think these unpack a view I share, better than I have.)
Edit: For tranquilist and Epicurean takes, I also like Gloor (2017, sec. 2.1) and Sherman (2017, pp. 103–107), respectively.
To modify the monk case, what if we could (costlessly; all else equal) make the solitary monk feel a notional 11 units of pleasure followed by 10 units of suffering?
Or, extreme pleasure of "+1001" followed by extreme suffering of "-1000"?
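The two monk cases above can be written out under the ‘hedonic arithmetic’ that offsetting views assume (a toy model, not anyone’s precise theory), contrasted with a minimalist evaluation that counts only the suffering:

```python
# (units of pleasure, units of suffering) for the two monk cases.
cases = [(11, 10), (1001, 1000)]

for pleasure, suffering in cases:
    offsetting_value = pleasure - suffering   # nets out to +1 in both cases
    minimalist_value = -suffering             # only the suffering counts
    assert offsetting_value > 0   # offsetting: the sequence is worth having
    assert minimalist_value < 0   # minimalist: worse than an undisturbed state
```

The arithmetic itself is trivial; the disagreement is over whether the subtraction in the first line is a legitimate operation at all.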
Cases like these make me doubt the assumption of happiness as an independent good. I know meditators who claim to have learned to generate pleasure at will in jhana states, who don't buy the hedonic arithmetic, and who prefer states of unexcited contentment to states of intense pleasure.
So I don't want to impose, from the outside, assumptions about the hedonic arithmetic onto mind-moments who may not buy them from the inside.
Additionally, I feel no personal need for the concept of intrinsic positive value anymore, because all my perceptions of positive value seem perfectly explicable in terms of their indirect connections to subjective problems. (I used to use the concept, and it took me many years to translate it into relational terms in all the contexts where it pops up, but I seem to have now uprooted it so that it no longer pops to mind, or at least it has stopped doing so over the past four years. In programming terms, uprooting the concept entailed refactoring a lot of dependencies on other concepts, but eventually the tab explosion started shrinking back down, and it turned out to be perfectly possible to think without the concept. It would be interesting to hear whether this has simply "clicked" for anyone upon reading analytical thought experiments, because for me it felt more like how I would imagine a crisis of faith to feel for a person who loses their faith in a <core concept>, including the possibly arduous cognitive task of learning to fill the void and seeing what roles the concept had played.)
I kindly ask third parties to be mindful of the following points concerning the above reply.
(1)
(1) + (2)
(2)
> Would an agent who accepted strong pessimism [i.e. the view that there are no independent goods]—which I absolutely believe we should reject—have most reason to end their own life? Not necessarily. An altruistic agent with this evaluative outlook would have strong instrumental reason to remain alive, in order to alleviate the suffering of others.
I agree that life can be worth living for our positive roles in terms of reducing overall suffering or dukkha. More than that, such a view seems (to me at least) like a perfectly valid view on what constitutes evaluative meaning and positive value.
Indeed, if I knew for a fact that my life were overall (hopelessly) increasing suffering or dukkha, then this would seem to me like a strong reason not to live it, regardless of what I get to experience. So I'm curious how the author has come to believe that we should absolutely reject this view in favor of, presumably, offsetting views.
> However, such an agent would be forced to accept the infamous null-bomb implication, which says that the best thing to do would be to permanently destroy all sentient life in the universe. I join almost every other philosopher in taking the fact that an ethical theory accepts the null-bomb implication as a decisive reason to reject the theory (as not merely misguided, but horrifically so).
To properly consider such a theoretical reductio, I trust that most philosophers would agree (on reflection) that we need to account for potential confounders such as status quo bias, omission bias, self-serving bias, and whether alternative views have any less horrific theoretical implications.
In particular, offsetting views theoretically imply things like the “Very Repugnant Conclusion”, “Creating Hell to Please the Blissful”, and “Intense Bliss with Hellish Cessation”, none of which seems to me any less horrific than the non-creation of an imperfect world (cf. the consequentialist equivalence of cessation and non-creation).
Are these decisive reasons to reject offsetting views? A proponent of such views could still argue that such implications are only theoretical, that we shouldn't let them (mis)guide us in practice, and that the practical implications of impartial consequentialism are a separate question.
Yet the quoted passage neglects to mention that the very same response applies to minimalist consequentialism (whose proponents take pains to practically highlight the importance of cooperation, the avoidance of accidental harm, and the promotion of nonviolence).
I would just generally caution against performing such theoretical reductios so hastily. After all, a more bridge-building and illuminating approach is to consider the confounding factors and intuitions behind our differing perceptions on such questions, which I hope we can all do to better understand each other's views.
Thanks for compiling this! The structure feels very approachable. The bar for engagement is also greatly lowered by your inclusion of the recap, the comparison of theories, and the pointers for discussion and feedback.
Regarding the linked sections, the strongest consensus about the definition of flourishing indeed seems to involve an emphasis on relationships, purpose, and meaning. To me, this emphasis seems to be in tension with the tendency of standard (welfarist) population ethics to only count welfare as a kind of isolated "score" that applies to each life under the (radical) assumption of "all else being equal".
Specifically, perhaps none of the popular notions of flourishing is even possible to actualize in an "all else equal" life. After all, those notions seem to depend (at least partly, if not fully) on our life making a positive difference for others. For me, the centrality of such a causal link back to others casts doubt on the concept of 'flourishing lives' as something that could be mass-produced to independently improve the overall value of the world (contra arguments such as astronomical waste / Bostrom, 2003).
In other words, I think a perfectly valid rejection of the experience machine is to say that entering the machine would sever the essential causal connections of what positive roles we play for how others feel, which seems central to many if not all definitions of flourishing (i.e. the kind of life that we want ours to become).
—
So I'm curious what you, or the reviewed theorists, might say about:
1. Is flourishing even possible "all else being equal", such as in an experience machine?
2. Relatedly: To what degree does flourishing refer to positive intrinsic vs. extrinsic value?
—
(For my own take, there's e.g. the brief section on "self-contained versus relational flourishing". Worth noting is also that a relational i.e. extrinsic notion of flourishing is perfectly compatible with minimalist theories of welfare, such as the Buddhism-inspired views of antifrustrationism by Fehige, 1998 and tranquilism by Gloor, 2017, which work without needing the assumption of intrinsic positive value at all.
They essentially say that, "all else equal", we are just as well off by satisfying a desire or an unmet need as we would be by letting go of it. Yet a minimalist notion of flourishing would highlight the importance of seeking to satisfy [rather than letting go of] our desires whenever doing so is aligned with making an overall positive difference for others. This we cannot do in an experience machine — nor in population ethics — where such flourishing is impossible, but can do all the time in daily life where other things are never completely unaffected by our actions.)
Offtopic: This post was a joy to read. Would love to read if you have any thoughts to share about writing in general, but no worries if not. :) Welcome to the forum.