
Teo Ajantaival

Researcher @ Center for Reducing Suffering
667 karma · Joined Jan 2019 · Finland

Bio

Working on psychological questions related to minimalist axiologies and on reasons to be careful about the practical implications of abstract formalisms.

I have MA and BA degrees in psychology, with minors in mathematics, cognitive science, statistics, computer science, and analytic philosophy.

Sequences (1)

Minimalist Axiologies: Alternatives to 'Good Minus Bad' Views of Value

Comments (67)

Topic contributions (7)

Greetings. :) This comment seems to concern a strongly NU-focused reading of the nonconsequentialist sections, which is understandable given that NU, particularly its hedonistic version NHU, is probably by far the most salient and well-known example of a minimalist moral view.

However, my post’s focus is much broader than that. The post doesn’t even mention NU except in the example given in footnote 2, and is never restricted to NHU (nor to NU of any kind, insofar as the utilitarian part entails a commitment to additive aggregation). For brevity, many examples were framed in terms of reducing suffering. Yet the points aren’t restricted to hedonistic views, as they would also apply to minimalist moral views with non-hedonistic accounts of wellbeing. And if we consider only NHU, then the most relevant sections would be the ones on minimalist rule and multi-level consequentialism.

The comment seems to assume that the minimalist versions of {virtue ethics / deontology / social contract theory / care ethics} would have their nonconsequentialist moral reasons grounded in NHU. Yet then they wouldn’t contain genuinely nonconsequentialist elements, but would rather be practical heuristics in the service of NHU. My main point there was that a minimalist moral view could endorse separate moral reasons against engaging in {vice, rights violations, breaking of norms, or uncaring responses}, independent of their effects on conscious experiences. To define the former in terms of the latter would seem to collapse back into welfarism.

I like how the sequence engages with several kinds of uncertainties that one might have.

I had two questions:

1. Does the sequence assume a ‘good minus bad’ view, where independent bads (particularly, severe bads like torture-level suffering) can always be counterbalanced or offset by a sufficient addition of independent goods?

  • (Some of the main problems with this premise are outlined here, as part of a post where I explore what might be the most intuitive ways to think of wellbeing without it.)

2. Does the sequence assume an additive / summative / Archimedean theory of aggregation (i.e. that “quantity can always substitute for quality”), or does it also engage with some forms of lexical priority views (i.e. that “some qualities get categorical priority”)?

The links are to a post where I visualize and compare the aggregation-related ‘repugnant conclusions’ of different Archimedean and lexical views. (It’s essentially a response to Budolfson & Spears, 2018/2021, but can be read without having read them.) To me, the comparison makes it highly non-obvious whether Archimedean aggregation should be a default assumption, especially given points like those in my footnote 15, where I argue, and point to arguments, that a lexical priority view of aggregation need not, on a closer look, be implausible in either theory or practice:

it seems plausible to prioritize the reduction of certainly unbearable suffering over certainly bearable suffering (and over the creation of non-relieving goods) in theory. Additionally, such a priority is, at the practical level, quite compatible with an intuitive and continuous view of aggregation based on the expected amount of lexically bad states that one’s decisions may influence (Vinding, 2022b, 2022e).

Thus, ‘expectational lexical minimalism’ need not be implausible in either theory or practice, because in practice we always have nontrivial uncertainty about when and where an instance of suffering becomes unbearable. Consequently, we should still be sensitive to variations in the intensity and quantity of suffering-moments. Yet we need not formalize any part of our decision-making process as Archimedean aggregation over tiny intrinsic disvalues; we can instead think in terms of continuous probabilities, and expected amounts, of lexically bad suffering.
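(To make the ‘expectational’ part a bit more concrete, here is a rough sketch in my own notation, not taken from Vinding’s texts: if $S(a)$ is the set of suffering-moments that an action $a$ may influence, and $p_i$ is the probability that moment $i$ turns out to be unbearable, one could rank actions by the expected number of lexically bad moments,

$$E[B(a)] = \sum_{i \in S(a)} p_i.$$

Because each $p_i$ can vary continuously with the intensity and circumstances of the moment, this ranking stays sensitive to variations in the intensity and quantity of suffering without ever aggregating tiny intrinsic disvalues in the Archimedean way.)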

The above questions/assumptions seem practically relevant for whether to prioritize (e.g.) x-risk reduction over the reduction of severe bads / s-risks. However, it seems to me that these questions are (within EA) often sidelined, not deeply engaged with, or given strong implicit answers one way or another, without any flagging of their crucial relevance for cause prioritization.

Thus, for anyone who feels uncertain about these questions (i.e. resisting a dichotomous yes/no answer), it could be valuable to engage with them as additional kinds of uncertainties that one might have.

Related:

  • Reply by Vinding (2022)

Perhaps see also:

It seems like, in terms of extending lives, minimalist views have an Epicurean view of the badness of death / value of life? The good of saving a life is only the spillovers (what the person would do to the wellbeing of others, the prevented grief, etc.).

Solely for one's own sake, yes, I believe that experientialist minimalist views generally agree with the Epicurean view of the badness of death. But I think it's practically wise to always be mindful of how narrow the theoretical, individual-focused, 'all else equal' view is. As I note in the introduction,

in practice, it is essential to always view the narrow question of ‘better for oneself’ within the broader context of ‘better overall’. In this context, all minimalist views agree that life can be worth living and protecting for its overall positive roles.

I also believe that exp min views formally agree with the meaning of your second sentence above (assuming that the "etc" encompasses the totality of the positive roles of the lives saved and of the saving itself). But it might be slightly misleading to say that the views imply that the goodness of lifesaving would be "only the spillovers" (🙂), given that the positive roles could in practice be orders of magnitude more significant than the suffering that the life would cause or contain. This applies of course also in the other direction (cf. the 'meat-eater problem' etc.). But then we may still have stronger (even if highly diffuse) instrumental reasons to uphold or avoid eroding impartial healthcare and lifesaving norms, which could normatively support extending even those lives whose future effects would look overall negative on exp min wellbeing views.

Additionally, whether we take an anthropocentric or an antispeciesist view, a separate axis is still whether the view focuses mainly on severe bads like torture-level suffering (as my own view tends to). On such severe-bad-focused views, one could roughly say that it's always good to extend lives whose total future effects amount to a "negative torture footprint" (and conversely that the extension of lives with a positive such footprint might be overall bad, depending still on the complex value of upholding/eroding positive norms etc.).

(For extra-experientialist minimalist views, it's not clear to what degree they agree with the Epicurean view of death. That class of views is arguably more diverse than are exp min views, with some of the former implying that a frustrated preference to stay alive, or a premature death, could itself be a severe bad — potentially a worse bad than what might otherwise befall one during one's life. It depends on the specific view and on the individual/life in question.)

If we narrow the scope to improving existing lives, is the general conclusion of minimalist wellbeing theories that we should deliver interventions that prevent/reduce suffering rather than add wellbeing? 

Strictly and perhaps pedantically speaking, theories of wellbeing alone don't imply any particular actions in practice, since the practical implications will also depend on our normative views, which many people might consider separate from theories of wellbeing per se.

But yeah, if one construes "adding wellbeing" as something that cannot be interpreted as "reducing experiential bads" (nor as reducing preference frustration, interest violations, or objective list bads), I guess it makes sense to say that minimalist wellbeing theories would favor interventions whose outcomes could be interpreted in the latter terms, such as preventing/reducing suffering rather than adding wellbeing as a 'non-relieving good'.

Regarding the existing measures of 'life satisfaction' (and perhaps how to reinterpret them in minimalist terms), I should first note that I'm not very familiar with how they're operationalized. But my hunch is that they might easily measure more of an 'outside view' of one's entire life — as if one took a 3rd person, aggregative look at it — rather than a more direct, 'inside view' of how one feels in the present moment. And I think that at least for the experientialist minimalist views that were explored in the post, it might make more sense to think of such views as being focused on the inside view, i.e. on the momentary quality of one's experiential state (which is explicitly the focus in tranquilism).

A problem with the 'outside view' could be that it may become cognitively/emotionally inaccessible to us how we actually felt during times when we might have given a life satisfaction rating of 0/10 (or -5/10, or just a very "low" score), and thus we might effectively ignore the subjective weight those experiences had at the time if we later attempt to aggregate over the varying degrees of frustration/satisfaction during our entire life. And if we as researchers care about how minimalist views would estimate the value of some wellbeing interventions, it's worth noting that people with minimalist intuitions often see a theoretical or practical priority to reduce/prevent the most subjectively bad experiences. So perhaps a better practical wellbeing measure for (experientialist) minimalist views would be something like experience sampling — ideally such that it would capture how much people in fact appreciate the contrast in moving up from the lowest scores (and not only the perhaps relatively 'non-relieving' movement from 7 to 8, 8 to 9, or 9 to 10).

Thanks, and no worries about the scope! Others may know better about the practical/quantification questions, but I'll say what comes to mind.

1. Rather than assuming positive units, one could interpret wellbeing changes in comparative terms (of betterness/worseness), which don't presuppose an offsetting view. For some existing measures, perhaps this would be only a matter of reinterpreting the data. A challenge would be how to account for the relational value of e.g. additional life years, given that experientialist minimalist views wouldn't consider them an improvement in wellbeing solely for one's own sake (all else equal). This raises the complex question of how to estimate the value of years added to the life of people who don't live for their own sake; presumably the narrow, individual-focused approach wouldn't see it as an improvement (in exp min terms), but then I'd probably search for less narrow approaches in practice.

a. Depends on how they're defined. Purely suffering-focused views would be minimalist. Other suffering-focused views could allow offsetting in some cases. Prioritarianism could mean that we prioritize helping the worst off, but need not specify what counts as helping; for instance, it could still count the addition of 'non-relieving goods' as a form of helping that simply ought to go to the worst off first.

b. Sure, though I guess we could then raise another such case: a life whose moments of unbearable agony are supposedly just barely outweighed by its other moments after accounting for the discounts. (At least for me, the common theme in why I tend to find such implications problematic seems to relate to the offsetting premise itself, namely to how the moments of subjectively unbearable agony presumably don't agree with it.)

2. Perhaps the key difference is that minimalist preferentialism would equate complete preference satisfaction with "0%" preference frustration, whereas offsetting preferentialism would count (at least some) satisfied preferences as somehow positively good beyond their being 0% frustrated. The latter raises the problems of treating preference satisfaction as an independent good that could offset frustration. (Cf. "Making desires satisfied, making satisfied desires" by Dietz, 2023, e.g. the cases in section 2.3.)
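(A toy formalization of the contrast, mine rather than Dietz's: if $f$ measures the degree of frustrated preferences in a life and $s$ the degree of satisfied ones, the minimalist view evaluates the life as $-f$, so complete satisfaction simply reaches the ceiling of $0$, whereas the offsetting view evaluates it roughly as $s - f$, so additional satisfied preferences count as a positive good that can compensate for frustration.)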

Maps are great!

I also love the Maps of Science, by Dominic Walliman (@Domain of Science): https://youtube.com/playlist?list=PLOYRlicwLG3St5aEm02ncj-sPDJwmojIS

Offtopic: This post was a joy to read. Would love to read any thoughts you might have to share about writing in general, but no worries if not. :) Welcome to the forum.

Just listened to it! The pleasant and thoughtful narration by Adrian Nelson felt perfect for the book. I might even recommend the audiobook version over the text version to people who might otherwise find it distressing to think about s-risks. :)

Thanks for the screencast. I listened to it — with a ‘skip silence’ feature to skip the typing parts — instead of watching, so I may have missed some points. But I’ll comment on some points that felt salient to me. (I opt out of debating due to lack of time, as it seems that we may not have that many relevantly diverging perspectives to try to bridge.)

 

Error One

I didn't read linked material to try to clarify matters, except to notice that this linked paper abstract doesn't use the word "quality". I think, for this issue, the article should stand on its own OK rather than rely on supplemental literature to clarify this.

Good catch; the rough definition that I used for Archimedean views — that “quantity can always substitute for quality” — was actually from this open access version.

Whether qualitative differences exist and matter and are strict is one issue, and whether many small quantities can add together to equal a large quantity is a separate issue (though the issues are related in some ways). So I think there's some confusion or lack of clarity about this.

Here, the main point (for my examination of Archimedean and lexical views) is just that Archimedean views always imply the “can add together” part (i.e. aggregation & outweighing), and that Archimedean views essentially deny the existence of any “strict” morally relevant qualitative differences over and above the quantitative differences (of e.g. two intensities of suffering). By comparison, lexical views can entail that two different intensities of suffering differ not only in terms of their quantitative intensity but also in terms of a strict moral priority (e.g. that any torture is worse than any amount of barely noticeable pains, all else equal).
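(A rough formal gloss, in my own notation rather than anything from the papers: write $x \succ y$ for "$x$ is a worse bad than $y$", and $n \cdot y$ for $n$ instances of $y$ taken together. An Archimedean view then holds that

$$\forall x \succ y,\ \exists n \in \mathbb{N}: n \cdot y \succeq x,$$

i.e. enough instances of the milder bad can together outweigh the worse one, whereas a lexical view holds that for some pair, such as torture versus a barely noticeable pain,

$$\exists x, y: \forall n \in \mathbb{N},\ x \succ n \cdot y.$$)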

 

Error Two [+ Offsetting and Repugnance + Bonus Comments on Offsetting]

I agree that money and debt are good examples of ‘positive’ and ‘negative’ values that can sometimes be aggregated in the way that offsetting requires; after all, it seems reasonable for some purposes to model debt as negative money. We also seem to agree that ‘happiness’ or ‘positive welfare’ is not ‘negative suffering’ in this sense (cf. Vinding, 2022).

Re: “I figure most people also disagree with suffering offsetting” — I wish this were true, but I’m not sure it is. Then again, perhaps most people also haven’t deeply considered what kind of impartial axiology they would reflectively endorse.

Re: “offsetting in epistemology” — interesting points, though I’m not immediately sold on the analogy. :) (Of course, you don’t claim that the analogy is perfect; “there's overlap in the reasons for why they're wrong, so it's problematic (though not necessarily wrong) to favor them in one field while rejecting them in another field”.)

My impression is that population axiology is widely seen as a ‘pick your poison’ type of choice in which each option has purportedly absurd implications and then people pick the view whose implications seem to them intuitively the least ‘absurd’ (i.e. ‘repugnant’). And, similarly, if/when people introduce e.g. deontological side-constraints on top of a purely consequentialist axiology, it seems that one can model the epistemological process (of deciding whether to subscribe to pure consequentialism or to e.g. consequentialism+deontology) as a process of intuitively weighing up the felt ‘absurdity’ (‘repugnance’) of the implications that follow from these views. (Moreover, one could think of the choice criterion as just “pick the view whose most repugnant implication seems the least repugnant”, with no offsetting of repugnance.)

I would think that my post does not necessarily imply an offsetting view in epistemology. After all, when I called my conclusion — i.e. that “the XVRCs generated by minimalist views are consistently less repugnant than are those generated by the corresponding offsetting views” — “a strong point in favor of minimalist views over offsetting views in population axiology, regardless of one’s theory of aggregation”, this doesn’t need to imply that these XVRC comparisons would “offset” any intuitive downsides of minimalist views. All it says, or is meant to say, is that the offsetting XVRCs are comparatively worse. Of course, one may question (and I imagine you would :) whether these XVRC comparisons are the most relevant — or even a relevant — consideration when deciding whether to endorse an offsetting or a minimalist axiology.

 

Error Three [i.e. the related issues that you raised under that heading]

Re: framing about why this matters — the article begins with the hyperlinked claim that “Population axiology matters greatly for our priorities.” It’s also framed as a response to the XVRC article by Budolfson and Spears, so I trust that my article would be read mostly by people who know what population axiology is and why it matters (or will quickly find out before reading further). I guess an article can only be read independently of other sources once people are sufficiently familiar with the inevitable implicit assumptions that any article makes. (On the forum, I also contextualize my articles with the tags, which one can hover over for their descriptions.)

To say that population axiology doesn’t particularly matter seems like a strong claim given that the field seems to influence people’s views on the (arguably quite fundamentally relevant) question of what things do or don’t have intrinsic (dis)value. But I might agree that the field “is confused” given that so much of population axiology entails assumptions, such as Archimedean aggregative frameworks, that often seem to get a free pass without being separately argued for at all.

Regarding the implicit assumptions of population axiology — and re: my not mentioning political philosophy (etc.) — I would note that the field of population axiology seems to be about ‘isolating’ the morally relevant features of the world in an ‘all else equal’ kind of comparison, i.e. about figuring out what makes one outcome intrinsically better than another. So it seems to me that the field of population axiology is by design focused on hard tradeoffs (thus excluding “win/win approaches”) and on “out of context” situations, with the latter meant to isolate the intrinsically relevant aspects of an outcome and exclude all instrumental aspects — even though the instrumental aspects may in practice be more weighty, which I also explore in the series.

One could think of axiology as the theoretical core question of what matters in the first place, and political philosophy (etc.) as the practical questions of how to best organize society around a given axiology, or around a variety of axiologies interacting / competing / cooperating in the actual complex world. (When people neglect to isolate the core question, I would argue that they often unwittingly conflate intrinsic with instrumental value, which also seems to me a huge flaw in a lot of supposedly isolated thought experiments, because these don’t take the isolation far enough for our practical intuitions to register what the imagined situations are actually supposed to be like. I also explored these things earlier in the series.)

 

[the etymology of value lexicality (‘superiority’)]

My attempt to answer this was actually buried in footnote 8 :)

> “Lexicographic preferences” seem to be named after the logic of alphabetical ordering. Thus, value entities with top priority are prioritized first regardless of how many other value entities there are in the “queue”.

 

[whether a more fitting name for minimalist axiologies is ‘minimalist axiologies’ or ‘minimizing axiologies’]

I think ‘minimalist’ does also work in the other evoked sense that you mentioned, because it seems to me that offsetting axiologies add further assumptions on top of those that minimalist axiologies already make. For example, my series tends to explore welfarist minimalist axiologies that assume only some single disvalue (such as suffering, or craving, or disturbance), with no second value entity that would correspond to a positive counterpart of the first. By comparison, offsetting axiologies such as classical utilitarianism are arguably dualistic in that they assume two different value entities with opposite signs. And monism is arguably a theoretically desirable feature, given the problem of value incommensurability between multiple intrinsic (dis)values.

 

(Thanks also for the comments on upvote norms. I agree with those. Certainly one shouldn’t be unthinkingly misled into assuming that the community wants to see more of whatever gets upvoted-without-comment, because the lack of comments may indeed reflect some problems that one would ideally fix so as to make things easier to more deeply engage with.)
