In this topic, you share a text relevant to EA, such as an article, essay, blog post, book, or academic paper. I tell you three errors I find in it.
I’m offering to help EA by finding errors that people didn’t know about. Please only submit texts for which knowing about the errors would be valuable to you. I hope this will be useful and appreciated.
Details
If I post errors for your text, you must choose to debate one of the errors with me or choose not to debate. A one-sentence reply explicitly opting out of debating is fine, but silence violates the game rules. Other feedback, such as which errors you agree or disagree with, is also welcome.
I only guarantee to do this for up to 5 submissions made within 5 days. First come, first served. Limit 1 per person.
You must have already read the text in full yourself and like it a lot. (If you skipped reading notes or appendices, that’s fine, but state it.)
I must be able to find a free, electronic copy of the text. I can frequently find this for paywalled texts. If you already have a link or the file itself, please send it to me (DMs are fine).
If I can’t find three errors, I’ll say so. I don’t expect this to come up much; if it does, my expectation was wrong. I have two beliefs here. First, I’ll be able to find errors in texts that I disagree with. Second, people here are likely to share stuff I have disagreements with. To give a number, I predict finding three errors for at least 80% of texts.
I expect prose texts over 1,000 words long that say something reasonably substantial and complex; otherwise I may aim to find fewer than three errors.
I’m not agreeing to read the whole text. My plan is to read enough to find three errors, then stop. If something is addressed in a part I didn’t read, you can tell me and I’ll respond. I have experience replying based on partial reading, and it’s rarely a problem.
I will only post errors that I consider important. If you consider an error unimportant, let me know and I’ll explain my perspective. You’re welcome to do that before either choosing an error to debate or choosing not to debate. You may want to state why you think it’s unimportant so I can address your reasoning, but I can explain its importance regardless.
Your EA Forum account must have been created in September 2022 or earlier.
Bonuses
Maybe this will inspire someone else to host the same game or a similar game.
The reply below can serve as some examples of replying to the first error (well, the first three).
Thanks for the screencast. I listened to it — with a ‘skip silence’ feature to skip the typing parts — instead of watching, so I may have missed some points. But I’ll comment on some points that felt salient to me. (I opt out of debating due to lack of time, as it seems that we may not have that many relevantly diverging perspectives to try to bridge.)
Good catch; the rough definition that I used for Archimedean views — that “quantity can always substitute for quality” — was actually from this open access version.
Here, the main point (for my examination of Archimedean and lexical views) is just that Archimedean views always imply the “can add together” part (i.e. aggregation & outweighing), and that Archimedean views essentially deny the existence of any “strict” morally relevant qualitative differences over and above the quantitative differences (of e.g. two intensities of suffering). By comparison, lexical views can entail that two different intensities of suffering differ not only in terms of their quantitative intensity but also in terms of a strict moral priority (e.g. that any torture is worse than any amount of barely noticeable pains, all else equal).
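To put my reading of that contrast slightly more formally (my own rough sketch, not notation from the article, with $v$ standing for aggregate disvalue): an Archimedean view holds that for any severe bad $x$ and any milder bad $y$,

$$\exists\, n \in \mathbb{N} :\; n \cdot v(y) > v(x),$$

i.e. enough instances of the milder bad can always add up to outweigh the severe one. A lexical view denies this for at least some pairs, e.g. no number $n$ of barely noticeable pains aggregates to something worse than an instance of torture, all else equal.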
I agree that money and debt are good examples of ‘positive’ and ‘negative’ values that can sometimes be aggregated in the way that offsetting requires; after all, it seems reasonable for some purposes to model debt as negative money. We also seem to agree that ‘happiness’ or ‘positive welfare’ is not ‘negative suffering’ in this sense (cf. Vinding, 2022).
Re: “I figure most people also disagree with suffering offsetting” — I wish that were true, but I’m not sure it is. Perhaps most people also haven’t deeply considered what kind of impartial axiology they would reflectively endorse.
Re: “offsetting in epistemology” — interesting points, though I’m not immediately sold on the analogy. :) (Of course, you don’t claim that the analogy is perfect; “there's overlap in the reasons for why they're wrong, so it's problematic (though not necessarily wrong) to favor them in one field while rejecting them in another field”.)
My impression is that population axiology is widely seen as a ‘pick your poison’ type of choice in which each option has purportedly absurd implications and then people pick the view whose implications seem to them intuitively the least ‘absurd’ (i.e. ‘repugnant’). And, similarly, if/when people introduce e.g. deontological side-constraints on top of a purely consequentialist axiology, it seems that one can model the epistemological process (of deciding whether to subscribe to pure consequentialism or to e.g. consequentialism+deontology) as a process of intuitively weighing up the felt ‘absurdity’ (‘repugnance’) of the implications that follow from these views. (Moreover, one could think of the choice criterion as just “pick the view whose most repugnant implication seems the least repugnant”, with no offsetting of repugnance.)
I would think that my post does not necessarily imply an offsetting view in epistemology. After all, when I called my conclusion — i.e. that “the XVRCs generated by minimalist views are consistently less repugnant than are those generated by the corresponding offsetting views” — “a strong point in favor of minimalist views over offsetting views in population axiology, regardless of one’s theory of aggregation”, this doesn’t need to imply that these XVRC comparisons would “offset” any intuitive downsides of minimalist views. All it says, or is meant to say, is that the offsetting XVRCs are comparatively worse. Of course, one may question (and I imagine you would :) whether these XVRC comparisons are the most relevant — or even a relevant — consideration when deciding whether to endorse an offsetting or a minimalist axiology.
Re: framing about why this matters — the article begins with the hyperlinked claim that “Population axiology matters greatly for our priorities.” It’s also framed as a response to the XVRC article by Budolfson and Spears, so I trust that my article would be read mostly by people who know what population axiology is and why it matters (or quickly find out before reading fully). I guess an article can only be read independently of other sources after people are first sufficiently familiar with some inevitable implicit assumptions an article makes. (On the forum, I also contextualize my articles with the tags, which one can hover over for their descriptions.)
To say that population axiology doesn’t particularly matter seems like a strong claim given that the field seems to influence people’s views on the (arguably quite fundamentally relevant) question of what things do or don’t have intrinsic (dis)value. But I might agree that the field “is confused” given that so much of population axiology entails assumptions, such as Archimedean aggregative frameworks, that often seem to get a free pass without being separately argued for at all.
Regarding the implicit assumptions of population axiology — and re: my not mentioning political philosophy (etc.) — I would note that the field of population axiology seems to be about ‘isolating’ the morally relevant features of the world in an ‘all else equal’ kind of comparison, i.e. about figuring out what makes one outcome intrinsically better than another. So it seems to me that the field of population axiology is by design focused on hard tradeoffs (thus excluding “win/win approaches”) and on “out of context” situations, with the latter meant to isolate the intrinsically relevant aspects of an outcome and exclude all instrumental aspects — even though the instrumental aspects may in practice be more weighty, which I also explore in the series.
One could think of axiology as the theoretical core question of what matters in the first place, and political philosophy (etc.) as the practical questions of how to best organize society around a given axiology or a variety of axiologies interacting / competing / cooperating in the actual complex world. (When people neglect to isolate the core question, I would argue that people often unwittingly conflate intrinsic with instrumental value, which also seems to me a huge flaw in a lot of supposedly isolated thought experiments because these don’t take the isolation far enough for our practical intuitions to register what the imagined situations are actually supposed to be like. I also explored these things earlier in the series.)
My attempt to answer this was actually buried in footnote 8 :)

> “Lexicographic preferences” seem to be named after the logic of alphabetical ordering. Thus, value entities with top priority are prioritized first regardless of how many other value entities there are in the “queue”.
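As a small illustration of that ordering logic (my own sketch, not from the article or the footnote; it just relies on the fact that Python compares tuples lexicographically):

```python
# Rank outcomes by (severe bads, mild bads), where lower is less bad.
# Tuple comparison is lexicographic: the first element settles the ranking
# before the second is even considered, like alphabetical ordering.
outcome_a = (1, 0)       # one instance of torture, no mild pains
outcome_b = (0, 10**9)   # no torture, a billion barely noticeable pains

# Under this lexical ranking, outcome_b counts as less bad, no matter how
# large the number in its second slot grows.
print(min(outcome_a, outcome_b))  # -> (0, 1000000000)
```

The same priority structure is what the footnote’s “queue” metaphor is pointing at: the top-priority entity is compared first, regardless of how much sits further back in the queue.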
I think ‘minimalist’ does also work in the other evoked sense that you mentioned, because it seems to me that offsetting axiologies add further assumptions on top of those that are entailed by both the offsetting and the minimalist axiologies. For example, my series tends to explore welfarist minimalist axiologies that assume only some single disvalue (such as suffering, or craving, or disturbance), with no second value entity that would correspond to a positive counterpart to this first one (cf. Vinding, 2022). By comparison, offsetting axiologies such as classical utilitarianism are arguably dualistic in that they assume two different value entities with opposite signs. And monism is arguably a theoretically desirable feature given the problem of value incommensurability between multiple intrinsic (dis)values.
(Thanks also for the comments on upvote norms. I agree with those. Certainly one shouldn’t be unthinkingly misled into assuming that the community wants to see more of whatever gets upvoted-without-comment, because the lack of comments may indeed reflect some problems that one would ideally fix so as to make things easier to more deeply engage with.)