antimonyanthony's Shortform


Some reasons not to primarily argue for veganism on health/climate change grounds

I've often heard animal advocates claim that since non-vegans are generally more receptive to arguments from health benefits and reducing climate impact, we should prioritize those arguments, in order to reduce farmed animal suffering most effectively.

On its face, this is pretty reasonable, and I personally don't care intrinsically about how virtuous people's motivations for going vegan are. Suffering is suffering, no matter its sociological cause.

But there are some reasons I'm nervous about this approach, at least if it comes at the opportunity cost of moral advocacy. None of these are original to me, but I want to summarize them here since I think this is a somewhat neglected point:

  1. Plausibly many who are persuaded by the health/climate change (CC) arguments won't want to make the full change to veganism, so they'll replace beef with chicken and fish. Both are evidently less bad for one's health and for climate impact, but because these animals are so much smaller and have fewer welfare protections, the switch causes far more suffering per calorie (see the sketch after this list). More speculatively, there could also be a switch to insect consumption.
  2. Health/CC arguments don't apply to reducing wild animal suffering, and indeed emphasizing environmental motivations for going vegan might strengthen support for conservation for its own sake, independent of individual animals' welfare. (To be fair, moral arguments can also backfire if the emphasis is on general care for animals, rather than specifically on preventing extreme suffering.)
  3. Relatedly, health/CC arguments don't motivate one to oppose other potential sources of suffering in voiceless sentient beings, like reckless terraforming and panspermia, or unregulated advanced simulations. This isn't to say all anti-speciesists will make that connection, but caring about animals themselves, rather than merely avoiding their exploitation for human-centric reasons, seems more likely to increase concern for other minds.
  4. While the evidence re: CC seems quite robust, nutrition science is super uncertain and messy. Based on both this prior about the field and suspicious convergence concerns, I'd be surprised if a scientific consensus established veganism as systematically better for one's health than alternatives. That said, I'd also be very surprised about a consensus that it's worse, and clearly even primarily ethics-based arguments for veganism should also clarify that it's feasible to live (very) healthily on a vegan diet.
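
To make the per-calorie claim in point 1 a bit more explicit, here's a minimal back-of-the-envelope sketch (the symbols are just my own shorthand, not figures from any study): suppose a farmed animal yields $C$ edible calories, spends $D$ days in farmed conditions, and experiences average suffering intensity $I$ over that time. Then the farmed-animal suffering behind each calorie is roughly

$$\text{suffering per calorie} \approx \frac{D \cdot I}{C}.$$

Chickens and fish have a far smaller $C$ than cows, while $D \cdot I$ is not obviously smaller (arguably it's larger, given weaker welfare protections), so shifting the same calories from beef to chicken or fish multiplies the animal-days of farmed life per meal.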

Quick comment. With respect to your first point, this has always struck me as one of the better reasons why non-ethical arguments should primarily be avoided when making the case for veganism. However, after reading Tobias Leenaert's 'How to Create a Vegan World: A Pragmatic Approach', I've become a bit more agnostic on this. He notes a few studies from The Humane League showing that red-meat reducers/avoiders tend to eat less chicken than your standard omnivore. He also references a few studies from Nick Cooney's book, Veganomics, which covers some of this on pp. 107-111. Combined with the overall impact non-ethical vegans could have on supply/demand for vegan products (and their improvement in quality), I've become a bit less worried about this reason.

I think your other reasons are all extremely important and underrated, though, so I still lean overall toward relying on the ethical argument when possible :)

Wow, that's promising news! Thanks for sharing.

The Repugnant Conclusion is worse than I thought

At the risk of belaboring the obvious to anyone who has considered this point before: the RC glosses over the exact content of the happiness and suffering that get summed into the “welfare” quantities defining world A and world Z. In world A, each life with welfare 1,000,000 could, at one extreme, consist purely of (a) good experiences summing in intensity to level 1,000,000, or at the other, (b) good experiences summing to 1,000,000,000 minus bad experiences summing (in absolute value) to 999,000,000. Similarly, each of the lives of welfare 1 in world Z could consist of (a) purely level 1 good experiences, or (b) level 1,000,001 good experiences minus level 1,000,000 bad experiences.

To my intuitions, it’s pretty easy to accept the RC if our conception of worlds A and Z is the pair (a, a) from the (of course non-exhaustive) possibilities above, even more so for (b, a). However, the RC is extremely unpalatable if we consider the pair (a, b). This conclusion, which is entailed by any plausible non-negative[1] total utilitarian view, is that a world of tremendous happiness with absolutely no suffering is worse than a world of many beings each experiencing just slightly more happiness than those in the first, but along with tremendous agony.
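
To spell out the entailment with the numbers above (a minimal sketch; the population sizes $n$ and $m$ are placeholders of my own, not part of the standard RC setup): suppose world A contains $n$ lives and world Z contains $m$ lives, and total welfare is just happiness minus suffering, summed across lives. Reading A as (a) and Z as (b) gives

$$W_A = n \cdot (1{,}000{,}000 - 0) = 1{,}000{,}000\,n, \qquad W_Z = m \cdot (1{,}000{,}001 - 1{,}000{,}000) = m.$$

So the total view ranks Z above A whenever $m > 1{,}000{,}000\,n$: once Z's population is more than a million times larger, the astronomical agony in Z is outweighed on paper by the marginally larger happiness that accompanies it.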

To drive home how counterintuitive that is, we can apply the same reasoning often applied against negative utilitarian (NU) views: Suppose the level 1,000,001 happiness in each being in world Z is compressed into one millisecond of some super-bliss, contained within a life of otherwise unremitting misery. There doesn’t appear to be any temporal ordering of the experiences of each life in world Z such that this conclusion isn’t morally absurd to me. (Going out with a bang sounds nice, but not nice enough to make the preceding pure misery worth it; remember this is a millisecond!) This holds even after accounting for the possible scope neglect involved in considering the massive number of lives in world Z. Indeed, multiplying these lives seems to make the picture more horrifying, not less.

Again, at the risk of sounding obvious: The repugnance of the RC here is that on total non-NU axiologies, we’d be forced to consider the kind of life I just sketched a “net-positive” life morally speaking.[2] Worse, we're forced to consider an astronomical number of such lives better than a (comparatively small) pure utopia.


[1] “Negative” here includes lexical and lexical threshold views.

[2] I’m setting aside possible defenses based on the axiological importance of duration. This is because (1) I’m quite uncertain about that point, though I share the intuition, and (2) it seems any such defense rescues NU just as well. I.e. one can, under this principle, maintain that 1 hour of torture-level suffering is impossible to morally outweigh, but 1 millisecond isn’t.

This conclusion, which is entailed by any plausible non-negative[1] total utilitarian view, is that a world of tremendous happiness with absolutely no suffering is worse than a world of many beings each experiencing just slightly more happiness than those in the first, but along with tremendous agony.

It seems to me that you're kind of rigging this thought experiment when you define an amount of happiness that's greater than an amount of suffering, but you describe the happiness as "slight" and the suffering as "tremendous", even though the former is larger than the latter.

I don't call the happiness itself "slight"; I call it "slightly more" than the suffering (edit: and also just slightly more than the happiness per person in world A). I acknowledge the happiness is tremendous. But it comes along with just barely less tremendous suffering. If that's not morally compelling to you, fine, but really the point is that there appears (to me at least) to be quite a strong moral distinction between 1,000,001 happiness minus 1,000,000 suffering, and 1 happiness.

Crosspost: "Tranquilism Respects Individual Desires"

I wrote a defense of an axiology on which an experience is perfectly good to the extent that it is free of craving for change. This defense follows in part from a reductionist view of personal identity, which in EA circles is usually taken to support total symmetric utilitarianism, but I argue that it lends support to a form of negative utilitarianism.

Some vaguely clustered opinions on metaethics/metanormativity

I'm finding myself slightly more sympathetic to moral antirealism lately, but I still give most of my credence to a form of realism that would not be labeled "strong" or "robust." There are several complicated propositions I find plausible that are in tension:

1. I have a strong aversion to arbitrary or ad hoc elements in ethics. Practically this cashes out as things like: (1) rejecting any solutions to population ethics that violate transitivity, and (2) being fairly unpersuaded by solutions to fanaticism that round down small probabilities or cap the utility function.

2. Despite this, I do not intrinsically care about the simplicity of a moral theory, at least for some conceptions of "simplicity." It's quite common in EA and rationalist circles to dismiss simple or monistic moral theories as attempting to shoehorn the complexity of human values into one box. I grant that I might unintentionally be doing this when I respond to critiques of the moral theory that makes most sense to me, which is "simple." But from the inside I don't introspect that this is what's going on. I would be perfectly happy to add some complexity to my theory to avoid underfitting the moral data, provided this isn't so contrived as to constitute overfitting. The closest cases I can think of where I might need to do this are in population ethics and fanaticism. I simply don't see what could matter morally in the kinds of things whose intrinsic value I reject: rules, virtues, happiness, desert, ... When I think of these things, and the thought experiments meant to pump one's intuitions in their favor, I do feel their emotional force. It's simply that I am more inclined to think of them as just that: emotional, or game theoretically useful constructs that break down when you eliminate bad consequences on conscious experience. The fact that I may "care" about them doesn't mean I endorse them as relevant to making the world a better place.

3. Changing my mind on moral matters doesn't feel like "figuring out my values." I roughly know what I value. Many things I value, like a disproportionate degree of comfort for myself, are things I very much wish I didn't value, things I don't think I should value. A common response I've received is something like: "The values you don't think you 'should' have are simply ones that contradict stronger values you hold. You have meta-preferences/meta-values." Sure, but I don't think this has always been the case. Before I learned about EA, I don't think it would have been accurate to say I really did "value" impartial maximization of good across sentient beings. This was a value I had to adopt, to bring my motivations in line with my reasons. Encountering EA materials did not feel at all like "Oh, you know what, deep down this was always what I would've wanted to optimize for, I just didn't know I would've wanted it."

4. The question "what would you do if you discovered the moral truth was to do [obviously bad thing]?" doesn't make sense to me, for certain inputs of [obviously bad thing], e.g. torturing all sentient beings as much as possible. For extreme inputs of that sort, the question is similar to "what would you do if you discovered 2+2=5?" For less extreme inputs, where it's plausible that I simply haven't thought through ethics enough and so can imagine the hypothetical while merely finding it unlikely, the question does make sense, and I see nothing wrong with saying "yes," i.e. that I would follow the moral truth. I suspect many antirealists do this all the time, radically changing their minds on moral questions due to considerations other than empirical discoveries; they would not be content to retain their previous stance and say "screw the moral truth."

5. I do not expect that artificial superintelligence would converge on The Moral Truth by default. Even if it did, the convergence might be too slow to prevent catastrophes. But I also doubt humans will converge on this either. Both humans and AIs are limited by our access only to our "own" qualia, and indeed our own present qualia. The kind of "moral realism" I find plausible with respect to this convergence question is that convergence to moral truth could occur for a perfectly rational and fully informed agent, with unlimited computation and - most importantly - subjective access to the hypothetical future experiences of all sentient beings. These conditions are so idealized that I am probably as pessimistic about AI as any antirealist, but I'm not sure yet if they're so idealized that I functionally am an antirealist in this sense.