I think the discussion under "An outside view on having strong views" would benefit from discussing how much normative ethics is analogous to science and how much it is analogous to something more like personal career choice (which weaves together personal interests but still has objective components where research can be done -- see also my post on life goals).
FWIW, I broadly agree with your response to the objection/question, "I’m an anti-realist about philosophical questions so I think that whatever I value is right, by my lights, so why should I care about any uncertainty across theories? Can’t I just endorse whatever views seem best to me?"
As forum readers probably know by now, I think anti-realism is obviously true, but I don't mean the "anything goes" type of anti-realism, so I'm not unsympathetic to your overall takeaway.
Still, even though I agree with your response to the "anything goes" type of anti-realism, I think you'd ideally want to engage more with metaethical uncertainty and how moral reflection works if (the more structure-containing) moral anti-realism is true.
I've argued previously that moral uncertainty and moral realism are in tension.
The main argument in that linked post goes as follows: Moral realism implies the existence of a speaker-independent moral reality. Being morally uncertain means having a vague or unclear understanding of that reality. So there’s a hidden tension: Without clearly apprehending the alleged moral reality, how can we be confident it exists?
In the post, I then discuss three possible responses for resolving that challenge and explain why I think those responses all fail.
What this means is that moral uncertainty almost by necessity implies either metaethical uncertainty (uncertainty between moral realism and moral anti-realism) or confident moral anti-realism. (The trivial exception is where your confidence in moral realism comes from deferring to someone else's expertise, but they haven't yet told you the true object-level morality they believe in.)
That post has been on the EA forum for 3 years and I've not gotten any pushback on it yet, but I've also not seen people start discussing moral uncertainty in a way that doesn't sound subtly off or question-begging to me in light of what I pointed out. Instead, I think one should ideally discuss how to reason under metaethical uncertainty or how to do moral reflection within confident moral anti-realism.
If anyone is interested, I spelled out how I think we would do that here:
The “Moral Uncertainty” Rabbit Hole, Fully Excavated
It's probably one of the two pieces of output I'm most proud of. My earlier posts in the anti-realism sequence covered ideas that I thought many people already understood, but this one let me contribute some new insights. (Joe Carlsmith has written similar stuff and writes and explains things better than I do -- I mention some of his work in the post.)
If someone just wants to read the takeaways and not the underlying arguments for why I think those takeaways apply, here they are:
Among the takeaways from this post is a list of good and bad reasons for deferring (more) to moral reflection. (Note, again, that deferring to moral reflection comes on a spectrum.)
In this context, it’s important to note that deferring to moral reflection would be wise if moral realism is true or if idealized values are “here for us to discover.” In this sequence, I argued that neither of those is true – but some (many?) readers may disagree.
Assuming that I’m right about the flavor of moral anti-realism I’ve advocated for in this sequence, below are my “good and bad reasons for deferring to moral reflection.”
(Note that this is not an exhaustive list, and it’s pretty subjective. Moral reflection feels more like an art than a science.)
Bad reasons for deferring strongly to moral reflection:
- You haven’t contemplated the possibility that the feeling of “everything feels a bit arbitrary; I hope I’m not somehow doing moral reasoning the wrong way” may never go away unless you get into a habit of forming your own views. Therefore, you never practiced the steps that could lead to you forming convictions. Because you haven’t practiced those steps, you assume you’re far from understanding the option space well enough, which only reinforces your belief that it’s too early for you to form convictions.
- You observe that other people’s fundamental intuitions about morality differ from yours. You consider that an argument for trusting your reasoning and your intuitions less than you otherwise would. As a result, you lack enough trust in your reasoning to form convictions early.
- You have an unexamined belief that things don’t matter if moral anti-realism is true. You want to defer strongly to moral reflection because there’s a possibility that moral realism is true. However, you haven’t thought about the argument that naturalist moral realism and moral anti-realism use the same currency, i.e., that the moral views you’d adopt if moral anti-realism were true might matter just as much to you.
Good reasons for deferring strongly to moral reflection:
- You don’t endorse any of the bad reasons, and you still feel drawn to deferring to moral reflection. For instance, you feel genuinely unsure how to reason about moral views or what to think about a specific debate (despite having tried to form opinions).
- You think your present way of visualizing the moral option space is unlikely to be a sound basis for forming convictions. You suspect that it is likely to be highly incomplete or even misguided compared to how you’d frame your options after learning more science and philosophy inside an ideal reflection environment.
Bad reasons for forming some convictions early:
- You think moral anti-realism means there’s no for-you-relevant sense in which you can be wrong about your values.
- You think of yourself as a rational agent, and you believe rational agents must have well-specified “utility functions.” Hence, ending up with under-defined values (which is a possible side-effect of deferring strongly to moral reflection) seems irrational/unacceptable to you.
Good reasons for forming some convictions early:
- You can’t help it, and you think you have a solid grasp of the moral option space (e.g., you’re likely to pass Ideological Turing tests of some prominent reasoners who conceptualize it differently).
- You distrust your ability to guard yourself against unwanted opinion drift inside moral reflection procedures, and the views you already hold feel too important to expose to that risk.
See here. Though the wording could be tidied up a bit.
I read that now and think there's something to the idea that some animals suffer less from death/injury than we would assume (if early death is a statistical near-certainty for those animals and there's nothing they can do to control their luck there, so they'd rather focus on finding mates/getting the mating ritual right, which is about upsides more than downsides). The most convincing example I can think of is mayflies. It seems plausible that mayflies (who only live 1-2 days in their adult form) don’t suffer when they get injured because avoiding injury is a comparatively low priority. (I remember reading that there's behavioral evidence that some adult insects keep eating or mating even as they get seriously physically injured, which supports this point. At the same time, this isn't the case with all insects and may not even be the case for the larval stage of the adult insect in question: Mayfly *nymphs* – the baby stage – live a lot longer before they morph into adult mayflies, and their nymph lifestyle involves less seeking and risk-taking behavior and more maintenance and avoidance behavior.)
This is a bit nitpicky, but I would flag that the above is somewhat orthogonal to the r-/K-selection distinction, and that this distinction doesn't seem to carve reality at its joints particularly well in the first place. Claude claims that sea turtles qualify as K-selected since they don't reach fertility quickly and have long lifespans (50-80+ years). At the same time, they have huge infant mortality. Thinking back to the nature documentaries I watched, I don't recall that the baby turtles seemed aware of predators -- so I'm sympathetic to the view that all that is on their mind is excitedly getting to the ocean for that amazing swimming feeling. Still, since they're long-lived when they succeed, they probably need to learn to look after their limbs and bodies, so I also suspect that, unfortunately, getting eaten by birds or crocodiles is very painful for them. Evolution lacks compassion, so it won't pay the extra cost to only turn on "pain when your limbs get injured" after the turtles have made it through the most difficult first couple of hours or days.
Claude btw also says that bees are K-selected because the parental investment is high -- but that seems like another edge case and some of the logic you mentioned regarding bees and eusociality does seem plausible to me (even if I would put very little weight on it compared to considerations like "when we observe them, do they show signs of distress, and how often?").
Male elephant seals are also K-selected even though only 5-10% of them successfully reproduce. (You might think that the successful ones experience so much pleasure that it's worth all the frustration of the unsuccessful ones -- but that's questionable, and it may also be that being an unsuccessful male elephant seal is particularly unpleasant because their experience may be dominated by status anxiety and sexual frustration.)
Besides species longevity (and the resulting need to look after one's limbs and body), another thing that I think matters a lot for species welfare ranges is whether animals have a prey animal psychology. For animals who are aware enough to understand the concept of predation (hopefully baby turtles will not qualify here just yet?), predation often seems like a massive source of stress and suffering even if the animal is not currently under attack. I've read that some prey animals exhibit signs of PTSD in the mere presence of predators. Mice can die from anxiety/stress when they are trapped in an area where they don’t feel like they can hide. In the book series Animorphs, the idea of being a shrew is portrayed as stress- and fear-dominated (which left quite the impression on me as a kid). While I understand that this is fiction rather than fact, it does seem pretty congruent with how I'd feel if I imagine being a mouse or shrew.
By contrast, while marmots are technically prey animals too, they probably have much less of a prey animal psychology (or at least one that isn't constantly "on") because they can at least feel very safe whenever they go inside their burrows – no snakes high up in the mountains, foxes are too big to fit inside the burrow, and predatory birds are bad at fighting underground so they don't go into the burrows either, even though they'd probably fit in there. (Being a marmot also seems extra cozy because part of their strategy is to slow down their metabolism and just chill during the winter.)
These considerations about the interaction of threats, places of safety, how all of this affects animal psychology, etc., get me to a more general critique of the economics reasoning that underlies some of the methodology here. It seems too simplistic to me, and it seems to misunderstand what suffering is about.
As Anni Leskelä writes in a post on whether social animals suffer more:
> Contrary to the standard biology textbook view, suffering is more than just a signal of a harmful situation. Intense suffering especially is primarily a motivational state that facilitates not only direct avoidance of harmful acts and environments but also complex decisions under threat or risk, long-term learning, social investment and bonding, competition and communicating, all depending on the other aspects of an animal’s evolutionary history, cognition, and lifestyle.
>
> [...] Suffering as a motivational state is typically the mental component of an animal’s homeostatic regulation, i.e. the processes that keep all the relevant physiological variables between healthy parameters. Most things that threaten your homeostasis in a way that humans have historically been able to survive when motivated to do so will cause some kind of suffering: thirst when your blood volume starts to drop, pain when a wound opens and leaves you vulnerable to pathogens and blood loss, sickness when you have ingested toxins and need to expel them. When the threat isn’t currently actual but can pretty reliably be predicted to come true unless you take physiological or behavioural precautions, your species will evolve predictive homeostatic processes. Many of these predictive processes are cognitive or emotional in nature, e.g. people often feel distress in darkness and high places – things that cause absolutely no damage in themselves, but correlate with future homeostatic disturbances.
(What I call "prey animal psycholgy" is an instance of those predictive processes, as are anxiety disorders in humans.) I feel like these interactions between "situations where the animal's reward circuit fires negative/positive rewards" and "how the animal develops negative or positive feelings that are somehow about that reward, but they come up in other situations via learning," call into question the applicability of cost-balancing around reward circuitry and animal reward signals. All of that seems to be overshadowed by some of the ways that second-order negative feelings (negative feelings that are about the positive or negative signals from the reward circuit) seem asymmetric from second-order positive feelings. Namely, there are more ways to not get positive reward than there are ways to get positive reward, so animals will often be hungry, horny, struggle with addiction (and positive reward wearing off/becoming less satisfying), feel like they don't have enough of something, etc, even if there's a sense in which first-order reward signals would be symmetric or equally easily available/avoidable in the environment. Relatedly, there's the (generalized) Anna Karenina principle (both in relation to psychology and biology): there are more ways for things to be off rather than perfect, so it rarely makes sense for animal to feel like all is good the way it is (unless you're a marmot during hibernation!). Things can also go wrong in a mechanistic, biological way and cause chronic pain and conditions for extreme unhappiness. For instance, post-viral malaise and fatigue syndromes (which existed before Covid, possibly 1% of the US population already had significant issues of that sort, and it's more prevalent in world regions where specific illnesses are common, like dengue fever). It seems to me that natural selection doesn't "see" those causes of chronic suffering in an appropriately proportional way, because it's not costly to create the conditions for chronic suffering (it's the opposite -- it would be costly to make the organism safe from malfunctions of that sort). Unfortunately, there's no equally-frequent counterbalancing phenomenon where things happen to coincidentally go particularly well and then the person is chronically super blissed out and chronically invulnerable. (Some people are genetically very lucky or have life go well so that success attracts more success, but it's not nearly equally common. Personally, I also think that the depths of things going wrong are higher than the highs of when they go right, but I acknowledge that this is a contested subjective impression.)
Lastly, in humans, there's also some phenotypic variation in life-history strategies, "fast" and "slow". Fast is associated with things we tend to think of as bad for welfare, such as cluster B personality disorders, low parental investment, unpredictable childhood stress, etc. Sure, cluster B personality disorders are not just associated with increased suicidality and other negative life outcomes; they are also associated with periods of (hypo)mania, and BPD is sometimes said to involve extreme emotional highs that other people don't get to experience. And maybe there's some truth to that. But insofar as we are inclined to think that fast life-history strategies in humans aren't that great for individuals' well-being, this again calls into question why natural selection would somehow manage to make success so good in fast-selected animals, at the species level, that it outweighs all the statistically more common instances where life fails.
(I'm aware that a lot of that was very unrelated to bees -- I ended up going down various detours because they seemed interesting and I wanted to illustrate how little I think of these evolutionary cost-balancing approaches, since there are other concerns that I deem to be way more straightforward and stronger. FWIW, even Zach Groff in his talk seems to flag that we should interpret these things with a lot of caution and that their main takeaway is uncertainty and correcting a previous mistake in a calculation, rather than some concrete/strong takeaway about anything welfare-related in particular.)
> Before I engage further, may I ask if you believe that suffering vs pleasure intensity is comparable on the same axis? Iirc I think I might've read you saying otherwise.
I think they are not on the same axis. (Good that you asked!)
For one thing, I don't think all valuable-to-us experiences are of the same type, and "intensity" makes only some valuable experiences better, not others. (I'm not too attached to this point; my view that positive and negative experiences aren't on the same scale is also based on other considerations.) One of my favorite experiences (easily top ten experience types in my life) is being half asleep cozily in bed knowing that it's way too early to wake up and I just get to sleep in. I think that experience is "pleasurable" in a way, or at least clearly positive/valuable-to-me, but it doesn't benefit from added intensity, and the point of the experience is more about "everything is just right" rather than "wow, this feels so good and I want more of it."
Sex or eating one's favorite food has a compulsive element to it: the arrows of volition point at the content of the experience, wanting more of it. By contrast, cozy half-sleep (or hugging one's life partner in romantic love that is no longer firework-feelings-type love) feels good because the arrows of volition are taking time off. (Or maybe we can say that they loop around and signal that everything is perfect the way it is and our mind gets to rest.)
If all positive experiences resembled each other as "satisfied cravings" the way it works with sex and eating one's favorite food, then I'd be a bit more open to the idea that positive and negative experiences are on the same scale. However, even then, I'd point out -- and that point actually feels a lot stronger to me for compulsive pleasures than it does for "everything is right" types of positive experiences -- that "the value of pleasures," and the great lengths we sometimes go to for them behaviorally, seem to be a bit of a trick of the mind, and that suffering arguably plays a more central role in addictive pleasure-seeking tendencies than pleasure itself does.
(The following is based on copy-pasted text snippets from stuff I wrote elsewhere non-publicly:)
In Narnia, the witch hands one of the children a piece of candy so pleasurable to eat that the child betrays his siblings for the prospect of a second candy. The child felt internally conflicted during that episode: He would surely have walked through lava for a second piece of candy, but not without an intense sense of despair about how his motivational system had been broken by the evil witch.
We can distinguish between:
1. People who reflectively endorse the parts of their psychology that make superpleasures viscerally appealing (and who would consider walking through lava for a big enough superpleasure worth it).
2. People who feel the visceral pull of superpleasures but would rather not feel compelled to pursue them when doing so comes at great cost.
By 1. I don’t mean that one would be walking into lava joyously. Even the most ardent personal hedonists are going to feel uneasy before they actually step into the lava. But the people to whom 1. applies endorse the parts of their psychology that make superpleasures viscerally appealing. By contrast, people to whom 2. applies would rather not feel compelled to pursue superpleasures when they lie behind a river of lava. People familiar with addiction can probably relate to the sense of “Why am I doing this?” that befalls someone when they find themselves going through great inconveniences to fuel their addiction.
So, my point is that it's an added step, an extra decision, to consider pleasures valuable to the degree that experiencing them triggers our visceral and addictive sense of "omg I want more of that." (People's vulnerability to addiction also differs. Does that mean addiction-prone individuals experience stronger pleasures, or are their minds merely more susceptible to developing cravings towards certain pleasures? Is there even a difference here for functionalists? If there isn't, this would illustrate that there's something problematic about the idea of an objective scale on the value of experiences that's properly and universally linked to correct human behavior in pleasure-suffering tradeoffs.) I think it's a perfectly rational stance to never want to get addicted to pleasures enough to want to walk through lava for the prospect of intense and prolonged (think: centuries of bliss) pleasures. This forms a counterargument to the idea that we can just measure/elicit via experiments, "how much does this person want to trade off pleasure vs pain behaviorally" to see how they compare on some objective scale.
So far, I spoke of "addictive pleasure-seeking." I think there's a second motivational mode where we pursue things not because we feel cravings in the moment, but because we have a sophisticated world model (unlike other animals) and have decided that there are things within that world model that we'd like to pursue even if they may not lead to us having the most pleasure. The interesting thing about that reflection-based (as opposed to cravings-/needs-based) mode of motivation is that it's very open-ended. People are agentic to different degrees and they set for themselves different types of goals. Some people don't pursue personal hedonic pleasures but they have long-term plans related to existentialist meaning like the EA mission, or protecting/caring for loved ones. (We can imagine extreme examples where people voluntarily go to prison for a greater cause, disproving the notion that everyone is straightforwardly motivated by personal pleasure.)
There's an inherent tension in the view that hedonism is the rational approach to living. Part of the appeal of hedonism is that we just want pleasure, but adopting an optimization mindset toward it leads to a kind of instrumentalization of everything "near term." If you set the life goal of maximizing the number of your happy days, the rational way to go about your life probably implies treating the next decades as "instrumental only." To a first approximation, the only thing that matters is optimizing the chances of obtaining indefinite life extension (potentially leading to more happy days). Through adopting an outcome-focused optimizing mindset, seemingly self-oriented concerns such as wanting to maximize the number of happiness moments turn into an almost other-regarding endeavor. After all, only one’s far-away future selves get to enjoy the benefits – which can feel essentially like living for someone else.
To be a good hedonist, someone has to disentangle the part of their brain that cares about short-term pleasure from the part of them that does long-term planning. In doing so, they now prove that they’re capable of caring about something other than their pleasure. It is now an open question whether they use this disentanglement capability for maximizing pleasure or for something else that motivates them to act on long-term plans (such as personal meaning like the EA mission, or protecting/caring for loved ones). Relatedly, even if a person decided that they wanted self-oriented happiness, it is an open question whether they go for the rationalist idea of wanting to maximize happy life years, or for something more holistic and down to earth like wanting to make some awesome meaningful memories with loved ones without obsessing over longevity, and considering life "well-lived" if one has finished one's most important life projects, even if one only makes it into one's late forties or fifties or sixties, or whatever. (The ending of "The Good Place" comes to mind for me, for those who've seen the series, though the people in there have lived longer lives compared to the world's population at present.)
And, sure, we can say similar things about reducing suffering: it's perfectly possible for people to give their own suffering comparatively little weight compared to things like achieving a mission that one deems sacred. (But there's always something relevantly bad about suffering, because even in a mind that has accepted suffering as a necessary condition for achieving other goals, there are parts of the mind that brace against the suffering in the moment it occurs.) I think suffering is what matters by default/in the absence of other overriding considerations, but when someone decides for themselves that there are things that matter to them more than their own suffering, then that's something we should definitely respect.
The thing with nonhuman animals like bees is that they lack the capacity to decide those things, which is why it's under-defined how they would decide if they could think about it. Treating them the suffering-focused way seems safest/most parsimonious to me, but I don't necessarily think that treating them with hedonist intuitions (and trying to guess at where they would place the hedonic zero point, which is only really a meaningful concept if we grant some of the premises of hedonist axiology) is contradicting something obvious that's happening inside the bees. Personally, I find it "less parsimonious/less elegant," but that's a subjective judgment that's probably influenced by idiosyncratic features of my psychology (perhaps because I'm particularly fond of "everything is right" types of positive experiences, and not adventure-seeking). I mostly just think "bee values" are under-defined on this topic and that there's no "point of view of the universe."
> For eusocial insects like bees in particular, evolution ought to incentivize them to have net positive lives as long as the hive is doing well overall.
There might be a way to salvage what you're saying, but I think this stuff is tricky.
I voted 65%, but I think either anti-realism is obviously true or we're using words differently.
To see whether we might be using words differently, see this post and this one.
To see why I still voted 65% on "objective" and not 0%, see this post. (Though, on the most strict meanings of "objective," I would put 0%.)
If we agree on what moral realism means, here's the introduction to the rest of my sequence on why moral realism is almost certainly false.
> Thus, if consumers viewed plant-based meat and cultivated meat as perfect substitutes, cultivated meat would have a net negative effect since plant-based alternatives perform better both environmentally and in terms of animal welfare (albeit marginally for the latter).
"Marginally for the latter" -- that still seems like good news for people who care primarily about animal wellbeing. The way I see it, the environment is not that good a thing anyway (wild animal suffering makes it negative according to my values, and even if others care less about it or care more about aesthetic stuff, surely it moves it quite a lot of the way towards being just neutral), plus there are potentially ways to reverse the effect of greenhouse gas emissions. By contrast, you cannot reverse the direct suffering caused in factory farming.
Imagine delegates of the views you actually find significantly appealing. (At that level, I think the original post here is correct and your delegates will either use all their caring capacity for helping insects, or insects will be unimportant to them.) Instead of picking one of these delegates, you go with their compromise solution, which might look something like, "Ask yourself if you have a comparative advantage at helping insects -- if not, stay on the lookout for low-effort ways to help insects and low-effort ways to avoid causing great harm to the cause of helping insects, but otherwise do things that other delegates would prioritize where you have more of a comparative advantage."
If you view all of morality as "out there" and objective, this approach might seem a bit unsatisfying because -- on that view -- either insects matter, or they don't. But if Brian Tomasik is right about consciousness and if morality even as an effective altruist is still quite a lot about finding out "What motivates me to get up in the morning?," rather than "What's the one objectively important aim that all effective altruists should pursue?," then saulius's point goes through, IMO.
You can have a moral parliament view not just as an approach to moral uncertainty, but also as your approach to undecidedness about what to do in light of all the arguments and appeals you find yourself confronted with. There's no guarantee that the feeling of undecidedness will go away under ideal conditions for moral reflection, in which case it would probably feel arbitrary and unsatisfying to go with an overall solution that says "insects matter by far the most" or "insects hardly matter at all as a cause area."
I think there are two competing failure modes:
(1) The epistemic community around EA, rationality, and AI safety should stay open to criticism of key empirical assumptions (like the level of risk from AI, risks of misalignment, etc.) in a healthy way.
(2) We should still condemn people who adopt contrarian takes with unreasonable-seeming levels of confidence and then take actions based on them that we think are likely doing damage.
In addition, there's possibly also a question of "how much do people who benefit from AI safety funding and AI safety association have an obligation to not take unilateral actions that most of the informed people in the community consider negative." (FWIW I don't think the obligation here would be absolute even if Epoch had been branded as centrally 'AI safety,' and I acknowledge that the branding issue seems contested; also, it wasn't Jamie [edit: Jaime] the founder who left in this way, and of the people who went off to found this new org, Matthew Barnett, for instance, has been really open about his contrarian takes, so insofar as Epoch's funders had concerns about the alignment of employees at Epoch, it was also -- to some degree, at least -- on them to ask for more information or demand some kind of security guarantee if they felt worried. And maybe this did happen -- I'm just flagging that I don't feel like we onlookers necessarily have the info, and so it's not clear whether anyone has violated norms of social cooperation here or whether we're just dealing with people getting close to the boundaries of unilateral action in a way that is still defensible because they've never claimed to be more aligned than they were, never accepted funding that came with specific explicit assumptions, etc.)
On reflection, it's certainly possible that I was assuming we had more evidence on suffering/wellbeing in nature (and in bees specifically) than we do. I haven't looked into it too much and it intuitively felt to me like we could probably do better than the evolutionary reasoning stuff, but maybe the other available lines of evidence are similarly brittle.
That might be right -- I didn't read the original post and I commented on your post not because I wanted to defend a particular side in the bee debate, but rather because I always found the evolutionary welfare arguments fascinating but dubious. I somehow decided to use this opportunity to get closer to the bottom of them. :)