Brian_Tomasik

"Gratitude" just doesn't seem like compelling evidence in itself that the grateful individual has been made better off

What if the individual says that after thinking very deeply about it, they believe their existence genuinely is much better than not having existed? If we're trying to be altruistic toward their own values, presumably we should also value their existence as better than nothingness (unless we think they're mistaken)?

One could say that if they don't currently exist, then their nonexistence isn't a problem. It's true that their nonexistence doesn't cause suffering, but it does make impartial-altruistic total value lower than it would otherwise be, if we consider their existence to be positive.

This page says: "The APRs for unsecured credit cards designed for consumers with bad credit are typically in the range of about 25% to 36%." That's not too far from the 40% implied by waiting a year for $140 instead of taking $100 now. If you have almost no money and would otherwise need such a loan, taking the $100 now may be reasonable.
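To make the comparison explicit, here's a rough arithmetic sketch. Only the $100/$140 amounts and the 25-36% APR range come from the discussion above; the "interest avoided" framing is a simplification of my own, not anything stated in the original comment.

```python
# Rough, illustrative arithmetic for the $100-now vs. $140-in-a-year choice.
# The 25-36% APR range and the $100/$140 amounts come from the discussion above;
# treating the decision as "interest avoided" is just one simplified comparison.

now_amount = 100.0      # take $100 today
later_amount = 140.0    # or $140 in one year

# Waiting a year is equivalent to earning this return on the $100:
implied_return = later_amount / now_amount - 1.0   # 0.40, i.e. 40%

# If taking the cash now lets you avoid carrying a $100 balance on a
# bad-credit card for a year, the interest you avoid is roughly:
apr_low, apr_high = 0.25, 0.36
avoided_low = now_amount * apr_low    # $25
avoided_high = now_amount * apr_high  # $36

print(f"Return from waiting: {implied_return:.0%}")
print(f"Interest avoided by taking the $100 now: ${avoided_low:.0f}-${avoided_high:.0f}")
# 40% vs. roughly 25-36%: the gap is small, and card fees or compounding
# could narrow it further, so taking the money now isn't obviously irrational.
```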

There are claims that "Some 56% of Americans are unable to cover an unexpected $1,000 bill with savings", which suggests that a lot of people are indeed pretty close to financial emergency, though I don't know how true that is. Most people don't have many non-401k investments, and they roughly live paycheck to paycheck.

I also think people aren't pure money maximizers. They respond differently in different situations based on social norms and how things are perceived. If you get $100 that seems like a random bonus, it's socially acceptable to just take it now rather than waiting for $140 next year. But it doesn't look good to take out big credit-card loans that you'll have trouble repaying. It's normal to contribute to a retirement account. And so on. People may value being normal and not just how much money they actually have.

That said, most people probably don't think through these issues at all and do what's normal on autopilot. So I agree that the most likely explanation is lack of reflectiveness, which was your original point.

Very plausibly none of these possibilities would meet the lexical threshold, except with very very low probability.

I'm confused. :) War has a rather high probability of extreme suffering. Perhaps ~10% of Russian soldiers in Ukraine have been killed as of July 2022. Some fraction of fighters in tanks die by burning to death:

The kinetic energy and friction from modern rounds cause molten metal to splash everywhere in the crew compartment and ignite the air into a fireball. You would die by melting.

You’ll hear of a tank cooking off as its ammunition explodes. That doesn’t happen right away. There’s lots to burn inside a tank other than the tank rounds. Often, the tank will burn for quite a while before the tank rounds explode.

It is sometimes a slow, horrific death if one can’t get out in time, or a very quick one. We had sidearms and all agreed that if our tank was burning and we were caught inside and couldn’t get out, we would use a round on ourselves. That’s how bad it was.

Some workplace accidents also produce extremely painful injuries.

I don't know what fraction of people in labor wish they were dead, but probably it's not negligible: "I remember repeatedly saying I wanted to die."

These people almost never beg to be killed

It may not make sense to beg to be killed, because the doctors wouldn't grant that wish.

Your reply is an eloquent case for your view. :)

This is one reason to pay extra attention to cases of near-simultaneous comparisons

In cases of extreme suffering (and maybe also extreme pleasure), it seems to me there's an empathy gap: when things are going well, you don't truly understand how bad extreme suffering is, and when you're in severe pain, you can't properly care about large volumes of future pleasure. When the suffering is bad enough, it's as if a different brain takes over that can't see things from the other perspective, and vice versa for the pleasure-seeking brain. This seems closer to the case of "univocal viewpoints" that you mention.

I can see how for moderate pains and pleasures, a person could experience them in succession and make tradeoffs while still being in roughly the same kind of mental state without too much of an empathy gap. But the fact that those experiences are moderate and exchangeable is precisely why I don't think the suffering in such cases is that morally noteworthy.

we can better trust people's self-benevolence than their benevolence towards others

Good point. :) OTOH, we might think it's morally right to have a more cautious approach to imposing suffering on others for the sake of positive goods than we would use for ourselves. In other words, we might favor a moral view that's different from MacAskill's proposal to imagine yourself living through every being's experience in succession.

Strong rejection of interpersonal comparisons is also used to argue that relieving one or more pains can't compensate for losses to another individual.

Yeah. I support doing interpersonal comparisons, but there's inherent arbitrariness in how to weigh conflicting preferences across individuals (or sufficiently different mental states of the same individual), and I favor giving more weight to the extreme-suffering preferences.

But if we're in the business of helping others for their own sakes rather than ours, I don't see the case for excluding either one's concern from our moral circle.

That's fair. :) In my opinion, there's just an ethical asymmetry between creating a mind that desperately wishes not to exist versus failing to create a mind that desperately would be glad to exist. The first one is horrifying, while the second one is at most mildly unfortunate. I can see how some people would consider this a failure to impartially consider the preferences of others for their own sakes, and if my view makes me less "altruistic" in that sense, then I'm ok with that (as you suspected). My intuition that it's wrong to allow creating lots of extra torture is stronger than my intuition that I should be an impartial altruist.

If it's not mainly about others and their perspectives, why care so much about shaping (some of) their lives and attending to (some of) their concerns?

The extreme-suffering concerns are the ones that speak to me most strongly.

seems at odds to me with my idea of impartial benevolence, which I would identify more with trying to be a friend to all

Makes sense. While raw numbers count, it also matters to me what the content of the preference is. If 99% of individuals passionately wanted to create paperclips, while 1% wanted to avoid suffering, I would mostly side with those wanting to avoid suffering, because that just seems more important to me.

You're right that there's probably not a strict logical relationship between those things. Also, I should note that I have a poor understanding of the variety of different type-B views. What I usually have in mind as "type B" is the view that the connection between consciousness and brain processing is only something we can figure out a posteriori, by noticing the correlation between the two. If you hold that view, it presumably means you think consciousness is a definite thing that we discover introspectively. For example, we can say we're conscious of an apple in front of us but are not conscious of a very fast visual stimulus. Since we generally assume most of these distinctions between conscious and unconscious events are introspectively clear-cut (though some disagree), there would seem to be a fairly sharp distinction within reality itself between conscious vs unconscious? Hence, consciousness would seem more like a natural kind.

In contrast, the type-A people usually believe that consciousness is a label we give to certain physical processes, and given the complexity of cognitive systems, it's plausible that different people would draw the boundaries between conscious vs unconscious in different places (if they care to make such a distinction at all). Daniel Dennett, Marvin Minsky, and Susan Blackmore are all type-A people and all of them make the case that the boundaries of consciousness are fuzzy (or even that the distinction between conscious and unconscious isn't useful at all).

In theory, there could be a type-A physicalist who believes that there will turn out to be some extremely clean distinction in the brain that captures the difference between consciousness vs unconsciousness, such that almost everyone would agree that this is the right way to carve things up. In this case, the type-A person could still believe consciousness will turn out to be a natural kind.

(I'm not an expert on either the type A/B distinction or natural kinds, so apologies if I'm misusing concepts here.)

Good points!

situations where it is easier for individuals to experience things multiple times in easy-to-process fashion and then form a behavioral response

It's not obvious to me that our ethical evaluation should match with the way our brains add up good and bad past experiences at the moment of deciding whether to do more of something. For example, imagine that someone loves to do extreme sports. One day, he has a severe accident and feels so much pain that he, in the moment, wishes he had never done extreme sports or maybe even wishes he had never been born. After a few months in recovery, the severity of those agonizing memories fades, and the temptation to do the sports returns, so he starts doing extreme sports again. At that future point in time, his brain has implicitly made a decision that the enjoyment outweighs the risk of severe suffering. But our ethical evaluation doesn't have to match how the evolved emotional brain adds things up at that moment in time. We might think that, ethically, the version of the person who was in extreme pain isn't compensated by other moments of the same person having fun.

Even if we think enjoyment can outweigh severe suffering within a life, many people object to extending such tradeoffs across lives, when one person is severely harmed for the benefit of others. The examples in David's comment were about interpersonal tradeoffs, rather than intrapersonal ones. It's true that people impose small risks of extreme suffering on some for the happiness of others all the time, like in the case of driving purely for leisure, but that still leaves open the question of whether we should do that. Most people in the West also eat chickens, but they shouldn't. (Cases like driving are also complicated by instrumental considerations, as Magnus would likely point out. Also, not driving for leisure might itself cause some people nontrivial levels of suffering, such as by worsening mental-health problems.)

That makes sense, and I think many longtermist animal advocates roughly agree. One concern I have is about what kinds of moral ideas vegism is reinforcing. For example, vegism is normally strongly associated with environmentalism, so maybe it reinforces the idea of "leaving wild animals alone" or even trying to increase populations of wild animals via habitat restoration and rewilding.

That said, as Jacy Reese has argued, maybe most animal-like suffering in the far future will be created by humans rather than natural, in which case how people view wild-animal suffering could be less relevant than how they view human-inflicted suffering like that in factory farms. OTOH, I think there's still a question of whether creatures that inhabit virtual worlds or ancestor simulations of the far future would be seen as "wild" or as directly harmed by humans.

Eight years later, I still think this post is basically correct. My argument is more plausible the more one expects a lot of parts of society to play a role in shaping how the future unfolds. If one believes that a small group of people (who can be identified in advance and who aren't already extremely well known) will have dramatically more influence over the future than most other parts of the world, then we might expect somewhat larger differences in cost-effectiveness.

One thing people sometimes forget about my point is that I'm not making any claims about the sign of impacts. I expect that in many cases, random charities have net negative impacts due to various side effects of their activities. The argument doesn't say that random charities are within a factor of 100 of the best charities, but rather that all charities have a lot of messy side effects on the world, and when those side effects are counted, it's unlikely that the total impact one charity has on the world (for good or ill) will be many orders of magnitude larger than the total impact of another charity (for good or ill).

I think maybe the main value of this post is to help people keep in mind how complex the effects of our actions are and that many different people are doing work that's somewhat relevant to what we care about (for good or ill). I think it's common for young people to feel that they're doing something special and that they have new, exciting answers that older generations lacked. Then as we mature and learn about more different parts of the world, we tend to become less convinced that we're correct or that the work we're doing is unique. On the other hand, it can still be the case that our work may be a lot more important (relative to our values) than most other things people are doing.

Yeah, that's a fair position to hold. :) The main reason I reject it is that my motivation to prevent torture is stronger than my motivation to care about how my values might change if I were to experience that bliss. Right now I feel the bliss isn't that important, while torture is. I'd rather continue caring about the torture than allow my loyalty to those enduring horrible experiences to be compromised by starting to care about some new thing that I don't currently find very compelling.

There's always a bit of a tricky issue regarding when moral reflection counts as progress and when it counts as just changing your values in ways that your current values would not endorse. At one extreme, it seems that merely learning new factual information (e.g., better data about the number of organisms that exist) is something we should generally endorse. At the other extreme, undergoing neurosurgery or taking drugs to convince you of some different set of values (like the moral urgency of creating paperclips) is generally something we'd oppose. I think having new experiences (especially new experiences that would require rewiring my brain in order to have them) falls somewhere in the middle between these extremes. It's unclear to me how much I should merely count it as new information versus how much I should see it as hijacking my current suffering-focused values. A new hedonic experience is not just new data but also changes one's motivations to some degree.

The other problem with the idea of caring about what we would care about upon further reflection is that what we would care about upon further reflection could be a lot of things depending on exactly how the reflection process occurs. That's not necessarily a reason against moral reflection at all, and I still like to do moral reflection, but it does at least reduce my feeling that moral reflection is definitely progress rather than just value drift.

Thanks. :) When I imagine moderate (not unbearable) pains versus moderate pleasures experienced by different people, my intuition is that creating a small number of new moderate pleasures that wouldn't otherwise exist doesn't outweigh a single moderate pain, but there's probably a large enough number (maybe thousands?) of newly created moderate pleasures that outweighs a moderate pain. I guess that would imply weak negative utilitarianism (NU) using this particular thought experiment. (Other thought experiments may yield different conclusions.)
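As a rough way to formalize that intuition: the functional form and the factor of ~10^3 below are illustrative assumptions, nothing more than a restatement of the "maybe thousands" guess above.

```latex
% Illustrative "weak NU" weighting; the weight is a placeholder.
% p_i: moderate pleasures, s_j: moderate pains, both on a common scale.
V \;=\; \sum_i p_i \;-\; w \sum_j s_j, \qquad w \sim 10^{3}
```

A large but finite $w$ corresponds to weak NU, where enough moderate pleasures can still outweigh a moderate pain; lexical views would in effect send $w$ to infinity for sufficiently severe suffering.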
