Brian_Tomasik


Comments

Some thoughts on vegetarianism and veganism

That makes sense, and I think many longtermist animal advocates roughly agree. One concern I have is about what kinds of moral ideas vegism is reinforcing. For example, vegism is normally strongly associated with environmentalism, so maybe it reinforces the idea of "leaving wild animals alone" or even trying to increase populations of wild animals via habitat restoration and rewilding.

That said, as Jacy Reese has argued, maybe most animal-like suffering in the far future will be created by humans rather than occurring naturally, in which case how people view wild-animal suffering could be less relevant than how they view human-inflicted suffering like that in factory farms. OTOH, I think there's still a question of whether creatures inhabiting the virtual worlds or ancestor simulations of the far future would be seen as "wild" or as directly harmed by humans.

Why Charities Usually Don't Differ Astronomically in Expected Cost-Effectiveness

Eight years later, I still think this post is basically correct. My argument is more plausible the more one expects a lot of parts of society to play a role in shaping how the future unfolds. If one believes that a small group of people (who can be identified in advance and who aren't already extremely well known) will have dramatically more influence over the future than most other parts of the world, then we might expect somewhat larger differences in cost-effectiveness.

One thing people sometimes forget about my point is that I'm not making any claims about the sign of impacts. I expect that in many cases, random charities have net negative impacts due to various side effects of their activities. The argument doesn't say that a random charity is at least 1/100 as good as the best charities; rather, it says that all charities have a lot of messy side effects on the world, and when those side effects are counted, it's unlikely that one charity's total impact on the world (for good or ill) will be many orders of magnitude larger than another charity's total impact (for good or ill).

I think maybe the main value of this post is to help people keep in mind how complex the effects of our actions are and that many different people are doing work that's somewhat relevant to what we care about (for good or ill). I think it's a common trend for young people to feel like they're doing something special and have new, exciting answers that the older generations didn't have. Then, as we mature and learn about more parts of the world, we tend to become less convinced that we're correct or that the work we're doing is unique. On the other hand, it can still be the case that our work may be a lot more important (relative to our values) than most other things people are doing.

Against Negative Utilitarianism

Yeah, that's a fair position to hold. :) The main reason I reject it is that my motivation to prevent torture is stronger than my motivation to care about how my values might change if I were to experience that bliss. Right now I feel the bliss isn't that important, while torture is. I'd rather continue caring about the torture than allow my loyalty to those enduring horrible experiences to be compromised by starting to care about some new thing that I don't currently find very compelling.

There's always a bit of a tricky issue regarding when moral reflection counts as progress and when it counts as just changing your values in ways that your current values would not endorse. At one extreme, it seems that merely learning new factual information (e.g., better data about the number of organisms that exist) is something we should generally endorse. At the other extreme, undergoing neurosurgery or taking drugs to convince you of some different set of values (like the moral urgency of creating paperclips) is generally something we'd oppose. I think having new experiences (especially new experiences that would require rewiring my brain in order to have them) falls somewhere between these extremes. It's unclear to me how much I should merely count it as new information versus how much I should see it as hijacking my current suffering-focused values. A new hedonic experience is not just new data but also changes one's motivations to some degree.

The other problem with the idea of caring about what we would care about upon further reflection is that the outcome of such reflection could be a lot of different things depending on exactly how the reflection process occurs. That's not necessarily an argument against moral reflection, and I still like to do it, but it does at least weaken my sense that moral reflection is definitely progress rather than just value drift.

Against Negative Utilitarianism

Thanks. :) When I imagine moderate (not unbearable) pains versus moderate pleasures experienced by different people, my intuition is that creating a small number of new moderate pleasures that wouldn't otherwise exist doesn't outweigh a single moderate pain, but there's probably a large enough number (maybe thousands?) of newly created moderate pleasures that outweighs a moderate pain. I guess that would imply weak NU using this particular thought experiment. (Other thought experiments may yield different conclusions.)
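
To make the exchange-rate idea concrete, here's one common way such a view gets formalized; the notation and the specific weight are just my own illustration, not anything from the post. Weak NU can be read as an otherwise utilitarian sum in which suffering gets an extra weight c > 1:

% Illustrative formalization only (my notation): weak NU as a weighted
% utilitarian sum where each suffering term s_j counts c times as much as a
% comparably intense happiness term h_i.
\[
  W \;=\; \sum_i h_i \;-\; c \sum_j s_j, \qquad c > 1.
\]

On this reading, saying that maybe thousands of new moderate pleasures are needed to outweigh one moderate pain of similar intensity amounts to putting c somewhere in the thousands, though as noted, other thought experiments might point to a different number.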

Against Negative Utilitarianism

Your point that I simply can't conceive of how good transhuman bliss might be is fair. :) I might indeed change my intuitions if I were to experience it (if that were possible; it'd require a lot of changes to my brain first). I guess we might change our intuitions about many things if we had more insight -- e.g., maybe we'd decide that hedonic experience itself isn't as important as some other things. There's a question of to what extent we would regard these changes of opinion as moral improvements versus corruption of our original values.

I guess I don't feel very motivated by the abstract thought that if I were better able to comprehend transhuman-level bliss I might better see how awesome it is and would therefore be more willing to accept the existence of some additional torture in order for more transhuman bliss to exist. I can see how some people might find that line of reasoning motivating, but to me, my reaction is: "No! Stop the extra torture! That's so obviously the right thing to do."

Against Negative Utilitarianism

To me it seems obvious that an end of life pinprick for ungodly amounts of transhuman bliss would be worth it.

I also have that intuition, probably even if someone else has to endure the pinprick without compensation. But my intuitions about the wrongness of "torture for bliss" are stronger, and if there's a conflict between the intuitions, I'll stick with the wrongness of "torture for bliss".

Thanks for the kind words. :) I hope debate is fun.

Against Negative Utilitarianism

but I'm not sure why that would be relevant to a negative utilitarian's view

People have preferences to have wonderful ends to their lives, to have net positive lives, etc. Those preferences may be frustrated by default (especially the first one; most people don't have wonderful ends to their lives) but would no longer be frustrated once the bliss was added. People's preferences regarding those things are typically much stronger than their preferences not to experience a single pinprick.

Good point about the babies. One might feel that babies and non-human animals still have implicit preferences for experiencing bliss in the future, but I agree that's a more tenuous claim.

Against Negative Utilitarianism

Regarding the example about bliss before death, there's another complication if we give weight to preference satisfaction even when a person doesn't know whether those preferences have been satisfied. I give a bit of weight to the value of satisfying preferences even if someone doesn't know about it, based on analogies to my case. (For example, I prefer for the world to contain less suffering even if I don't know that it does.)

Many people would prefer for the end of their lives to be wonderful, to experience something akin to heaven, etc., and adding the bliss at the end of their lives -- even unbeknownst to them until it happened -- would still satisfy those preferences. People might also have preferences like "I want to have a net happy life, even though I usually feel depressed" or "I want to have lots of meaningful experiences", and those preferences would also be satisfied by adding the end-of-life bliss.

Against Negative Utilitarianism

Thanks for the replies. :)

if we could make it so that at the end of people's lives they have an experience of unfathomable bliss right before death, containing more well-being than the sum total of all positive experiences that humans have experienced so far, at the cost of one pinprick, it would be extremely good to do so.

If people knew in advance that this would happen, it would relieve a great deal of suffering during people's lives. People could be much less afraid of death because the very end of their lives would be so nice. I imagine that anxiety about death, and pain near the end of life without hope of things getting better, are some of the biggest sources of suffering in most people's entire lives, so the suffering reduction here could be quite nontrivial.

So I think we'd have to specify that no one would know about this other than the person to whom it suddenly happened. In that case it still seems like probably something most people would strongly prefer. That said, the intuition in favor of it gets weaker if we specify that someone else would have to endure a pinprick with no compensation in order to provide this joy to a different person. And my intuition in favor of doing that is weaker than my intuition against torturing one person to create happiness for other people. (This brings up the open vs empty individualism issue again, though.)

When astronomical quantities of happiness are involved, like one minute of torture to create a googol years of transhuman bliss, I begin to have some doubts about the anti-torture stance, in part because I don't want to give in to scope neglect. That's why I give some moral credence to strongly suffering-focused weak NU. That said, if I were personally facing this choice, I would still say: "No way. The bliss isn't worth a minute of torture." (If I were already in the throes of temptation after a taste of transhuman-level bliss, maybe I'd have a different opinion. Conversely, after the first few seconds of torture, I imagine many people might switch their opinions to saying they want the torture to stop no matter what.)

I do think that the expected amount of torture in the future is smaller than the expected amount of transhuman bliss

I agree, assuming we count their magnitudes the way that a typical classical utilitarian would. It's plausible that the expected happiness of the future as judged by a typical classical utilitarian could be a few times higher than expected suffering, maybe even an order of magnitude higher. (Relative to my moral values, it's obvious that the expected badness of the future will far outweigh the expected goodness -- except in cases where a posthuman future would prevent lots of suffering elsewhere in the multiverse, etc.)

Against Negative Utilitarianism

Should the right-hand-side sum start at i=N+1 rather than i=0, because the utilities at level v occupy the i=0 to i=N slots?
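
For concreteness, here's my guess at the intended indexing; the symbols (including the upper limit M) are placeholders based on my reading of the formula, not the post's actual notation. If slots i = 0 through N hold the level-v utilities, then the remaining sum would pick up at i = N+1:

% Placeholder reconstruction (my notation): the first sum covers the level-v
% utilities in slots 0..N, so the second sum should start at i = N+1 rather
% than i = 0 to avoid counting those slots twice.
\[
  \sum_{i=0}^{N} u_i \;+\; \sum_{i=N+1}^{M} u_i
\]

Starting the second sum at i = 0 would count the level-v slots twice.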
