Brian_Tomasik

Great post! In addition to biases that increase antagonism, there are also biases that reduce antagonism. For example, the fact that most EAs see each other as friends can blind us to the possibility that we are in fact quite opposed on some important questions. Plausibly this is a good thing, because friendship is a form of cooperation that tends to work in the real world. But I think friendship does make us less likely to notice or worry about large value differences.

As an example, it's plausible to me that the EA movement overall somewhat increases expected suffering in the far future, though there's huge uncertainty about that. Because EAs tend to be friends with one another and admire each other's intellectual contributions, most negative-utilitarian EAs don't worry much about this fact and don't seem to, e.g., try to avoid promoting EA to new people out of concern that doing so may be net bad. It's much easier to just get along with your friends and not rock the boat, especially when people with values opposed to yours are the "cool kids" in EA. Overall, I think this friendliness is good, and it would be worse if EAs with different values spent more time trying to fight each other. I myself don't worry much about helping the EA movement, in part because it seems more cooperative not to worry about it too much. But I think it's sensible to at least check every once in a while that you're not massively harming your own values or being taken advantage of.

I think a lot of this comes down to one's personality. If you're extremely agreeable and conflict-averse, you probably shouldn't update even more in that direction from Magnus's article. Meanwhile, if you tend to get into fights a lot, you probably should lower your temperature, as Magnus suggests.

Thanks! Good to know. If you're just buying eyeballs, then there's roughly unlimited room for more funding (unless you were to get a lot bigger), so presumably there'd be less reason for funging dynamics. (And I assume you don't receive much or any money from big EA animal donors anyway.)

I'm honored that you're honored. :) Thanks for the work you do and for your answer here!

there are certain large grantors that I have been told prefer to fund nonprofits that already have raised at least a certain amount from other sources

Are those EA grantors? Or maybe you prefer not to say.

That makes sense about how having more donors helps with fundraising. I wonder if that's more true for a startup charity that has to demonstrate its legitimacy, while for a larger and more established charity, maybe it could go the other way?

Makes sense about ex ante vs ex post. :)

Are you more optimistic that various different kinds of reflection would tend to yield a fair amount of convergence? Or that our descendants will in fact undertake reflection on human values to a significant degree?

Makes sense. :) There are at least two different reasons why one might discourage taking more than one's fair share:

  1. Epistemic: As you said, there may be "collective wisdom" that an individual donor is missing.
  2. Game theoretic: If multiple donors who have different values compete in a game of chicken, this could be worse for all of them than if they can agree to cooperate.

Point #1 may be a reason on its own not to try to outcompete others. However, reason #2 depends on whether other donors are in fact playing chicken and whether it's feasible to achieve cooperation. If you genuinely have different values from other donors, you should try to do the best you can by your own values, which could include taking advantage of opportunities to donate less than your "fair" share. A toy illustration of the game-theoretic worry is sketched below.
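To make reason #2 concrete, here's a minimal payoff-matrix sketch of "donation chicken" with made-up numbers (a project worth V = 10 to each donor that costs C = 8 in total); the point is just that mutual holdout can leave both donors worse off than agreed cooperation:

```python
# Toy "donation chicken" between two donors (illustrative numbers only).
# The project is worth V to each donor and costs C in total; if both fund,
# they split the cost, and if neither funds, the project fails.

V, C = 10, 8

payoffs = {
    ("fund", "fund"):         (V - C / 2, V - C / 2),  # (6, 6): cooperate
    ("fund", "hold out"):     (V - C, V),              # (2, 10): B free-rides
    ("hold out", "fund"):     (V, V - C),              # (10, 2): A free-rides
    ("hold out", "hold out"): (0, 0),                  # (0, 0): project fails
}

for (a_move, b_move), (a_payoff, b_payoff) in payoffs.items():
    print(f"A {a_move}, B {b_move} -> A: {a_payoff}, B: {b_payoff}")
```

Each donor's best response to "fund" is to hold out (10 > 6), but mutual holdout (0, 0) is worse for both than mutual funding (6, 6), which is what makes an agreement to cooperate valuable.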

It's easy to feel warm fuzzies toward being "fair", but we can imagine scenarios where those fuzzies don't apply. For example, imagine that the USA and Russia are both contributing development aid to an international organization, and with any funds left over, Russia will buy attack drones from Iran. If there's an opportunity to get Russia to contribute more than its "fair share" to the development aid, leaving less money for drones, the USA should try to do that.

Maybe being the kind of person who would never even consider aiming to gain some advantage for one's own values is more effective at making cooperation actually happen, but being such a person could also lead to getting exploited. It seems non-obvious how exactly to best ensure that each party gives its fair share, especially when there are so many different possible donors to keep track of, and we have no way of knowing how much each entity would have contributed on its own.

Your question is fairly relevant to the discussion because if I thought there was net positive value in the lives of wild animals, then I would have a lot fewer concerns about non-welfare-reform animal charities.

I've had it on my to-do list to check out that video and paper, but I probably won't get to it any time soon, so for now I'll just reply to the slides you asked about. :)

Personally I would not want to live even as the two surviving adult fish, because they probably experience a number of moments of extreme suffering, at least during death if not earlier. They may be fearful of predators, they face unpleasant temperatures or other bad environmental conditions without being able to control them the way humans do with air conditioning and heating, they may face long periods of low food, there might be intraspecific aggression and sexual harassment (I don't know if those behaviors apply to Atlantic cod, but they are common in some fish species), and there would be many other hardships. Most of these moments of suffering probably wouldn't feel that bad, but a few of them might be unbearably awful.

I said that I "personally" wouldn't want to live as one of these surviving fish, but you might say that the real question is whether they would want to have these lives rather than not existing. We can't ask fish that question, but if we imagine humans having similar lives as these fish, we could ask such humans that question. Maybe many of those humans would say during many moments of their lives that they were on balance glad to exist. However, I suspect that during some moments, such as the peak pain of dying, they would often change their minds and wish they hadn't existed. Therefore, there's no single individual whom we can ask whether his/her life was net positive; there are multiple "individuals" within the animal's life, some of whom are glad to exist and some of whom are horrified to exist. How we weigh up these conflicting opinions is ultimately a judgment call, and no amount of further empirical data on wild-animal welfare will resolve it. I take a suffering-focused approach to this dilemma and say that it's not acceptable for the happier moments of the animal's life to impose unbearable suffering on some other moments of the animal's life. So for any animal that has moments of unmitigated, unbearable agony (as most animals do, if only when dying), its life is net negative in my view.

But most people don't take this suffering-focused approach. Many people think enough happy moments of life can outweigh something as awful as being eaten alive. So next I'll discuss the specific numbers in those slides.

If a baby fish is only enduring 10 seconds of agony when dying, as the first slide suggests, then it's presumably dying from predation (or maybe a severe physical injury like being crushed or something). The next slide suggests that maybe the pain of predation is 100/100, compared against a presumed positive welfare of 0.1/100 for ordinary life. So getting eaten alive is only 1000 times worse than the goodness of a typical moment of life. That might seem plausible if we only glance at the numbers, but it's not at all plausible if we actually think about what it implies. Imagine that you endure getting eaten alive for 1 minute. These numbers say that a mere 1000 minutes of ordinary life could compensate for that. 1000 minutes is 16.7 hours, slightly more than the amount of time a typical person is awake in a day. So this ratio says that even if you spend a minute every day experiencing what it's like to be eaten alive, then your life can still be welfare-neutral. I wonder if anyone would actually sign up for that. One of the least suffering-averse trade ratios I've heard someone endorse was that he'd be willing to experience being eaten alive for an extra week of life (IIRC; that conversation was a long time ago). (I guess there are also a few people who say even more extreme things like "I'd rather be alive and tortured forever than not exist", though I expect they'd change their minds pretty quickly when the torture started.)
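For what it's worth, here's a minimal sketch of the arithmetic above, using the slides' numbers (which I'm not endorsing):

```python
# The slides' (not my) welfare numbers: predation pain = 100/100,
# ordinary life = +0.1/100 per moment.

pain_of_predation = 100.0
ordinary_welfare = 0.1

ratio = pain_of_predation / ordinary_welfare
print(ratio)  # 1000.0: one unit of predation time offset by 1000 units of ordinary life

# One minute of being eaten alive would then be offset by:
offset_minutes = 1 * ratio   # 1000 minutes
print(offset_minutes / 60)   # ~16.7 hours, roughly one day's waking time
```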

One possible argument is that it's illegitimate to rely on our human intuitions about this tradeoff, because r-strategists may have evolved different pain-pleasure trade ratios based on the situation they face. For example, almost all fish babies will die by default, so if there's an opportunity to take a dangerous risk in order to gain some slight advantage, they should probably take it, since they have almost no chance of winning otherwise. Therefore, maybe they need to be less averse to suffering (or at least less afraid of suffering) than we would be. This might be the case, but it's a very theoretical argument, so I'm wary of putting too much stock in it. Of course, any estimate we have of how much suffering and pleasure exist in nature will be very speculative, so if I were a classical utilitarian who thought a minute of extreme suffering might be outweighed by a few days, weeks, or months of ordinary life in the wild, then I would have some uncertainty about the net hedonic balance of nature. But in my own case, I don't think it's ok to force extreme suffering on one for the pleasure of another -- much less to impose extreme suffering on 1,999,998 for the pleasure of 2. (If we assume a 10% hatch rate and a 10% chance of sentience, then this comparison is actually 19,999.98 vs 2. And if we look at individual organism-moments of experience, the 2 surviving fish have a lot of organism-moments.)
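Here's that comparison spelled out, using the illustrative assumptions from this paragraph (2,000,000 eggs, 2 survivors, 10% hatch rate, 10% chance of sentience):

```python
# Illustrative numbers from the paragraph above; none of these are measured values.

eggs = 2_000_000
survivors = 2
deaths = eggs - survivors        # 1,999,998 that never make it

hatch_rate = 0.10                # assumed fraction of eggs that hatch
p_sentience = 0.10               # assumed probability of sentience

expected_sentient_deaths = deaths * hatch_rate * p_sentience
print(expected_sentient_deaths)  # 19999.98, i.e. ~20,000 deaths vs 2 survivors
```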

As I mentioned, the slides seem to be assuming deaths by predation given how short the duration of suffering is. Death by almost anything else would probably take hours, days, or weeks, although the intensity of pain during that time would usually be a lot lower than during predation. This article says:

A new study has uncovered the reason why 90 percent of fish larvae are biologically doomed to die mere days after hatching. This understanding of the mechanism that kills off the majority of the world's fish larvae may help find a solution to the looming fish crisis in the world. The research suggests that "hydrodynamic starvation," or the physical inability to feed due to environmental incompatibility, is the reason so many fish larvae perish.

So maybe rather than 10 seconds, the period of pain while dying should be measured in hours or days? 1 day = 86,400 seconds. Of course, the badness of most of those seconds would be a lot less than 100/100.

See also: "Is There More Suffering Than Happiness in Nature? A Reply to Michael Plant".

Thanks! I'm confused about the acausal issue as well :) , and it's not my specialty. I agree that acausal trade (if it's possible in practice, which I'm uncertain about) could add a lot of weird dynamics to the mix. If someone were currently almost certain that Earth-originating space colonization was net bad, then this extra variance should make such a person less certain. (But it should also make people less certain who think space colonization is definitely good.) My own probabilities for Earth-originating space colonization being net bad vs good from a negative-utilitarian (NU) perspective are like 65% vs 35% or something, mostly because it's very hard to have much confidence in the sign of almost anything. (I think my own work is less than 65% likely to reduce net suffering rather than increase it.) Since you said your probabilities are like 60% vs 40%, maybe we're almost in agreement? (That said, the main reason I think Earth-originating space colonization might be good is that there may be a decent chance of grabby aliens within our future light cone whom we could prevent from colonizing, and it seems maybe ~50% likely that an NU would prefer human descendants to colonize rather than the aliens.)

My impression (which could be wrong) is that ECL, if it works, can only be a good thing for one's values, but generic acausal trade can cause harm as well as benefit. So I don't think the possibility of future acausal trade is necessarily a reason to favor Earth-originating intelligence (even a fully NU intelligence) reaching the stars, but I haven't studied this issue in depth.

I suspect that preserving one's goals across multiple rounds of building smarter successors is extremely hard, especially in a world as chaotic and multipolar as ours, so I think the most likely intelligence to originate from Earth will be pretty weird relative to human values -- some kind of Moloch creature. Even if something like human values does retain control, I expect NUs to represent a small faction. The current popularity of a value system (especially among intelligent young people) seems to me like a good prior for how popular it will be in the future.

I think people's values are mostly shaped by emotions and intuitions, with rational arguments playing some role but not a determining role. If rational arguments were decisive, I would expect more convergence among intellectuals about morality than we in fact see. I'm mostly NU based on my hard wiring and life experiences, rather than based on abstract reasons. People sometimes become more or less suffering-focused over time due to a combination of social influence, life events, and philosophical reflection, but I don't think philosophy alone could ever be enough to create agreement one way or the other. Many people who are suffering-focused came to that view after experiencing significant life trauma, such as depression or a painful medical condition (and sometimes people stop being NU after their depression goes away). Experiencing such life events could be part of a reflection process, but experiencing other things that would reduce the salience of suffering would also be part of the reflection process, and I don't think there's any obvious attractor here. It seems to me more like causing random changes of values in random directions. The output distribution of values from a reflection process is probably sensitive to the input distribution of values and the choice of parameters regarding what kinds of thinking and life experiences would happen in what ways.

In any case, I don't think ideal notions of "values on reflection" are that relevant to what actually ends up happening on Earth. Even if human values control the future, I assume it will be in a similar way as they control the present, with powerful and often self-interested actors fighting for control, mostly in the economic and political spheres rather than by sophisticated philosophical argumentation. The idea that a world that can't stop nuclear weapons, climate change, AI races, or wars of aggression could somehow agree to undertake and be bound by the results of a Long Reflection seems prima facie absurd to me. :) Philosophy will play some role in the ongoing evolution of values, but so will lots of other random factors. (To the extent that "Long Reflection" just means an ideal that a small number of philosophically inclined people try to crudely approximate, it seems reasonable. Indeed, we already have a community of such people.)

In reading more about this topic, I discovered that there has already been a lot of discussion about donor coordination on the EA Forum that I missed. (I don't read the Forum very actively.) EAs generally think it's bad to engage in a game of chicken where you try to let other people fund something first, at least within the EA community -- e.g., Cotton-Barratt (2021).

My original thought behind making this post was that the extent of funging for animal donations seemed like a useful thing for various animal donors to be aware of, to be more informed about their giving choices. However, I can imagine that some people see it as a net-negative topic to bring up, because it may encourage more games of donation chicken among donors. My post also mentioned my criticisms of some existing EA animal charities, but I could have done that separately from the discussion of donation funging. I still think it's reasonable for people to be more informed about how funging works, but I also see the downside of broadcasting that discussion.

it's total on-farm deaths that matter more to me than the rates, so just increasing the prices enough could reduce demand enough to reduce those deaths.

If cage-free hens are less productive, then there might still be more total deaths in cage-free despite higher prices?

I don't have a copy of the book to check, but I think Compassion, by the Pound says that cage-free hens lay fewer eggs.

A 2006 study gives some specific numbers, although this is for free-range rather than cage-free:

Layers from the free range system, compared to those kept in cages, laid fewer eggs, (266:295), [...] they had higher mortality rate (6.80 % : 5.50 %)

These sources are 1-2 decades old, so maybe things have changed since then, though probably the trend of cage-free hens being somewhat less productive remains true.
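As a rough, back-of-the-envelope sketch using the 2006 study's free-range vs. caged numbers quoted above (treating the mortality percentage as on-farm deaths per hen over the laying period, which is a simplifying assumption):

```python
# Eggs laid per hen and mortality rate, from the 2006 study quoted above
# (free-range rather than cage-free, and possibly outdated).

eggs_caged, mortality_caged = 295, 0.055
eggs_free, mortality_free = 266, 0.068

deaths_per_egg_caged = mortality_caged / eggs_caged  # ~1.9e-4
deaths_per_egg_free = mortality_free / eggs_free     # ~2.6e-4

ratio = deaths_per_egg_free / deaths_per_egg_caged
print(f"~{ratio:.2f}x the on-farm deaths per egg")   # ~1.37x for free-range

# Demand would need to fall by roughly this much for total deaths to break even:
print(f"break-even demand drop: ~{1 - 1 / ratio:.0%}")  # ~27%
```

So on these numbers, the price increase would need to cut demand by roughly a quarter before total on-farm deaths went down.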

Good to know! Are there any other slaughter-focused groups besides HSA? Maybe you mean groups for which slaughter is one of their major priorities, like Shrimp Welfare Project and various other charities working on chickens and fish?

I saw a 2021 Open Phil grant "to Animal Protection Denmark to support research on ways to improve the welfare of wild-caught fish." But that organization itself does lots of stuff (including non-farm-animal work).

Off topic: There's a line in the movie A Cinderella Story: Christmas Wish that might be applicable to you: "was also credited with helping shift the Animal Rights movement to a more utilitarian focus including a focus on chicken."
