All of JesseClifton's Comments + Replies

In principle the proposal in that post is supposed to encompass a larger set of bracketing-ish things than the proposal in this post, e.g., bracketing out reasons that are qualitatively weaker in some sense. But the latter kind of thing isn't properly worked out.

Good question! Yeah, I can’t think of a real-world process about which I’d want to have maximally imprecise beliefs. (The point of choosing a “demon” in the example is that we would have good reason to worry the process is adversarial if we’re talking about a demon…)

(Is this supposed to be part of an argument against imprecision in general / sufficient imprecision to imply consequentialist cluelessness? Because I don’t think you need anywhere near maximally imprecise beliefs for that. The examples in the paper just use the range [0,1] for simplicity.)

It sounds like you reject this kind of thinking:

cluelessness about some effects (like those in the far future) doesn’t override the obligations given to us by the benefits we’re not clueless about, such as the immediate benefits of our donations to the global poor

I don't think that's unreasonable. Personally, I strongly have the intuition expressed in that quote, though I'm definitely not certain that I'll endorse it on reflection.

Wouldn't the better response be to find things we aren't clueless about

The background assumption in this post is that there are n... (read more)

2
JackM
I do reject this thinking because it seems to imply either:

* Embracing non-consequentialist views: I don't have zero credence in deontology or virtue ethics, but to just ignore far future effects I feel I would have to have very low credence in consequentialism, given the expected vastness of the future.
* Rejecting impartiality: For example, saying that effects closer in time are inherently worth more than those farther away. For me, utility is utility regardless of who enjoys it or when.

There's certainly a lot of stuff out there I still need to read (thanks for sharing the resources), but I tend to agree with Hilary Greaves that the way to avoid cluelessness is to target interventions whose intended long-run impact dominates plausible unintended effects. For example, I don't think I am clueless about the value of spreading concern for digital sentience (in a thoughtful way). The intended effect is to materially reduce the probability of vast future suffering in scenarios that I assign non-trivial probability. Plausible negative effects, for example people feeling preached to about something they see as stupid leading to an even worse outcome, seem like they can be mitigated / just don't compete overall with the possibility that we would be alerting society to a potentially devastating moral catastrophe. I'm not saying I'm certain it would go well (there is always ex-ante uncertainty), but I don't feel clueless about whether it's worth doing or not.

And if we are helplessly clueless about everything, then I honestly think the altruistic exercise is doomed and we should just go and enjoy ourselves.

Thanks, Ben!

It depends on what the X is. In most real-world cases I don’t think our imprecision ought to be that extreme. (It will also be vague, not “[0,1]” or “(0.01, 0.99)” but, “eh, seems like lots of different precise beliefs are defensible as long as they’re not super close to 1 or 0”, and in that state it will feel reasonable to say that we should strictly prefer such an extreme bet.)

But FWIW I do think there are hypothetical cases where incomparability looks correct. Suppose a demon appears to me and says “The F of every X is between 0 and 1. What’... (read more)

2
Michael St Jules 🔸
FWIW, unless you have reason otherwise (you may very well think some Fs are more likely than others), there's some symmetry here between any function F and the function 1-F, and I think if you apply it, you could say P(F > 1/2) = P(1-F < 1/2) = P(F < 1/2), so P(F < 1/2) ≤ 1/2, and strictly less iff P(F = 1/2) > 0. If you can rule out P(F = 1/2) > 0 (say by an additional assumption), or the bet were on F ≤ 1/2 instead of F < 1/2, then the probability would just be 1/2.
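A worked restatement of the step above, under the stated symmetry assumption (that F and 1−F are equally plausible a priori); this just spells out the arithmetic already in the comment:

```latex
\begin{align*}
P(F > \tfrac{1}{2}) &= P(1 - F < \tfrac{1}{2}) && \text{(same event)}\\
                    &= P(F < \tfrac{1}{2})     && \text{(symmetry between } F \text{ and } 1-F\text{)}\\
1 &= P(F < \tfrac{1}{2}) + P(F = \tfrac{1}{2}) + P(F > \tfrac{1}{2})
   = 2\,P(F < \tfrac{1}{2}) + P(F = \tfrac{1}{2})\\
\Rightarrow\ P(F < \tfrac{1}{2}) &= \frac{1 - P(F = \tfrac{1}{2})}{2} \le \tfrac{1}{2},
  \quad\text{strictly less iff } P(F = \tfrac{1}{2}) > 0.
\end{align*}
```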
2
Ben_West🔸
Thanks, Jesse. Is there a way that we could actually do this? Like choose some F(X) which is unknown to both of us but guaranteed to be between 0 and 1, and if it's less than 1/2 I pay you a dollar and if it's greater than 1/2 you pay me some large amount of money. I feel pretty confident I would take that bet if the selection of F was not obviously antagonistic towards me, but maybe I'm not understanding the types of scenarios you are imagining.

Some reasons why animal welfare work seems better:

  • I put some weight on a view which says: “When doing consequentialist decision-making, we should set the net weight of the reasons we have no idea how to weigh up (e.g., long-run flowthrough effects) to zero.” This probably implies restricting attention to near-term consequences, and animal welfare interventions seem best for that. (I just made a post that discusses this approach to decision-making.)
    • I think this kind of view is hard to make theoretically satisfying, but it does a good enough job of capt
... (read more)

(Thanks! Haven't forgotten about this, will try to respond soon.)

Thanks for this! IMO thinking about what it even means to do good under extreme uncertainty is still underrated.

I don’t see how this post addresses the concern about cluelessness, though.

My problem with the construction analogy is: Our situation is more like, whenever we place a brick we might also be knocking bricks out of other parts of the house. Or placing them in ways that preclude good building later. So we don’t know if we’re actually contributing to the construction of the house on net.

On your takeaway at the bottom, it seems to be: “if someone doi... (read more)

2
finm
Thanks! I'm not trying to resolve concerns around cluelessness in general, and I agree there are situations (many or even most of the really tough ‘cluelessness’ cases) where the whole ‘is this constructive?’ test isn't useful, since that can be part of what you're clueless about, or other factors might dominate.

Well, I'm saying the ‘is this constructive?’ test is a way to latch on to a certain kind of confidence, viz. the confidence that you are moving towards a better world. If others also take constructive actions towards similar outcomes, and/or in the fullness of time, you can be relatively confident you helped get to that better world. This is not the same thing as saying your action was right, since there are locally harmful ways to move toward a better world. And so I don't have as much to say about when or how much to privilege this rule!

We at CLR are now using a different definition of s-risks.

New definition:

S-risks are risks of events that bring about suffering in cosmically significant amounts. By “significant”, we mean significant relative to expected future suffering.

Note that it may turn out that the amount of suffering that we can influence is dwarfed by suffering that we can’t influence. By “expectation of suffering in the future” we mean “expectation of action-relevant suffering in the future”.

2
Stefan_Schubert
I'm wondering a bit about this definition. One interpretation of it is that you're saying something like this: "The expected future suffering is X. The risk that event E occurs is an S-risk if and only if E occurring raises the expected future suffering significantly above X."

But I think that definition doesn't work. Suppose that it is almost certain (99.9999999%) that a particular event E will occur, and that it would cause a tremendous amount of suffering. Then the expected future suffering is already very large (if I understand that concept correctly). And, because E is virtually certain to occur, it occurring will not actually bring about suffering in cosmically significant amounts relative to expected future suffering. And yet intuitively this is an S-risk, I'd say.

Another interpretation of the definition is: "The expected future suffering is X. The risk that event E occurs is an S-risk if and only if the difference in suffering between E occurring and E not occurring is significant relative to X." That does take care of that issue, since, by hypothesis, the difference between E occurring and E not occurring is a tremendous amount of suffering.

Alternatively, you may want to say that the risk that E occurs is an S-risk if and only if E occurring brings about a significant amount of suffering relative to what we expect to occur from other causes. That may be a more intuitive way of thinking about this. A feature of this definition is that the risk of an event E1 occurring can be an S-risk even if it occurring would cause much less suffering than another event E2 would, provided that E1 is much more likely to occur than E2. But if we increase our credence that E2 will occur, then the risk of E1 occurring will cease to be an S-risk, since it no longer will cause a significant amount of suffering relative to expected future suffering. I guess that some would find that unintuitive, and that something being an S-risk shouldn't depend on us adjusting our credences…

I found it surprising that you wrote: …

Because to me this is exactly the heart of the asymmetry. It’s uncontroversial that creating a person with a bad life inflicts on them a serious moral wrong. Those of us who endorse the asymmetry don’t see such a moral wrong involved in not creating a happy life.

+1. I think many who have asymmetric sympathies might say that there is a strong aesthetic pull to bringing about a life like Michael’s, but that there is an overriding moral responsibility not to create intense suffering.

Very late here, but a brainstormy thought: maybe one way one could start to make a rigorous case for RDM is to suppose that there is a “true” model and prior that you would write down if you had as much time as you needed to integrate all of the relevant considerations you have access to. You would like to make decisions in a fully Bayesian way with respect to this model, but you’re computationally limited so you can’t. You can only write down a much simpler model and use that to make a decision.

We want to pick a policy which, in some sense, has low regret... (read more)
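As a minimal sketch of the regret idea (hypothetical policies, candidate models, and utilities; this is illustrative, not a description of any actual method), one could compare policies by their worst-case regret across a small set of simpler candidate models standing in for the "true" model we can't write down:

```python
# Minimax-regret policy choice over a handful of candidate models.
# All names and numbers below are made up purely for illustration.
candidate_models = ["optimistic", "pessimistic", "status_quo"]
policies = ["fund_A", "fund_B", "split"]

# expected_utility[model][policy]: hypothetical expected utilities
expected_utility = {
    "optimistic":  {"fund_A": 10.0, "fund_B": 4.0, "split": 7.0},
    "pessimistic": {"fund_A": -5.0, "fund_B": 2.0, "split": 1.0},
    "status_quo":  {"fund_A": 3.0,  "fund_B": 3.0, "split": 3.0},
}

def max_regret(policy):
    # Regret under a model = best achievable utility under that model
    # minus this policy's utility under that model; take the worst case.
    return max(
        max(expected_utility[m].values()) - expected_utility[m][policy]
        for m in candidate_models
    )

best = min(policies, key=max_regret)
print(best, {p: max_regret(p) for p in policies})  # "split" wins here
```

Under these made-up numbers the hedged policy ("split") minimizes worst-case regret, which is the flavor of robustness the comment is gesturing at.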

3
Michael St Jules 🔸
This makes sense to me, although I think we may not be able to assume a unique "true" model and prior even after all the time we want to think and use information that's already accessible. I think we could still have deep uncertainty after this; there might still be multiple distributions that are "equally" plausible, but no good way to choose a prior over them (with finitely many, we could use a uniform prior, but this still might seem wrong), so any choice would be arbitrary and what we do might depend on such an arbitrary choice.

For example, how intense are the valenced experiences of insects and how much do they matter? I think no amount of time with access to all currently available information and thoughts would get me to a unique distribution. Some or most of this is moral uncertainty, too, and there might not even be any empirical fact of the matter about how much more intense one experience is than another (I suspect there isn't).

Or, for the US election, I think there was little precedent for some of the considerations this election (how coronavirus would affect voting and polling), so thinking much more about them could have only narrowed the set of plausible distributions so much. I think I'd still not be willing to commit to a unique AI risk distribution with as much time as I wanted and perfect rationality but only the information that's currently accessible. See also this thread.

nil already kind of addressed this in their reply, but it seems important to keep in mind the distinction between the intensity of a stimulus and the moral value of the experience caused by the stimulus. Statements like “experiencing pain just slightly stronger than that threshold” risk conflating the two. And, indeed, if by “pain” you mean “moral disvalue” then to discuss pain as a scalar quantity begs the question against lexical views.

Sorry if this is pedantic, but in my experience this conflation often muddles discussions about lexical views.

0
Anthony DiGiovanni
Good point. I would say I meant intensity of the experience, which is distinct both from intensity of the stimulus and moral (dis)value. And I also dislike seeing conflation of intensity with moral value when it comes to evaluating happiness relative to suffering.

Some Bayesian statisticians put together prior choice recommendations. I guess what they call a "weakly informative prior" is similar to your "low-information prior".

Nice comment; I'd also like to see a top-level post.

One quibble: Several of your points risk conflating "far-future" with "existential risk reduction" and/or "AI". But there is far-future work that is non-x-risk focused (e.g. Sentience Institute and Foundational Research Institute) and non-AI-focused (e.g. Sentience Institute) which might appeal to someone who shares some of the concerns you listed.

Distribution P is your credence. So you are saying "I am worried that my credences don't have to do with my credence." That doesn't make sense. And sure we're uncertain of whether our beliefs are accurate, but I don't see what the problem with that is.

I’m having difficulty parsing the statement you’ve attributed to me, or mapping it onto what I’ve said. In any case, I think many people share the intuition that “frequentist” properties of one’s credences matter. People care about calibration training and Brier scores, for instance. It’s not immediately clear to me why it’s nonsensical to say “P is my credence, but should I trust it?”

It sounds to me like this scenario is about a difference in the variances of the respective subjective probability distributions over future stock values. The variance of a distribution of credences does not measure how “well or poorly supported by evidence” that distribution is.

My worry about statements of the form “My credences over the total future utility given intervention A are characterized by distribution P” does not have to do with the variance of the distribution P. It has to do with the fact that I do not know whether I should trust the procedures that generated P to track reality.

2
kbog
Well in this case at least, it is apparent that the differences are caused by how well or poorly supported people's beliefs are. It doesn't say anything about variance in general. Distribution P is your credence. So you are saying "I am worried that my credences don't have to do with my credence." That doesn't make sense. And sure we're uncertain of whether our beliefs are accurate, but I don't see what the problem with that is.

whether you are Bayesian or not, it means that the estimate is robust to unknown information

I’m having difficulty understanding what it means for a subjective probability to be robust to unknown information. Could you clarify?

subjective expected utility theory is perfectly capable of encompassing whether your beliefs are grounded in good models.

Could you give an example where two Bayesians have the same subjective probabilities, but SEUT tells us that one subjective probability is better than the other due to better robustness / resulting from a better model / etc.?

2
kbog
It means that your credence will change little (or a lot) depending on information which you don't have.

For instance, if I know nothing about Pepsi then I may have a 50% credence that their stock is going to beat the market next month. However, if I talk to a company insider who tells me why their company is better than the market thinks, I may update to 55% credence.

On the other hand, suppose I don't talk to that guy, but I did spend the last week talking to lots of people in the company and analyzing a lot of hidden information about them which is not available to the market. And I have found that there is no overall reason to expect them to beat the market or not - the info is good just as much as it is bad. So I again have a 50% credence. However, if I talk to that one guy who tells me why the company is great, I won't update to 55% credence, I'll update to 51% or not at all.

Both people here are being perfect Bayesians. Before talking to the one guy, they both have 50% credence. But the latter person has more reason to be surprised if Pepsi diverges from the mean expectation.
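One toy way to cash out this difference (my sketch, not necessarily what the comment above has in mind): give both people the same 50% point credence but different amounts of evidence behind it, e.g. Beta(1, 1) versus Beta(50, 50), and update both on the same single favorable observation:

```python
# Two agents with the same 50% credence but different evidential support,
# updating on one favorable observation. Parameters are illustrative only.
from scipy.stats import beta

uninformed = (1, 1)     # knows nothing about the company
informed = (50, 50)     # has digested lots of balanced evidence

for a, b in (uninformed, informed):
    prior_mean = beta(a, b).mean()          # both start at 0.50
    posterior_mean = beta(a + 1, b).mean()  # after one piece of good news
    print(round(prior_mean, 3), round(posterior_mean, 3))
# uninformed: 0.5 -> ~0.667; informed: 0.5 -> ~0.505
```

The uninformed agent moves a lot on one insider's say-so; the well-informed agent barely moves, matching the 55% versus 51% contrast above.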

For a Bayesian, there is no sense in which subjective probabilities are well or poorly supported by the evidence, unless you just mean that they result from calculating the Bayesian update correctly or incorrectly.

Likewise there is no true expected utility to estimate. It is a measure of an epistemic state, not a feature of the external world.

I am saying that I would like this epistemic state to be grounded in empirical reality via good models of the world. This goes beyond subjective expected utility theory. As does what you have said about robustness and being well or poorly supported by evidence.

2
kbog
Yes, whether you are Bayesian or not, it means that the estimate is robust to unknown information. No, subjective expected utility theory is perfectly capable of encompassing whether your beliefs are grounded in good models. I don't see why you would think otherwise. No, everything that has been written on the optimizer's curse is perfectly compatible with subjective expected utility theory.

But that just means that people are making estimates that are insufficiently robust to unknown information and are therefore vulnerable to the optimizer's curse.

I'm not sure what you mean. There is nothing being estimated and no concept of robustness when it comes to the notion of subjective probability in question.

5
kbog
The expected value of your actions is being estimated. Those estimates are based on subjective probabilities and can be well or poorly supported by evidence.

I can’t speak for the author, but I don’t think the problem is the difficulty of “approximating” expected value. Indeed, in the context of subjective expected utility theory there is no “true” expected value that we are trying to approximate. There is just whatever falls out of your subjective probabilities and utilities.

I think the worry comes more from wanting subjective probabilities to come from somewhere — for instance, models of the world that have a track-record of predictive success. If your subjective probabilities are not grounded in such a ... (read more)

4
kbog
But that just means that people are making estimates that are insufficiently robust to unknown information and are therefore vulnerable to the optimizer's curse. It doesn't imply that taking the expected value is not the right solution to the idea of cluelessness.

I'll second this. In double-cruxing EV calcs with others, it is clear that they are often quite parameter-sensitive and that awareness of such parameter sensitivity is rare / does not come for free. Just the opposite: trying to do sensitivity analysis on what are already fuzzy qualitative-to-quantitative heuristics is quite stressful/frustrating. Results from sufficiently complex EV calcs usually fall prey to ontology failures, i.e., key assumptions turned out wrong 25% of the time in studies of analyst performance in the intelligence community, and most scenarios have more than 4 key assumptions.

Thanks for writing this. I think the problem of cluelessness has not received as much attention as it should.

I’d add that, in addition to the brute good and x-risks approaches, there are approaches which attempt to reduce the likelihood of dystopian long-run scenarios. These include suffering-focused AI safety and values-spreading. Cluelessness may still plague these approaches, but one might argue that they are more robust to both empirical and moral uncertainty.

2
Milan Griffes
Good point, I was implicitly considering s-risks as a subset of x-risks.

Lazy solutions to problems of motivating, punishing, and experimenting on digital sentiences could also involve astronomical suffering.

Right, I'm asking how useful or dangerous your (1) could be if it didn't have very good models of human psychology - and therefore didn't understand things like "humans don't want to be killed".

Great piece, thank you.

Regarding "learning to reason from humans", to what extent do you think having good models of human preferences is a prerequisite for powerful (and dangerous) general intelligence?

Of course, the motivation to act on human preferences is another matter - but I wonder if at least the capability comes by default?

0
Daniel_Dewey
My guess is that the capability is extremely likely, and the main difficulties are motivation and reliability of learning (since in other learning tasks we might be satisfied with lower reliability that gets better over time, but in learning human preferences unreliable learning could result in a lot more harm).
0
WillPearson
My own 2 cents. It depends a bit on what form of general intelligence is made first. There are at least two possible models:

1. A superintelligent agent with a specified goal
2. An external brain lobe

With the first, you need to be able to specify human preferences in the form of a goal, which enables it to pick the right actions. The external brain lobe would start not very powerful and not come with any explicit goals, but would be hooked into the human motivational system and develop goals shaped by human preferences. HRAD is explicitly about the first. I would like both to be explored.

Have animal advocacy organizations expressed interest in using SI's findings to inform strategic decisions? To what extent will your choices of research questions be guided by the questions animal advocacy orgs say they're interested in?

1
Jacy
We've been in touch with most EAA orgs (ACE, OPP, ACE top/standout charities) and they have expressed interest. We haven't done many hard pitches so far like, "The research suggests X. We think you should change your tactics to reflect that, by shifting from Y to Z, unless you have evidence we're not aware of." We hope to do that in the future, but we are being cautious and waiting until we have a little more credibility and track record. We have communicated our findings in softer ways to people who seem to appreciate the uncertainty, e.g. "Well, our impression of this social movement is that it's evidence for Z tactics, but we haven't written a public report on that yet and it might change by the time we finish the case study."

I (Jacy) would guess that our research-communication impact will be concentrated in that small group of animal advocacy orgs who are relatively eager to change their minds based on research, and perhaps in an even smaller group (e.g. just OPP and ACE). Their interests do influence us to a large extent, not just because it's where they're more open to changing their minds, but because we see them as intellectual peers. There are some qualifications we account for, such as SI having a longer-term focus (in my personal opinion, not sure they'd agree) than OPP's farmed animal program or ACE.

I'd say that the interests of less-impact-focused orgs are only a small factor, since the likelihood of change and potential magnitude of change seem quite small.

Strong agreement. Considerations from cognitive science might also help us to get a handle on how difficult the problem of general intelligence is, and the limits of certain techniques (e.g. reinforcement learning). This could help clarify our thinking on AI timelines as well as the constraints which any AGI must satisfy. Misc. topics that jump to mind are the mental modularity debate, the frame problem, and insight problem solving.

This is a good article on AI from a cog sci perspective: https://arxiv.org/pdf/1604.00289.pdf

0
Kaj_Sotala
Yay, correctly guessed which article that was before clicking on the link. :-)

Yes, I think you're right, at least when prices are comparable.

More quick Bayes: Suppose we have a Beta(0.01, 0.32) prior on the proportion of people who will pledge. I choose this prior because it gives a point-estimate of a ~3% chance of pledging, and a probability of ~95% that the chance of pledging is less than 10%, which seems prima facie reasonable.

Updating on your data using a binomial model yields a Beta(0.01, 0.32 + 14) distribution, which gives a point estimate of < 0.1% and a ~99.9% probability that the true chance of pledging is less than 10%.
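For concreteness, a minimal sketch of the update just described, assuming the data behind "+ 14" were 14 people asked and 0 pledges (my reading; the figures below should roughly reproduce the numbers in the comment):

```python
# Conjugate beta-binomial update for the pledge-rate example.
from scipy.stats import beta

prior = beta(0.01, 0.32)
print(prior.mean())      # point estimate, roughly 0.03 (~3%)
print(prior.cdf(0.10))   # P(pledge rate < 10%), roughly 0.95

# Assuming 0 pledges and 14 non-pledges: alpha += 0, beta += 14
posterior = beta(0.01 + 0, 0.32 + 14)
print(posterior.mean())     # roughly 0.0007 (< 0.1%)
print(posterior.cdf(0.10))  # roughly 0.999
```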

Thanks for writing this up.

The estimated differences due to treatment are almost certainly overestimates due to the statistical significance filter (http://andrewgelman.com/2011/09/10/the-statistical-significance-filter/) and social desirability bias.

For this reason and the other caveats you gave, it seems like it would be better to frame these as loose upper bounds on the expected effect, rather than point estimates. I get the feeling people often forget the caveats and circulate conclusions like "This study shows that $1 donations to newspaper ... (read more)

...as such it reads "There are many important things being neglected. This is an important thing. Therefore it is the most important thing to do."

I never meant to say that spreading anti-speciesism is the most important thing, just that it's still very important and it's not obvious that its relative value has changed with the election.

Trump may represent an increased threat to democratic norms and x-risk, but that doesn't mean the marginal value of working in those areas has changed. Perhaps it has. We'd need to see concrete examples of how EAs who previously had a comparative advantage in helping animals now can do better by working on these other things.

my personal position on animal advocacy is that the long-term future of animals on Earth is determined almost entirely by how much humans have their shit together in the long run

This may be true of massive systemic changes for an... (read more)

1
Ben Pace
I'd like to see an analysis of exactly what the opportunity costs are there, before endorsing one. This analysis has no differential analysis, and as such it reads "There are many important things being neglected. This is an important thing. Therefore it is the most important thing to do."

Agreed that large updates about things like the prevalence of regressive attitudes and the fragility of democracy should have been made before the election. But Trump's election itself has changed many EA-relevant parameters - international cooperation, x-risk, probability of animal welfare legislation, environmental policy, etc. So there may be room for substantial updates on the fact that Trump and a Republican Congress will be governing.

That said, it's not immediately obvious to me how the marginal value of any EA effort has changed, and I worry about major updates being made out of a kneejerk reaction to the horribleness of someone like Trump being elected.

I'd be interested to hear a case for moving from animal advocacy to politics. If your comparative advantage was in animal advocacy before the election, it's not immediately obvious to me that switching makes sense.

In the short term, animal welfare concerns dominate human concerns, and your marginal contribution to animal welfare via politics is unclear: welfare reform in the US is happening mostly through corporate reform, and it's dubious that progressive politics is even good for wild animals due to the possible harms of environmentalism.

Looking far... (read more)

8
Qiaochu_Yuan
The case is for defending the conditions under which it's even possible to have a group of privileged people sitting around worrying about animal advocacy while the world is burning. To the extent that you think 1) Trump is a threat to democratic norms (as described e.g. by Julia Galef) / risks nuclear war etc. and isn't just a herald of more conservative policy, and 2) most liberals galvanized by the threat of Trump are worrying more about the latter than the former, there's room for EAs to be galvanized by the threat of Trump in a more bipartisan way, as described e.g. by Paul Christiano.

(In general, my personal position on animal advocacy is that the long-term future of animals on Earth is determined almost entirely by how much humans have their shit together in the long run, and that I find it very difficult to justify working directly to save animals now relative to working to help humans get their shit more together.)

Thank you for opening this discussion.

It’s not clear to me that animal advocacy in general gets downweighted:

-For the short term, wild and farmed animal welfare dominates human concerns. I'd be interested to hear a case that animals are better served by some EAs switching to progressive politics more generally. I'm doubtful that EA contributions to politics would indirectly benefit welfare reform and wild animal suffering efforts. Welfare reform in the United States is taking place largely through corporate reform. The impact of progressive vs conserva... (read more)

What do you mean by "too speculative"? You mean the effects of agriculture on wildlife populations are speculative? The net value of wild animal experience is unclear? Why not quantify this uncertainty and include it in the model? And is this consideration that much more speculative than the many estimates re: the far future on which your model depends?

Also, "I thought it was unlikely that I'd change my mind" is a strange reason for not accounting for this consideration in the model. Don't we build models in the first place because we don't trust such intuitions?

2
MichaelDickens
I don't actually remember what I meant by "too speculative", that was probably not a helpful thing to say.

There are thousands of semi-plausible things I could try to model. It would not be worth the effort to try to model all of them. I have to do some pre-prioritization about which things to model, so I pick the things that seem the most important. Specifically, I think about the expected value of modeling something (how likely am I to change my mind and how much does it matter if I do?) and how long it will take to see if it seems worth it. Sometimes I do this explicitly and sometimes implicitly.

I don't really know how likely I am to change my mind, but I can make a rough guess. It wouldn't make sense to try to model thousands of things on the off chance that any of them could change my mind.

If you would like to see how factory farming affects wild animal suffering, by all means create a cost-effectiveness estimate and share it.

Thanks for writing this up! Have you taken into account the effects of reductions in animal agriculture on wildlife populations? I didn't see terms for such effects in your cause prioritization app.

0
MichaelDickens
I'm certainly aware of that consideration. I didn't include it because it seemed too speculative and not worth the effort (I thought it was unlikely that I'd change my mind about anything based on the result). I don't have a monopoly on cost-effectiveness calculations though, you can write your own, or even fork my code if you know a little C++.

It's possible that preventing human extinction is net negative. A classical utilitarian discusses whether preventing human extinction would be net negative or positive here: http://mdickens.me/2015/08/15/is_preventing_human_extinction_good/. Negative-leaning utilitarians and other suffering-focused people think the value of the far future is negative.

This article contains an argument for time-discounted utilitarianism: http://effective-altruism.com/ea/d6/problems_and_solutions_in_infinite_ethics/. I'm sure there's a lot more literature on this, that... (read more)

4
[anonymous]
For something so important, it seems this question is hardly ever discussed. The only literature on the issue is a blog post? It seems like it's often taken for granted that x-risk reduction is net positive. I'd like to see more analysis on whether non-negative utilitarians should support x-risk reduction.
3
MichaelDello
Thanks Jesse, I definitely should also have said that I'm assuming preventing extinction is good. My broad position on this is that the future could be good, or it could be bad, and I'm not sure how likely each scenario is, or what the 'expected value' of the future is. Also agreed that utilitarianism isn't concerned with selfishness, but from an individual's perspective, I'm wondering if what Alex is doing in this case might be classed that way.

Examining the foundations of the practical reasoning used (and seemingly taken for granted) by many EAs seems highly relevant. Wish we saw more of this kind of thing.

Have you seen Brian Tomasik's work on 1) the potential harms of environmentalism for wild animals, and 2) the effects of climate change on wild animal suffering?

e.g. http://reducing-suffering.org/climate-change-and-wild-animals/ http://reducing-suffering.org/applied-welfare-biology-wild-animal-advocates-focus-spreading-nature/

You don't think directing thousands of dollars to effective animal charities has made any difference? Or spreading effectiveness-based thinking in the animal rights community (e.g. the importance of focusing on farm animals rather than, say, shelter animals)? Or promoting cellular agriculture and plant-based meats?

As for wild animal suffering: there are a few more than 5-10 people who care (the Reducing WAS FB group has 1813 members), but yes, the community is tiny. Why does that mean thinking about how to reduce WAS accomplishes nothing? Don't you thi... (read more)

What do you think of the effort to end factory farming? Or Tomasik et al's work on wild animal suffering? Do you think these increase rather than decrease suffering?

-4
Ant_Colony
I don't know how much they actually change. Everybody kind of agrees factory farms are evil but the behavior doesn't really seem to change much from that. Not sure the EA movement made much of a difference in this regard. As for wild animal suffering, there are ~5-10 people on the planet who care. The rest either doesn't care or cares about the opposite, conserving and/or expanding natural suffering. I am not aware of anything that could reasonably change that. Reducing "existential risk" will of course increase wild animal suffering as well as factory farming, and future equivalents.

I agree that EA as a whole doesn't have coherent goals (I think many EAs already acknowledge that it's a shared set of tools rather than a shared set of values). But why are you so sure that "it's going to cause much more suffering than it prevents"?

-1
Ant_Colony
That was in reference to both humanity and the EA movement, but it's trivially true for the EA movement itself. Assuming they have any kind of directed impact whatsoever, most of them want to reduce extinction risk to get humanity to the stars. We all know what that means for the total amount of future suffering. And yes, there will be some additional "flourishing" or pleasure/happiness/wellbeing, but it will not be optimized. It will not outweigh all the torture-level suffering. People like Toby Ord may use happiness as a rationalization to cause more suffering, but most of them never actually endorse optimizing it. People in EA generally gain status by decrying the technically optimal solutions to this particular optimization problem. There are exceptions of course, like Michael Dickens above. But I'm not even convinced they're doing their own values a favor by endorsing the EA movement at this point.

Thanks a lot! I've made the correction you pointed out.

I'm not objecting to having moral uncertainty about animals. I'm objecting to treating animal ethics as if it were a matter of taste. EAs have rigorous standards of argument when it comes to valuations of humans, but when it comes to animals they often seem to shrug and say "It depends on how much you value them" rather than discussing how much we should value them.

I didn't intend to actually debate what the proper valuation should be. But FWIW, the attitude that talking about how we should value animals "is likely to be emotionally cha... (read more)

1
MichaelDickens
FWIW, I agree that there probably exist objective facts about how to value different animals relative to each other, and people who claim to value 1 hour of human suffering the same as 1000 hours of chicken suffering are just plain wrong. But it's pretty hard to convince people of this, so I try to avoid making arguments that rely on claiming high parity of value between humans and non-human animals. If you're trying to make an argument, you should avoid making assumptions that many readers will disagree with, because then you'll just lose people.

I take issue with the statement "it depends greatly on how much you value a human compared to a nonhuman animal". Similar things are often said by EAs in discussions of animal welfare. This makes it seem as if the value one places on nonhumans is a matter of taste, rather than a claim subject to rational argument. The statement should read "it depends greatly on how much we ought to value a human compared to a nonhuman".

Imagine if EAs went around saying "it depends on how much you value an African relative to an American"... (read more)

6
Lila
Assuming moral anti-realism, as many EAs do, people can rationally disagree over values to an almost unlimited degree. Some strict definitions of utilitarianism would require one to equally value animal and human suffering, discounting for some metric of consciousness (though I actually roughly agree with Brian Tomasik that calling something conscious is a value judgment, not an empirical claim). But many EAs aren't strict utilitarians. EAs can have strong opinions about how the message of EA should be presented. For example, I think EA should discourage valuing the life of an American 1000x that of a foreigner, or valuing animal suffering at 0. But nitpicking over subjective values seems counterproductive.
8
Peter Wildeford
I feel like wading into this debate is likely to be emotionally charged and counterproductive, but I think it is reasonable to have a good deal of "moral uncertainty" when it comes to doing interspecies comparisons, whereas there'd be much less uncertainty (though still some) when comparing between humans (e.g., is a pregnant person worth more? Is a healthy young person worth more than an 80-year-old in a coma?).

For example, one leading view would be that one chicken has equal worth to one human. Another view would be to discount the chicken by its brain size relative to humans, which would imply a value of 300 chickens per human. There are also many views in between, and I'm uncertain which one to take. Sure, such moral calculus may seem very crude, but it does not judge the animal merely by species.

I'm not saying any experiment is necessarily useless, but if MFA is going to spend a bunch of resources on another study they should use methods that won't exaggerate effectiveness.

And it's not only that "one should attend to priors in interpretation" - one should specify priors beforehand and explicitly update conditional on the data.

Confidence intervals still don't incorporate prior information and so give undue weight to large effects.

2
CarlShulman
Sure, one should attend to priors in interpretation, but that doesn't make the experiment useless. If a pre-registered experiment reliably gives you a severalfold likelihood ratio, you can repeat it or scale it up and overcome significant prior skepticism (although limited by credence in hidden flaws).
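A toy illustration of how a severalfold likelihood ratio can overcome prior skepticism (the prior, likelihood ratio, and replication count below are invented for illustration):

```python
# Bayesian updating in odds form: posterior odds = prior odds * LR^n.
def posterior_prob(prior_prob, likelihood_ratio, n_replications=1):
    odds = prior_prob / (1 - prior_prob)
    odds *= likelihood_ratio ** n_replications
    return odds / (1 + odds)

print(posterior_prob(0.10, 5, 1))  # ~0.36 after one study with LR = 5
print(posterior_prob(0.10, 5, 2))  # ~0.74 after an independent replication
```

Starting from 10% credence that the effect is real, two independent studies each carrying a likelihood ratio of 5 push the posterior to roughly 74%, which is the sense in which repeating or scaling up a pre-registered design can swamp a skeptical prior.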

I would be especially wary of conducting more studies if we plan on trying to "prove" or "disprove" the effectiveness of ads with so dubious a tool as null hypothesis significance tests.

Even if in a new study we were to reject the null hypothesis of no effect, this would arguably still be pretty weak evidence in favor of the effectiveness of ads.

1
CarlShulman
What are you worried about here? The same studies will give confidence intervals on effect sizes, which are actionable, and reliable significance at a given sample size indicates an effect of a given magnitude.

As prohibitions on methods of animal exploitation - rather than just regulations which allow those forms of exploitation to persist if they're more "humane" - I think these are different than typical welfare reforms. As I say in the post, this is the position taken by abolitionist-in-chief Gary Francione in Rain Without Thunder.

Of course the line between welfare reform and prohibition is murky. You could argue that these are not, in fact, prohibitions on the relevant form of exploitation - namely, raising animals to be killed for food. But in... (read more)

I haven't seen much on welfare reforms in these industries in particular. In the 90s Sweden required that foxes on fur farms be able to express their natural behaviors, but this made fur farming economically unviable and it ended altogether...so I'm not sure what that tells us. Other than that, animals used in fur farming and cosmetics testing are/were subject to general EU animal welfare laws, and laws concerning farm and experimental animals, respectively.

I think welfare having no effect on abolition is a reasonable conclusion. I just want to argue that it isn't obviously counterproductive on the basis of this historical evidence.

Thanks for the comments!

"...we have evidence that welfare reforms lead to more welfare reforms, which might suggest someday they will get us to something close to animal rights, but I think Gary Francione's historical argument that we have had welfare reforms for two centuries without significant actual improvements is a bit stronger...."

My point is that welfare reforms have led not only to more welfare reforms, but prohibitions as well. Even if we disqualify bans on battery cages, veal crates, and gestation crates as prohibitions, there are sti... (read more)

3
zdgroff
All good points, thanks.

With the prohibitions on fur and cosmetic testing, do you know if they were preceded by welfare reforms in those respective industries or just in other industries? That could also be evidence for welfare reform leading to abolition, but I don't know that it was the case. It seems likely to me that the effect of welfare reforms on abolition is zero, neither positive nor negative, based on this evidence.

Those are all good points about potential biases. Though note that incrementalism and welfarism are not the same. Abolitionists achieved geographical increments in city and state abolition and personal increments in manumission.